Re: Extention String returning

2005-10-24 Thread Jp Calderone
On 24 Oct 2005 11:28:23 -0700, Tuvas [EMAIL PROTECTED] wrote:
I have been writing a program that is designed to return an 8 byte
string from C to Python. Occasionally one or more of these bytes will
be null, but the size of it will always be known. How can I write an
extention module that will return the correct bytes, and not just until
the null? I would think there would be a fairly  easy way to do this,
but, well... Thanks!

Use PyString_FromStringAndSize instead of PyString_FromString.
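
As a quick Python-level illustration of why the size matters (this is not the C API itself): a Python str carries an explicit length, so embedded NULs survive once the string is built with the full eight-byte size.

data = 'AB\x00CD\x00\x00E'
assert len(data) == 8                  # all eight bytes are there
assert data.split('\x00')[0] == 'AB'   # only a NUL-terminated view stops early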

Jean-Paul
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python gc performance in large apps

2005-10-22 Thread Jp Calderone
On Fri, 21 Oct 2005 16:13:09 -0400, Robby Dermody [EMAIL PROTECTED] wrote:

Hey guys (thus begins a book of a post :),

I'm in the process of writing a commercial VoIP call monitoring and
recording application suite in python and pyrex. Basically, this
software sits in a VoIP callcenter-type environment (complete with agent
phones and VoIP servers), sniffs voice data off of the network, and
allows users to listen into calls. It can record calls as well. The
project is about a year and 3 months in the making and lately the
codebase has stabilized enough to where it can be used by some of our
clients. The entire project has about 37,000 lines of python and pyrex
code (along with 1-2K lines of unrelated java code).

 [snip - it leaks memory]

One thing to consider is that the process may be growing in size, not because 
garbage objects are not being freed, but because objects which should be 
garbage are being held onto by application-level code.

len(gc.get_objects()) may be useful for determining if this is the case, and 
gc.get_objects() may be useful for discovering what kinds of objects are piling 
up.  These may give you a hint as to where to look to allow these objects to be 
released, if this is the problem.
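
A minimal sketch of that kind of bookkeeping (plain stdlib, nothing application-specific): snapshot the per-type object counts before and after exercising the suspected code path, and see which types keep growing.

import gc

def typeCounts():
    counts = {}
    for obj in gc.get_objects():
        # Old-style instances all show up as 'instance'; look at obj.__class__
        # if you need to tell them apart.
        name = type(obj).__name__
        counts[name] = counts.get(name, 0) + 1
    return counts

before = typeCounts()
# ... exercise the code path suspected of holding on to objects ...
after = typeCounts()
for name in after:
    growth = after[name] - before.get(name, 0)
    if growth:
        print name, growth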

Of course, it's also possible one of the libraries you are using is either 
leaking objects in this fashion, or for the extension modules, may just be 
leaking memory.  The above techniques may help you find an object leak, but 
they won't help you find a memory leak.  For this, you might give Valgrind a 
try (use the suppression file in Python CVS to get rid of the spew PyMalloc and 
friends generate).

Also, I can point out two things: Twisted's URL parsing extension leaked some 
memory in 1.3, but has been fixed since 2.0; and Nevow 0.4.1 made it easy to 
write applications that leaked several page objects per request, which has been 
fixed since 0.5.  If you're using either of these older versions, upgrading may 
fix your difficulties.

Hope this helps,

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: KeyboardInterrupt vs extension written in C

2005-10-22 Thread Jp Calderone
On 22 Oct 2005 22:02:46 +0200, Dieter Maurer [EMAIL PROTECTED] wrote:
Tamas Nepusz [EMAIL PROTECTED] writes on 20 Oct 2005 15:39:54 -0700:
 The library I'm working on
 is designed for performing calculations on large-scale graphs (~1
 nodes and edges). I want to create a Python interface for that library,
 so what I want to accomplish is that I could just type from igraph
 import * in a Python command line and then access all of the
 functionalities of the igraph library. Now it works, except the fact
 that if, for example, I start computing the diameter of a random graph
 of ~10 nodes and ~20 edges, I can't cancel it, because the
 KeyboardInterrupt is not propagated to the Python toplevel (or it isn't
 even generated until the igraph library routine returns).

Python installs a SIGINT handler that just notes that
such a signal was received. The note is handled during
bytecode execution. This way, Python handles the (dangerous)
asynchronous signal synchronously (which is much safer).
But, it also means that a signal during execution of your C extension
is only handled when it finished.

What you can do in your wrapper code:

   Temporarily install a new handler for SIGINT that
   uses longjmp to quit the C extension execution when
   the signal occurs.

   Note that longjmp is dangerous. Great care is necessary.

   It is likely that SIGINT occurrences will lead to big
   resource leaks (because your C extension will have no
   way to release resources when it gets quit with longjmp).


Note that swapcontext() is probably preferable to longjmp() in almost all 
circumstances.  In cases where it isn't, siglongjmp() definitely is.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: High Order Messages in Python

2005-10-22 Thread Jp Calderone
On 22 Oct 2005 14:12:16 -0700, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
I'm reading about high order messages in Ruby by Nat Pryce, and
thinking if it could be  util and if so, if it could be done in Python.
Someone already tried?

Here's an example of the idea, in Python:

def messageA():
    print 'Message A received!'

def messageB(otherMessage):
    print 'Message B received!  Sending some other message.'
    otherMessage()

messageB(messageA)

Since this is a basic feature of Python, we usually don't call them messages.  
Instead, functions or sometimes methods.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: High Order Messages in Python

2005-10-22 Thread Jp Calderone
On 22 Oct 2005 15:11:39 -0700, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Hum... I thnk you dont get the ideia: I'm not talking abou High Order
Functions.
What ho call High Order Methods is some like connecting some
'generic' methods created to do things like this:
claimants.where.retired?.do.receive_benefit 50
The 2nd and 3rd links that in the first post is the most relevant to
undestand the concept. Read this too:
http://www.metaobject.com/papers/Higher_Order_Messaging_OOPSLA_2005.pdf

These are just more involved applications of the same idea.  They're easily 
implemented in Python, using primitives such as HOF, if one desires.
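
For example, here is a toy sketch of that chain built out of nothing but __getattr__ and plain callables (Claimant, retired() and receive_benefit() are hypothetical, as in the paper's example):

class _Do(object):
    def __init__(self, items):
        self._items = items
    def __getattr__(self, name):
        def send(*args, **kw):
            # Forward the "message" to every item in the collection.
            return [getattr(item, name)(*args, **kw) for item in self._items]
        return send

class _Where(object):
    def __init__(self, items):
        self._items = items
    def __getattr__(self, name):
        # Keep only the items for which the named predicate method is true.
        return Collection([i for i in self._items if getattr(i, name)()])

class Collection(object):
    def __init__(self, items):
        self._items = items
    where = property(lambda self: _Where(self._items))
    do = property(lambda self: _Do(self._items))

# Collection(claimants).where.retired.do.receive_benefit(50)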

However, I don't see why one would want to write the above mish-mash, rather 
than:

for cl in claimaints:
    if cl.retired():
        cl.receive_benefit(50)

Or:

[cl.receive_benefit(50) for cl in claimaints if cl.retired()]

Or:

map(
    ClaimaintType.receive_benefit,
    filter(
        ClaimaintType.retired,
        claimaints),
    itertools.repeat(50))

Or:

claimaintGroup.disburse()

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write a loopin one line; process file paths

2005-10-18 Thread Jp Calderone
On 18 Oct 2005 14:56:32 -0700, Xah Lee [EMAIL PROTECTED] wrote:
is there a way to condense the following loop into one line?

There is.

exec('696d706f72742072652c206f732e706174680a0a696d6750617468733d5b75272f55736572732f742f7765622f506572696f6469635f646f736167655f6469722f6c616e63692f74342f6f682f4453434e323035396d2d732e6a7067272c0a75272f55736572732f742f7765622f506572696f6469635f646f736167655f6469722f6c616e63692f74342f6f682f4453434e323036326d2d732e6a7067272c0a75272f55736572732f742f7765622f506572696f6469635f646f736167655f6469722f6c616e63692f74342f6f682f4453434e323039376d2d732e6a7067272c0a75272f55736572732f742f7765622f506572696f6469635f646f736167655f6469722f6c616e63692f74342f6f682f4453434e323039396d2d732e6a7067272c0a75272f55736572732f742f7765622f49636f6e735f6469722f69636f6e5f73756d2e676966275d0a0a23206368616e67652074686520696d616765207061746820746f207468652066756c6c2073697a656420696d6167652c206966206974206578697374730a2320746861742069732c20696620696d61676520656e647320696e202d732e6a70672c2066696e64206f6e6520776974686f75742074686520272d73272e0a74656d703d696d6750617468735b3a5d0a696d6750617468733d5b5d0a666f72206d795061746820696e2074656d703a0a20202020703d6d79506174680a20202020286469724e616d652c2066696c654e616d6529203d206f732e706174682e73706c6974286d7950617468290a202020202866696c65426173654e616d652c2066696c65457874656e73696f6e293d6f732e706174682e73706c69746578742866696c654e616d65290a2020202069662872652e7365617263682872272d7324272c66696c65426173654e616d652c72652e5529293a0a202020202020202070323d6f732e706174682e6a6f696e286469724e616d652c66696c65426173654e616d655b303a2d325d29202b2066696c65457874656e73696f6e0a20202020202020206966206f732e706174682e657869737473287032293a20703d70320a20202020696d6750617468732e617070656e642870290a0a74656d703d5b5d0a7072696e7420696d6750617468730a'.decode('hex'))

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with cPickle for deserializing datetime.datetime instances

2005-10-14 Thread Jp Calderone
On Fri, 14 Oct 2005 01:25:27 -0500, Mingus Tsai [EMAIL PROTECTED] wrote:
Hello- please help with unpickling problem:

I am using Python version 2.3.4 with IDLE version 1.0.3 on a Windows
XPhome system.

My problem is with using cPickle to deserialize my pickled arrays of
datetime.datetime instances.  The following is the code I have written:

   import cPickle, datetime
   import Numeric

#the file below contains a serialized dict with arrays of datetime
#objects.  When these three statements run, the IDLE crashes!

   input1 = open('tsm2_outa','r')
   time1 = cPickle.load(input1)
   input1.close()

#the file below contains serialized dict with arrays of built-in objects
#it unpickles without any problem, when I omit the above unpickling
#operation.

   input2 = open('tsm2_outb','rb')
   data1 = cPickle.load(input2)
   input2.close()

My guess is that I need to somehow tell the pickle.load command that it
is loading datetime instances, but I have no idea how to do this.  Any
help would be much appreciated.

As I recall, Pickling Numeric arrays is a tricky business.  You probably 
shouldn't even try to do it.  Instead, pickle regular arrays or lists.
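
A minimal sketch of the workaround: pickle a plain list of the datetime objects instead of the Numeric array (rebuild the array afterwards if you really need one).

import cPickle, datetime

times = [datetime.datetime.now() for n in range(50)]
blob = cPickle.dumps(times)        # a plain list of datetimes pickles fine
restored = cPickle.loads(blob)
assert restored == times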

For anyone else who's interested, this can easily be reproduced without an 
existing pickle file:

[EMAIL PROTECTED]:~$ python
Python 2.4.2 (#2, Sep 30 2005, 21:19:01) 
[GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import Numeric, cPickle, datetime
>>> cPickle.loads(cPickle.dumps(Numeric.array([datetime.datetime.now() for n in range(50)])))
Segmentation fault
[EMAIL PROTECTED]:~$ 

Values smaller than 50 randomly mangle memory, but sometimes don't segfault the 
interpreter.  You can get exciting objects like instances of cPickle.Pdata or 
refcnt back from the loads() call in these cases.

So, the summary is, don't do this.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python reliability

2005-10-09 Thread Jp Calderone
On Sun, 9 Oct 2005 23:00:04 +0300 (EEST), Ville Voipio [EMAIL PROTECTED] 
wrote:
I would need to make some high-reliability software
running on Linux in an embedded system. Performance
(or lack of it) is not an issue, reliability is.

The piece of software is rather simple, probably a
few hundred lines of code in Python. There is a need
to interact with network using the socket module,
and then probably a need to do something hardware-
related which will get its own driver written in
C.

Threading and other more error-prone techniques can
be left aside, everything can run in one thread with
a poll loop.

The software should be running continously for
practically forever (at least a year without a reboot).
Is the Python interpreter (on Linux) stable and
leak-free enough to achieve this?


As a data point, I've had python programs run on linux for more than a year 
using both Python 2.1.3 and 2.2.3.  These were network apps, with both client 
and server functionality, using Twisted.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python reliability

2005-10-09 Thread Jp Calderone
On Mon, 10 Oct 2005 12:18:42 +1000, Steven D'Aprano [EMAIL PROTECTED] wrote:
George Sakkis wrote:

 Steven D'Aprano wrote:


On Sun, 09 Oct 2005 23:00:04 +0300, Ville Voipio wrote:


I would need to make some high-reliability software
running on Linux in an embedded system. Performance
(or lack of it) is not an issue, reliability is.

[snip]


The software should be running continously for
practically forever (at least a year without a reboot).
Is the Python interpreter (on Linux) stable and
leak-free enough to achieve this?

If performance is really not such an issue, would it really matter if you
periodically restarted Python? Starting Python takes a tiny amount of time:


 You must have missed or misinterpreted the The software should be
 running continously for practically forever part. The problem of
 restarting python is not the 200 msec lost but putting at stake
 reliability (e.g. for health monitoring devices, avionics, nuclear
 reactor controllers, etc.) and robustness (e.g. a computation that
 takes weeks of cpu time to complete is interrupted without the
 possibility to restart from the point it stopped).


Er, no, I didn't miss that at all. I did miss that it
needed continual network connections. I don't know if
there is a way around that issue, although mobile
phones move in and out of network areas, swapping
connections when and as needed.

But as for reliability, well, tell that to Buzz Aldrin
and Neil Armstrong. The Apollo 11 moon lander rebooted
multiple times on the way down to the surface. It was
designed to recover gracefully when rebooting unexpectedly:

http://www.hq.nasa.gov/office/pao/History/alsj/a11/a11.1201-pa.html


This reminds me of crash-only software:

  http://www.stanford.edu/~candea/papers/crashonly/crashonly.html

Which seems to have some merits.  I have yet to attempt to develop any large 
scale software explicitly using this technique (although I have worked on 
several systems that very loosely used this approach; eg, a server which 
divided tasks into two processes, with one restarting the other whenever it 
noticed it was gone), but as you point out, there's certainly precedent.
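
For what it's worth, that two-process arrangement is roughly this (a sketch, not that server's actual code; the worker path is made up):

import os, time

def supervise(path, args):
    while True:
        pid = os.spawnv(os.P_NOWAIT, path, args)
        os.waitpid(pid, 0)     # block until the worker exits or crashes
        time.sleep(1)          # brief backoff, then start a fresh worker

# supervise('/usr/bin/python', ['python', 'worker.py'])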

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda evaluation

2005-10-06 Thread Jp Calderone
On Thu, 06 Oct 2005 16:18:15 -0400, Joshua Ginsberg [EMAIL PROTECTED] wrote:
So this part makes total sense to me:

>>> d = {}
>>> for x in [1,2,3]:
...     d[x] = lambda y: y*x
...
>>> d[1](3)
9

Because x in the lambda definition isn't evaluated until the lambda is
executed, at which point x is 3.

Is there a way to specifically hard code into that lambda definition the
contemporary value of an external variable? In other words, is there a
way to rewrite the line d[x] = lambda y: y*x so that it is always the
case that d[1](3) = 3?

There are several ways, but this one involves the least additional typing:

>>> d = {}
>>> for x in 1, 2, 3:
...     d[x] = lambda y, x=x: y * x
... 
>>> d[1](3)
3

Who needs closures, anyway? :)
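
Another of those ways, for completeness, is to bind x with a factory function so each lambda closes over its own frame (a trivial sketch):

d = {}
def multiplier(x):
    return lambda y: y * x
for x in 1, 2, 3:
    d[x] = multiplier(x)
assert d[1](3) == 3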

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: updating local()

2005-10-05 Thread Jp Calderone


On Wed, 5 Oct 2005 18:47:06 +0200, Sybren Stuvel [EMAIL PROTECTED] wrote:
Flavio enlightened us with:
 Can anyone tell me why, if the following code works, I should not do
 this?

 def fun(a=1,b=2,**args):
     print 'locals:',locals()
     locals().update(args)
     print locals()

Because it's very, very, very insecure. What would happen if someone
found a way to call that function? It could replace any name in the
locals dictionary, including functions from __builtins__. In other
words: probably the whole program could be taken over by other code by
just one call to that function.


If I can call functions in your process space, I've already taken over your 
whole program.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to debug when Segmentation fault

2005-10-04 Thread Jp Calderone
On Tue, 4 Oct 2005 11:22:24 -0500, Michael Ekstrand [EMAIL PROTECTED] wrote:
On Tuesday 04 October 2005 11:13, Maksim Kasimov wrote:
 my programm sometime gives Segmentation fault message (no matter
 how long the programm had run (1 day or 2 weeks). And there is
 nothing in log-files that can points the problem. My question is how
 it possible to find out where is the problem in the code? Thanks for
 any help.

What extension modules are you using?

I've never seen stock Python (stable release w/ only included modules)
segfault, but did see a segfault with an extension module I was using
the other week (lxml IIRC, but I'm not sure).


[EMAIL PROTECTED]:~$ python
Python 2.4.2c1 (#2, Sep 24 2005, 00:48:19) 
[GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.setrecursionlimit(1e9)
__main__:1: DeprecationWarning: integer argument expected, got float
>>> (lambda f: f(f))(lambda f: f(f))
Segmentation fault
[EMAIL PROTECTED]:~$ python
Python 2.4.2c1 (#2, Sep 24 2005, 00:48:19) 
[GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class foo(type):
...     def mro(self):
...         return [float]
... 
>>> class bar:
...     __metaclass__ = foo
... 
Segmentation fault
[EMAIL PROTECTED]:~$ python
Python 2.4.2c1 (#2, Sep 24 2005, 00:48:19) 
[GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dl
>>> dl.open('libc.so.6').call('memcpy', 0, 0, 1024)
Segmentation fault
[EMAIL PROTECTED]:~$ 

Though to be honest, even I consider the 3rd example a bit of a cheat ;)

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dynamical loading of modules

2005-10-03 Thread Jp Calderone


On Mon, 03 Oct 2005 21:52:21 +0200, Jacob Kroon [EMAIL PROTECTED] wrote:
Hi, I'm having some problems with implementing dynamical module loading.
First let me
describe the scenario with an example:

modules/
fruit/
__init__.py
apple.py
banana.py

apple.py defines a class 'Apple', banana defines a class 'Banana'. The
problem lies in the
fact that I want to be able to just drop a new .py-file, for instance
peach.py, and not change
__init__.py, and it should automatically pickup the new file in
__init__.py. I've come halfway
by using some imp module magic in __init__.py, but the problem I have is
that the instantiated
objects class-names becomes fruit.apple.Apple/fruit.banana.Banana, whild
I want it to be
fruit.Apple/fruit.Banana.

Is there a smarter way of accomplishing what I am trying to do ?
If someone could give me a small example of how to achieve this I would
be very grateful.

The __module__ attribute of class objects is mutable.  I don't understand *why* 
this makes a difference, though.  The class's name is a pointer to where it is 
defined: this is useful because it saves a lot of grepping, and unambiguously 
tells the reader where the class came from.  If you start making it mean 
something else, you'll end up confusing people.  If you just want a pretty 
name, use something /other/ than the class's fully qualified Python name.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Feature Proposal: Sequence .join method

2005-09-30 Thread Jp Calderone
On Fri, 30 Sep 2005 09:38:25 -0700, Michael Spencer [EMAIL PROTECTED] wrote:
Terry Reedy wrote:
 David Murmann [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]

def join(sep, seq):
    return reduce(lambda x, y: x + sep + y, seq, type(sep)())

damn, i wanted too much. Proper implementation:

def join(sep, seq):
    if len(seq):
        return reduce(lambda x, y: x + sep + y, seq)
    return type(sep)()

but still short enough


 For general use, this is both too general and not general enough.

 If len(seq) exists then seq is probably reiterable, in which case it may be
 possible to determine the output length and preallocate to make the process
 O(n) instead of O(n**2).  I believe str.join does this.  A user written
 join for lists could also.  A tuple function could make a list first and
 then tuple(it) at the end.

 If seq is a general (non-empty) iterable, len(seq) may raise an exception
 even though the reduce would work fine.

 Terry J. Reedy



For the general iterable case, you could have something like this:

  >>> def interleave(sep, iterable):
  ...     it = iter(iterable)
  ...     next = it.next()
  ...     try:
  ...         while 1:
  ...             item = next
  ...             next = it.next()
  ...             yield item
  ...             yield sep
  ...     except StopIteration:
  ...         yield item
  ...
  >>> list(interleave(100,range(10)))
  [0, 100, 1, 100, 2, 100, 3, 100, 4, 100, 5, 100, 6, 100, 7, 100, 8, 100, 9]

but I can't think of a use for it ;-)

I have this version:

def interlace(x, i):
    """interlace(x, i) -> i0, x, i1, x, ..., x, iN"""

    i = iter(i)
    try:
        yield i.next()
    except StopIteration:
        return
    for e in i:
        yield x
        yield e

And I use it for things like interleave(", ", [foo, bar, baz]), where bar 
is not a string, but can be handled along with strings by a lower-level chunk 
of code.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Metaclasses, decorators, and synchronization

2005-09-25 Thread Jp Calderone
On Sun, 25 Sep 2005 23:30:21 -0400, Victor Ng [EMAIL PROTECTED] wrote:
You could do it with a metaclass, but I think that's probably overkill.

It's not really efficient as it's doing test/set of an RLock all the
time, but hey - you didn't ask for efficient.  :)

There's a race condition in this version of synchronized which can allow two or 
more threads to execute the synchronized function simultaneously.


  1 import threading
  2
  3 def synchronized(func):
  4     def innerMethod(self, *args, **kwargs):
  5         if not hasattr(self, '_sync_lock'):

Imagine two threads reach the above test at the same time - they both discover 
there is no RLock protecting this function.  They both entire this suite to 
create one.

  6             self._sync_lock = threading.RLock()

Now one of them zooms ahead, creating the RLock and acquiring it on the next 
line.  The other one finally manages to get some runtime again afterwards and 
creates another RLock, clobbering the first.

  7         self._sync_lock.acquire()

Now it proceeds to this point and acquires the newly created RLock.  Woops.  
Two threads now think they are allowed to run this function.

  8         print 'acquired %r' % self._sync_lock
  9         try:
 10             return func(self, *args, **kwargs)

And so they do.

 11         finally:
 12             self._sync_lock.release()
 13             print 'released %r' % self._sync_lock

Of course, when the second gets to the finally suite, it will explode, since it 
will be releasing the same lock the first thread to get here has already 
released.

 14     return innerMethod
 15
 16 class Foo(object):
 17     @synchronized
 18     def mySyncMethod(self):
 19         print "blah"
 20
 21
 22 f = Foo()
 23 f.mySyncMethod()

To avoid this race condition, you need to serialize lock creation.  This is 
exactly what Twisted's implementation does.  You can read that version at 
http://svn.twistedmatrix.com/cvs/trunk/twisted/python/threadable.py?view=markup&rev=13745.
The code is factored somewhat differently: the functionality is presented as 
pre- and post-execution hooks, and there is no function decorator.  The concept is 
the same, however.
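
For illustration only (this is a sketch, not Twisted's code), one way to serialize lock creation is to guard it with a single module-level lock, so two threads can never race to create an instance's RLock:

import threading

_creationLock = threading.Lock()

def synchronized(func):
    def innerMethod(self, *args, **kwargs):
        _creationLock.acquire()
        try:
            if not hasattr(self, '_sync_lock'):
                self._sync_lock = threading.RLock()
        finally:
            _creationLock.release()
        self._sync_lock.acquire()
        try:
            return func(self, *args, **kwargs)
        finally:
            self._sync_lock.release()
    return innerMethod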

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threading, real or simulated?

2005-09-21 Thread Jp Calderone
On Wed, 21 Sep 2005 18:23:33 -0500, Sam [EMAIL PROTECTED] wrote:
I'm using Python 2.3.5 with pygtk 2.4.1, and I'm using the second threading 
approach from pygtk's FAQ 20.6 - invoking gtk.gdk.threads_init(), and 
wrapping all gtk/gdk function calls with 
gtk.threads_enter()/gtk.threads_leave()

I start a thread, via thread.Threading.start().  The thread then calls a 
particularly time consuming C function, from an extension module.  I find 
that when the thread is running the C code, the GUI hangs even though I'm 
not inside the threads_enter/threads_leave territory.


  Does the extension module release the GIL?  It sounds like it does not.  Of 
course, there are a dozen other mistakes that could be made which would have 
roughly this symptom.  It's difficult to say which is the problem without 
actually seeing any code.

It looks like thread.Threading() only simulates threading, by having the 
python interpreter multiplex between running threads.  Is real threading 
possible, so that I do something time-consuming in the thread, without 
hanging the GUI?


Assuming you mean threading.Thread, this is a native thread.  It is not a 
simulation.  Something else is going wrong.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Crypto.Cipher.ARC4, bust or me doing something wrong?

2005-09-20 Thread Jp Calderone
On Tue, 20 Sep 2005 16:08:19 +0100, Michael Sparks [EMAIL PROTECTED] wrote:
Hi,


I suspect this is a bug with AMK's Crypto package from
http://www.amk.ca/python/code/crypto , but want to
check to see if I'm being dumb before posting a bug
report.

I'm looking at using this library and to familiarise myself writing
small tests with each of the ciphers. When I hit Crypto.Cipher.ARC4 I've
found that I can't get it to decode what it encodes. This might be a
case of PEBKAC, but I'm trying the following:

>>> from Crypto.Cipher import ARC4 as cipher
>>> key = 
>>> obj = cipher.new(key)
>>> obj.encrypt("This is some random text")
')f\xd4\xf6\xa6Lm\x9a%}\x8a\x95\x8ef\x00\xd6:\x12\x00!\xf3k\xafX'
>>> X=_
>>> X
')f\xd4\xf6\xa6Lm\x9a%}\x8a\x95\x8ef\x00\xd6:\x12\x00!\xf3k\xafX'
>>> obj.decrypt(X)
'\x87\xe1\x83\xc1\x93\xdb\xed\x93U\xe4_\x92}\x9f\xdb\x84Y\xa3\xd4b\x9eHu~'

Clearly this decode doesn't match the encode. Me being dumb or bug?

Any comments welcome :)


You need two ARC4 instances.  Performing any operation alters the internal 
state (as it is a stream cipher), which is why your bytes did not come out 
intact.

>>> import Crypto.Cipher.ARC4 as ARC4
>>> o = ARC4.new('hello monkeys')
>>> p = ARC4.new('hello monkeys')
>>> p.decrypt(o.encrypt('super secret message of doom'))
'super secret message of doom'

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: threading.Thread vs. signal.signal

2005-09-17 Thread Jp Calderone
On Sat, 17 Sep 2005 19:24:54 -0400, Jack Orenstein [EMAIL PROTECTED] wrote:
I'd like to create a program that invokes a function once a second,
and terminates when the user types ctrl-c. So I created a signal
handler, created a threading.Thread which does the invocation every
second, and started the thread. The signal handler seems to be
ineffective. Any idea what I'm doing wrong? This is on Fedora FC4 and
Python 2.4.1. The code appears below.

If I do the while ... sleep in the main thread, then the signal
handler works as expected. (This isn't really a satisfactory
implementation because the function called every second might
take a significant fraction of a second to execute.)

Jack Orenstein


import sys
import signal
import threading
import datetime
import time

class metronome(threading.Thread):
    def __init__(self, interval, function):
        threading.Thread.__init__(self)
        self.interval = interval
        self.function = function
        self.done = False

    def cancel(self):
        print ' cancel'
        self.done = True

    def run(self):
        while not self.done:
            time.sleep(self.interval)
            if self.done:
                print ' break!'
                break
            else:
                self.function()

def ctrl_c_handler(signal, frame):
    print ' ctrl c'
    global t
    t.cancel()
    sys.stdout.close()
    sys.stderr.close()
    sys.exit(0)

signal.signal(signal.SIGINT, ctrl_c_handler)

def hello():
    print datetime.datetime.now()

t = metronome(1, hello)
t.start()

The problem is that you allowed the main thread to complete.  No longer 
running, it can no longer process signals.  If you add something like this to 
the end of the program, you should see the behavior you wanted:

while not t.done:
    time.sleep(1)

Incidentally, the last 3 lines of ctrl_c_handler aren't really necessary.

That said, here's a simpler version of the same program, using Twisted:

import datetime
from twisted.internet import reactor, task

def hello():
    print datetime.datetime.now()

task.LoopingCall(hello).start(1, now=False)
reactor.run()

Hope this helps!

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Can someone explain what I've done wrong...

2005-09-17 Thread Jp Calderone
On Sun, 18 Sep 2005 02:10:50 +0100, Jason [EMAIL PROTECTED] wrote:
Hi,

I'm following a tutorial about classes, and have created the following
(well, copied it from the manual buy added my own and wifes names)...

class Person:
    population=0

    def __init__(self,name):
        self.name=name
        print '(Initialising %s)' % self.name
        Person.population += 1

    def __del__(self):
        print "%s says bye." % self.name

        Person.population -= 1

        if Person.population == 0:
            print "I am the last one"
        else:
            print "There are still %d people left." % Person.population

    def sayHi(self):
        '''Greeting by the person.

        That's all it does.'''
        print "Hi, my name is %s" % self.name

    def howMany(self):
        if Person.population==1:
            print "I am on the only person here."
        else:
            print "We have %d persons here." % Person.population

Jason=Person("Jason")
Jason.sayHi()
Jason.howMany()

Sophie=Person("Sophie")
Sophie.sayHi()
Sophie.howMany()

Jason.sayHi()

The code, when run, should produce the following...

Hi, my name is Jason.
I am the only person here.
(Initializing Sophie)
Hi, my name is Sophie.
We have 2 persons here.
Hi, my name is Jason.
We have 2 persons here.
Jason says bye.
There are still 1 people left.
Sophie says bye.
I am the last one.

But what I actually get is...

(Initialising Jason)
Hi, my name is Jason
I am on the only person here.
(Initialising Sophie)
Hi, my name is Sophie
We have 2 persons here.
Hi, my name is Jason
We have 2 persons here.
Jason says bye.
There are still 1 people left.
Sophie says bye.
Exception exceptions.AttributeError: 'NoneType' object has no attribute
'population' in <bound method Person.__del__ of <__main__.Person instance at
0x0097B530>> ignored

I've looked through the code but can't find anything obvious.

I also want to apologise if this isn't the write newsgroup to post on,
but it's the only one I know of.  IF anyone knows a good newsgroup, I'd
appreciate it.

TIA

The __del__ method is not a reliable cleanup mechanism.  It runs when an object 
is garbage collected, but garbage collection is unpredictable and not 
guaranteed to ever occur.

In your particular case, __del__ is running during interpreter shut down.  
Another part of interpreter shutdown is to set the attributes of all modules to 
None.  Your code executes after this, tries to change None.population (since 
Person is now None), and explodes.

Your best bet is not to use __del__ at all.  Instead, add a method to be 
invoked when a person is going away, and then invoke it.
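
Something along these lines (the method name is made up):

class Person:
    population = 0

    def __init__(self, name):
        self.name = name
        Person.population += 1

    def goodbye(self):
        # Explicit cleanup, called by the code that owns the object,
        # rather than left to the garbage collector.
        print "%s says bye." % self.name
        Person.population -= 1

jason = Person("Jason")
jason.goodbye()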

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sockets: code works locally but fails over LAN

2005-08-31 Thread Jp Calderone
On 31 Aug 2005 06:03:00 -0700, n00m [EMAIL PROTECTED] wrote:
import socket, thread
host, port = '192.168.0.3', 1434
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.connect((host, 1433))
s1.bind((host, port))
s1.listen(1)
cn, addr = s1.accept()

def VB_SCRIPT():
    while 1:
        data = cn.recv(4096)
        if not data: return
        s2.send(data)
        print 'VB_SCRIPT:' + data + '\n\n'

def SQL_SERVER():
    while 1:
        data = s2.recv(4096)
        if not data: return
        cn.send(data)
        print 'SQL_SERVER:' + data + '\n\n'

thread.start_new_thread(VB_SCRIPT,())
thread.start_new_thread(SQL_SERVER,())

This is about the same as:

mktap portforward --port 1434 --host 192.168.0.3 --dest_port 1433
twistd -f portforward.tap

You'll find the code behind these two commands here:

http://cvs.twistedmatrix.com/cvs/trunk/twisted/tap/portforward.py?view=markup&rev=13278

and here:

http://cvs.twistedmatrix.com/cvs/trunk/twisted/protocols/portforward.py?view=markup&rev=12914

And of course, the main Twisted site is http://twistedmatrix.com/.

Some differences between portforward.tap and your code include:

portforward.tap will accept multiple connections, rather than just one.  
portforward.tap won't print out all the bytes it receives (I assume this is 
just for debugging purposes anyway - if not, a simple modification will cause 
it to do this).  portforward.tap won't non-deterministically drop traffic, 
since Twisted checks the return value of send() and properly re-transmits 
anything which has not actually been sent.

Hope this helps,

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Basic Server/Client socket pair not working

2005-08-29 Thread Jp Calderone
On Mon, 29 Aug 2005 21:30:51 +0200, Michael Goettsche [EMAIL PROTECTED] wrote:
Hi there,

I'm trying to write a simple server/client example. The client should be able
to send text to the server and the server should distribute the text to all
connected clients. However, it seems that only the first entered text is sent
and received. When I then get prompted for input again and press return,
nothing gets back to me. Any hints on what I have done would be very much
appreciated!

You aren't handling readiness notification properly.  Twisted is a good way to 
not have to deal with all the niggling details of readiness notification.  I 
haven't trotted out this example in a while, and it's /almost/ appropriate here 
;) so...

http://www.livejournal.com/users/jcalderone/2660.html

Of course, that's highly obscure and not typical of a Twisted application.  
Nevertheless, I believe it satisfies your requirements.  You might want to take 
a look at http://twistedmatrix.com/projects/core/documentation/howto/index.html 
for a gentler introduction, though.  In particular, the client and server 
HOWTOs.

I'll comment on your code below.


Here's my code:

 SERVER ##
import socket
import select

mySocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mySocket.bind(('', 1))
mySocket.listen(1)

clientlist = []

while True:
    connection, details = mySocket.accept()
    print 'We have opened a connection with', details
    clientlist.append(connection)
    readable = select.select(clientlist, [], [])
    msg = ''
    for i in readable[0]:

You'll need to monitor sockets for disconnection.  The way you do this varies 
between platforms.  On POSIX, a socket will become readable but reading from it 
will return ''.  On Win32, a socket will show up in the exceptions set (which 
you are not populating).  With the below loop, your server will go into an 
infinite loop whenever a client disconnects, because the chunk will be empty 
and the message will never get long enough to break the loop.

        while len(msg) < 1024:
            chunk = i.recv(1024 - len(msg))
            msg = msg + chunk

Additionally, select() has only told you that at least /one/ recv() call will 
return without blocking.  Since you call recv() repeatedly, each time through 
the loop after the first has the potential to block indefinitely, only 
returning when more bytes manage to arrive from the client.  This is most 
likely the cause of the freezes you are seeing.  Instead, call recv() only once 
and buffer up to 1024 (if this is really necessary - I do not think it is) over 
multiple iterations through the outermost loop (the while True: loop).


    for i in clientlist:
        totalsent = 0
        while totalsent < 1024:
            sent = i.send(msg)
            totalsent = totalsent + sent

This isn't quite right either - though it's close.  You correctly monitor how 
much of the message you have sent to the client, since send() may not send the 
entire thing.  However there are two problems.  One is trivial, and probably 
just a think-o: each time through the loop, you re-send `msg', when you 
probably meant to send `msg[totalsent:]' to avoid duplicating the beginning of 
the message and losing the end of it.  The other problem is that the send() 
call may block.  You need to monitor each socket for write-readiness (the 2nd 
group of sockets to select()) and only call send() once for each time a socket 
appears as writable.
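
For what it's worth, the slicing part of the fix looks like this (a sketch only; it does not address the blocking problem, which still needs the writable set from select()):

totalsent = 0
while totalsent < len(msg):
    sent = i.send(msg[totalsent:])
    totalsent = totalsent + sent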


## CLIENT 
import socket
import select

socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
socket.connect(("127.0.0.1", 1))

while True:
    text = raw_input("Du bist an der Reihe")
    text = text + ((1024 - len(text)) * ".")
    totalsent = 0
    while totalsent < len(text):
        sent = socket.send(text)
        totalsent = totalsent + sent

Similar problem here as in the server-side send() loop - 
`send(text[totalsent:])' instead of sending just `text'.


    msg = ''
    while len(msg) < 1024:
        chunk = socket.recv(1024 - len(msg))
        msg = msg + chunk

This is problematic too, since it means you will only be able to send one 
message for each message received from the server, and vice versa.  Most chat 
sessions don't play out like this.


    print msg

I encourage you to take a look at Twisted.  It takes care of all these little 
details in a cross-platform manner, allowing you to focus on your unique 
application logic, rather than solving the same boring problems that so many 
programmers before you have solved.

Hope this helps,

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python and ajax

2005-08-29 Thread Jp Calderone
On Mon, 29 Aug 2005 12:04:46 -0700 (PDT), Steve Young [EMAIL PROTECTED] wrote:
Hi, I was wondering if anybody knew of any good
tutorial/example of AJAX/xmlhttprequest in python.
Thanks.


There's a short example of Nevow's LivePage online here: 
http://divmod.org/svn/Nevow/trunk/examples/livepage/livepage.py

It's not a tutorial by itself, but if you poke around some of the other 
examples and read http://divmod.org/projects/nevow and some of the documents it 
references, you should be able to figure things out.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using select on a unix command in lieu of signal

2005-08-29 Thread Jp Calderone
On 29 Aug 2005 17:57:34 -0700, rh0dium [EMAIL PROTECTED] wrote:
So here's how I solved this..  It's seems crude - but hey it works.
select not needed..

    def runCmd( self, cmd, timeout=None ):
        self.logger.debug("Initializing function %s - %s" %
                          (sys._getframe().f_code.co_name, cmd))
        command = cmd + "\n"

        child = popen2.Popen3(command)
        t0 = time.time()

        out = None
        while time.time() - t0 < timeout:
            if child.poll() != -1:
                self.logger.debug("Command %s completed succesfully" % cmd)
                out = child.poll()
                results = "".join(child.fromchild.readlines())
                results = results.rstrip()
                break
            print "Still waiting..", child.poll(), time.time() - t0, t0
            time.sleep(.5)

        if out == None:
            self.logger.warning("Command: %s failed!" % cmd)
            kill = os.kill(child.pid, 9)
            self.logger.debug("Killing command %s - Result: %s" % (cmd, kill))
            out = results = None

        else:
            self.logger.debug("Exit: %s Reullts: %s" % (out, results))

        child.tochild.close()
        child.fromchild.close()
        return out, results

Comments..


Here's how I'd do it...


from twisted.internet import reactor, protocol, error, defer

class PrematureTermination(Exception):
    """Indicates the process exited abnormally, either by receiving an
    unhandled signal or with a non-zero exit code.
    """

class TimeoutOutputProcessProtocol(protocol.ProcessProtocol):
    timeoutCall = None
    onCompletion = None

    def __init__(self, onCompletion, timeout=None):
        # Take a Deferred which we will use to signal completion (successful
        # or otherwise), as well as an optional timeout, which is the maximum
        # number of seconds (may include a fractional part) for which we will
        # await the process' completion.
        self.onCompletion = onCompletion
        self.timeout = timeout

    def connectionMade(self):
        # The child process has been created.  Set up a buffer for its output,
        # as well as a timer if we were given a timeout.
        self.output = []
        if self.timeout is not None:
            self.timeoutCall = reactor.callLater(
                self.timeout, self._terminate)

    def outReceived(self, data):
        # Record some data from the child process.  This will be called
        # repeatedly, possibly with a large amount of data, so we use a list
        # to accumulate the results to avoid quadratic string-concatenation
        # behavior.  If desired, this method could also extend the timeout:
        # since it is producing output, the child process is clearly not hung;
        # for some applications it may make sense to give it some leeway in
        # this case.  If we wanted to do this, we'd add lines to this effect:
        # if self.timeoutCall is not None:
        #     self.timeoutCall.delay(someNumberOfSeconds)
        self.output.append(data)

    def _terminate(self):
        # Callback set up in connectionMade - if we get here, we've run out of
        # time.  Error-back the waiting Deferred with a TimeoutError including
        # the output we've received so far (in case the application can still
        # make use of it somehow) and kill the child process (rather
        # forcefully - a nicer implementation might want to start with a
        # gentler signal and set up another timeout to try again with KILL).
        self.timeoutCall = None
        self.onCompletion.errback(error.TimeoutError(''.join(self.output)))
        self.onCompletion = None
        self.output = None
        self.transport.signalProcess('KILL')

    def processEnded(self, reason):
        # Callback indicating the child process has exited.  If the timeout
        # has not expired and the process exited normally, callback the
        # waiting Deferred with all our results.  If we did time out, nothing
        # more needs to be done here since the Deferred has already been
        # errored-back.  If we exited abnormally, error-back the Deferred in a
        # different way indicating this.
        if self.onCompletion is not None:
            # We didn't time out
            if self.timeoutCall is not None:
                self.timeoutCall.cancel()
                self.timeoutCall = None

            if reason.check(error.ProcessTerminated):
                # The child exited abnormally
                self.onCompletion.errback(
                    PrematureTermination(reason, ''.join(self.output)))
            else:
                # Success!  Pass on our output.
                self.onCompletion.callback(''.join(self.output))

            # Misc. cleanup
            self.onCompletion = None
            self.output = None

def runCmd(executable, args, timeout=None, **kw):
    d = defer.Deferred()
    p = TimeoutOutputProcessProtocol(d, timeout)
    reactor.spawnProcess(p, executable, args, **kw)
    return d
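
A hypothetical usage sketch, just to show the shape of the API: run /bin/ls with a five second limit and print whatever comes back.

def printResult(output):
    print output

def printError(failure):
    print 'command failed:', failure

d = runCmd('/bin/ls', ['ls', '-l'], timeout=5)
d.addCallbacks(printResult, printError)
d.addBoth(lambda ignored: reactor.stop())
reactor.run()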


Re: Creating a graphical interface on top of SSH. How?

2005-08-16 Thread Jp Calderone
On 16 Aug 2005 07:10:25 -0700, John F. [EMAIL PROTECTED] wrote:
I want to write a client app in Python using wxWindows that connects to
my FreeBSD server via SSH (using my machine account credentials) and
runs a python or shell script when requested (by clicking a button for
instance).

Can someone give me some advice on how to create a graphical shell
per se?

tkconch might be interesting to look at.  
http://www.twistedmatrix.com/projects/conch/
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Terminate a thread that doesn't check for events

2005-08-02 Thread Jp Calderone
On Tue, 2 Aug 2005 09:51:31 -0400, Liu Shuai [EMAIL PROTECTED] wrote:
Can someone please comment on this?

 [snip - how to stop a thread without its cooperation?]

There's no way to do this with threads, sorry.

Perhaps you could use a child process, instead.  Those are typically easy to 
terminate at arbitrary times.
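
Roughly this sort of thing (a POSIX-only sketch; worker.py is a stand-in for whatever script does the actual work):

import os, signal, time

pid = os.spawnlp(os.P_NOWAIT, 'python', 'python', 'worker.py')
time.sleep(5)                   # let it run for a while
os.kill(pid, signal.SIGTERM)    # stop it whenever you decide to
os.waitpid(pid, 0)              # reap the child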

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simple Problem

2005-07-24 Thread Jp Calderone
On 24 Jul 2005 18:14:13 -0700, ncf [EMAIL PROTECTED] wrote:
I know I've seen this somewhere before, but does anyone know what the
function to escape a string is? (i.e., encoding newline to \n and a
chr(254) to \xfe) (and visa-versa)

Thanks for helping my ignorance :P

Python 2.4.1 (#2, Mar 30 2005, 21:51:10) 
[GCC 3.3.5 (Debian 1:3.3.5-8ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> '\n\xfe'.encode('string-escape')
'\\n\\xfe'
>>> '\\n\\xfe'.decode('string-escape')
'\n\xfe'
>>> 

  Introduced in Python 2.3

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need to interrupt to check for mouse movement

2005-07-21 Thread Jp Calderone
On Thu, 21 Jul 2005 00:51:45 -0400, Christopher Subich [EMAIL PROTECTED] 
wrote:
Jp Calderone wrote:

 In the particular case of wxWidgets, it turns out that the *GUI* blocks
 for long periods of time, preventing the *network* from getting
 attention.  But I agree with your position for other toolkits, such as
 Gtk, Qt, or Tk.

Wow, I'm not familiar with wxWidgets; how's that work?

wxWidgets' event loop doesn't differentiate between two unrelated (but similar 
sounding) concepts: blocking arbitrary input from the user (as in the case of 
modal dialogs) and blocking execution of code.

When you pop up a modal dialog, your code will not get another chance to run 
until the user dismisses it.  Similarly, as long as a menu is open, your code 
will not get to run.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need to interrupt to check for mouse movement

2005-07-21 Thread Jp Calderone
On 20 Jul 2005 22:06:31 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Christopher Subich [EMAIL PROTECTED] writes:
  In the particular case of wxWidgets, it turns out that the *GUI*
  blocks for long periods of time, preventing the *network* from
  getting attention.  But I agree with your position for other
  toolkits, such as Gtk, Qt, or Tk.

 Wow, I'm not familiar with wxWidgets; how's that work?

Huh?  It's pretty normal, the gui blocks while waiting for events
from the window system.  I expect that Qt and Tk work the same way.

But not Gtk? :)  I meant what I said: wxWidgets behaves differently in this 
regard than Gtk, Qt, and Tk.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need to interrupt to check for mouse movement

2005-07-21 Thread Jp Calderone
On Thu, 21 Jul 2005 05:42:32 -, Donn Cave [EMAIL PROTECTED] wrote:
Quoth Paul Rubin http://[EMAIL PROTECTED]:
| Christopher Subich [EMAIL PROTECTED] writes:
|   In the particular case of wxWidgets, it turns out that the *GUI*
|   blocks for long periods of time, preventing the *network* from
|   getting attention.  But I agree with your position for other
|   toolkits, such as Gtk, Qt, or Tk.
| 
|  Wow, I'm not familiar with wxWidgets; how's that work?
|
| Huh?  It's pretty normal, the gui blocks while waiting for events
| from the window system.  I expect that Qt and Tk work the same way.

In fact anything works that way, that being the nature of I/O.
But usually there's a way to add your own I/O source to be
dispatched along with the UI events -- the toolkit will for
example use select() to wait for X11 socket I/O, so it can also
respond to incoming data on another socket, provided along with a
callback function by the application.

Am I hearing that wxWindows or other popular toolkits don't provide
any such feature, and need multiple threads for this reason?


Other popular toolkits do.  wxWindows doesn't.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need to interrupt to check for mouse movement

2005-07-21 Thread Jp Calderone
On Thu, 21 Jul 2005 02:33:05 -0400, Peter Hansen [EMAIL PROTECTED] wrote:
Jp Calderone wrote:
 In the particular case of wxWidgets, it turns out that the *GUI* blocks
 for long periods of time, preventing the *network* from getting
 attention.  But I agree with your position for other toolkits, such as
 Gtk, Qt, or Tk.

Are you simply showing that there are two points of view here, that one
can look at the wx main loop as being blocking, waiting for I/O, even
though it is simply doing asynchronous event-driven processing the same
as Twisted?  Or am I missing something?  Allowing for the fact that wx
blocks, not just for long periods of time, but *indefinitely* (as long
as no events are arriving) I still don't see how that makes it different
from Twisted or from any other typical GUI framework, which do exactly
the same thing.  (And since there is even a wxPython main loop
integrated with and provided in Twisted, surely you aren't arguing that
what wx does is somehow unusual or bad.)

Providing wx support in Twisted has been orders of magnitude more difficult 
than providing Tk, Qt, or Gtk support has been.  And wxsupport and wxreactor 
are each broken in slightly different ways, so I wouldn't say we've been 
successful, either.

Blocking inside the mainloop while waiting for events is fine.  It's blocking 
elsewhere that is problematic.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Buffering problem using subprocess module

2005-07-21 Thread Jp Calderone
On 21 Jul 2005 06:14:25 -0700, Dr. Who [EMAIL PROTECTED] wrote:
I am using the subprocess module in 2.4.  Here's the fragment:

bufcaller.py:
   import sys, subprocess
   proc = subprocess.Popen('python bufcallee.py', bufsize=0, shell=True,
                           stdout=subprocess.PIPE)
   for line in proc.stdout:
       sys.stdout.write(line)

bufcallee.py:
   import time
   print 'START'
   time.sleep(10)
   print 'STOP'

Although the documentation says that the output should be unbuffered
(bufsize=0) the program (bufcaller) pauses for 10 seconds and then
prints START immediately followed by 'STOP' rather than pausing 10
seconds in between them.  Note that I made bufcallee a Python script
for ease of the example but in the real-world problem I am trying to
solve it is simply an executable.

Any ideas?

There are a few places buffering can come into play.  The bufsize parameter to 
Popen() controls buffering on the reading side, but it has no effect on 
buffering on the writing side.  If you add a sys.stdout.flush() after the 
prints in the child process, you should see the bytes show up immediately.  
Another possibility is to start Python in unbuffered mode (pass the -u flag, or 
set PYTHONUNBUFFERED in the environment), but obviously this only applies to 
Python programs.  Still another possibility (generally the nicest) is to use a 
PTY instead of a pipe: when the C library sees stdout is a pipe, it generally 
decides to put output into a different buffering mode than when it sees stdout 
is a pty.  I'm not sure how you use ptys with the subprocess module.
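
For the first suggestion, the change to bufcallee.py is just this (a sketch):

import sys, time
print 'START'
sys.stdout.flush()    # push the line through the pipe immediately
time.sleep(10)
print 'STOP'
sys.stdout.flush()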

Hope this helps,

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Stupid question: Making scripts python-scripts

2005-07-21 Thread Jp Calderone
On Thu, 21 Jul 2005 16:34:30 +0200, Jan Danielsson [EMAIL PROTECTED] wrote:
Hello all,

   How do I make a python script actually a _python_ in unix:ish
environments?

 [snip]

Put #!/usr/bin/python on the first line of the script.  Install the program using distutils: if necessary, 
distutils will rewrite the #! line to fit the configuration of the system the 
program is being installed on.
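
A minimal setup.py sketch (the names are made up); distutils copies the script and rewrites its #! line to match the Python it is installed with:

from distutils.core import setup

setup(
    name='myscript',
    version='1.0',
    scripts=['myscript'],
)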

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Overriding a built-in exception handler

2005-07-21 Thread Jp Calderone
On 21 Jul 2005 07:39:10 -0700, [EMAIL PROTECTED] wrote:
I'm having a tough time figuring this one out:


class MyKBInterrupt( . ):
    print "Are you sure you want to do that?"

if __name__ == "__main__":
    while 1:
        print "Still here..."


So this thing keeps printing Still here... until the user hits ctl-c,
at which time the exception is passed to MyKBInterrupt to handle the
exception, rather than to whatever the built-in handler would be.

I've Read-TFM, but I only see good info on how to create my own class
of exception;  I don't see anything on how to override an existing
exception handler.

Thanks in advance for any help.

See excepthook in the sys module documentation: 

  http://python.org/doc/lib/module-sys.html
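
A small sketch of what that looks like for this case:

import sys

def handleInterrupt(excType, value, traceback):
    if issubclass(excType, KeyboardInterrupt):
        print "Are you sure you want to do that?"
    else:
        sys.__excepthook__(excType, value, traceback)

sys.excepthook = handleInterrupt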

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Need to interrupt to check for mouse movement

2005-07-20 Thread Jp Calderone
On Thu, 21 Jul 2005 00:18:58 -0400, Christopher Subich [EMAIL PROTECTED] 
wrote:
Peter Hansen wrote:
 stringy wrote:

 I have a program that shows a 3d representation of a cell, depending on
 some data that it receives from some C++. It runs with wx.timer(500),
 and on wx.EVT_TIMER, it updates the the data, and receives it over the
 socket.


 It's generally inappropriate to have a GUI program do network
 communications in the main GUI thread.  You should create a worker
 thread and communicate with it using Queues and possibly the
 AddPendingEvent() or PostEvent() methods in wx.  There should be many
 easily accessible examples of how to do such things.  Post again if you
 need help finding them.

I'd argue that point; it's certainly inappropriate to do
(long-)/blocking/ network communications in a main GUI thread, but
that's just the same as any blocking IO.  If the main thread is blocked
on IO, it can't respond to the user which is Bad.

However, instead of building threads (possibly needlessly) and dealing
with synchronization issues, I'd argue that the solution is to use a
nonblocking network IO package that integrates with the GUI event loop.
  Something like Twisted is perfect for this task, although it might
involve a significant application restructuring for the grandparent poster.

Since blocking network IO is generally slow, this should help the
grandparent poster -- I am presuming that the program updating itself
is an IO-bound, rather than processor-bound process.

In the particular case of wxWidgets, it turns out that the *GUI* blocks for 
long periods of time, preventing the *network* from getting attention.  But I 
agree with your position for other toolkits, such as Gtk, Qt, or Tk.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Filtering out non-readable characters

2005-07-16 Thread Jp Calderone
On Sat, 16 Jul 2005 19:01:50 -0400, Peter Hansen [EMAIL PROTECTED] wrote:
George Sakkis wrote:
 Bengt Richter [EMAIL PROTECTED] wrote:
  identity = ''.join([chr(i) for i in xrange(256)])

 Or equivalently:
identity = string.maketrans('','')

Wow!  That's handy, not to mention undocumented.  (At least in the
string module docs.)  Where did you learn that, George?


http://python.org/doc/lib/node109.html

-Peter
--
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: threads and sleep?

2005-07-14 Thread Jp Calderone
On 14 Jul 2005 05:10:38 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Andreas Kostyrka [EMAIL PROTECTED] writes:
 Basically the current state of art in threading programming doesn't
 include a safe model. General threading programming is unsafe at the
 moment, and there's nothing to do about that. It requires the developer
 to carefully add any needed locking by hand.

So how does Java do it?  Declaring some objects and functions to be
synchronized seems to be enough, I thought.

Multithreaded Java programs have thread-related bugs in them too.  So it 
doesn't seem to be enough.  Like Python's model, Java's is mostly about 
ensuring internal interpreter state doesn't get horribly corrupted.  It doesn't 
do anything for application-level state.  For example, the following (Python, 
because it's way too early to write Java, but a straight port to Java would be 
broken in exactly the same way) program is not threadsafe:

things = []

def twiddle(thing):
    if thing in things:
        print 'got one'
        things.remove(thing)
    else:
        print 'missing one'
        things.append(thing)

The global list will never become corrupted.  stdout will always be fine 
(although perhaps a little weird).  The objects being appended to and removed 
from the list are perfectly safe.

But the program will double append items to the list sometimes, and raise 
ValueErrors from the list.remove() call sometimes.
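
For contrast, here is what the "carefully add any needed locking by hand" version of the same function looks like (a sketch):

import threading

things = []
thingsLock = threading.Lock()

def twiddle(thing):
    thingsLock.acquire()
    try:
        if thing in things:
            print 'got one'
            things.remove(thing)
        else:
            print 'missing one'
            things.append(thing)
    finally:
        thingsLock.release()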

Java's model isn't really too far from the traditional one.  It's a tiny bit 
safer, perhaps, but that's all.  For something different, take a look at 
Erlang's mechanism (this has been ported to Python, although I have heard 
nothing of the result since its release announcement, I wonder how it's doing?)

Hope this helps,

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to get rate of pop3 receiving progress?

2005-07-14 Thread Jp Calderone

On Thu, 14 Jul 2005 17:09:10 +0800, Leo Jay [EMAIL PROTECTED] wrote:

when i use POP3.retr() in poplib module, the retr() function will not
return until the  receiving progress is finished

so, is there any way to get the rate of receiving progress?



An extremely rudimentary example of how you might do this using Twisted's POP3 
client support is attached.

Jp


pop3progress.py
Description: application/python
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: DNS access

2005-07-13 Thread Jp Calderone
On 13 Jul 2005 07:44:41 -0700, laksh [EMAIL PROTECTED] wrote:
im looking for some advice regarding DNS lookup using python

is it possible to give parameters like the IP of a DNS server and the
DNS query to a python program and obtain the response from the DNS
server ?


Not using the built-in hostname resolution functions.  There are a number of 
third-party DNS libraries:

  http://devel.it.su.se/projects/python-dns/

  http://pydns.sourceforge.net/

  http://dustman.net/andy/python/adns-python/

  http://twistedmatrix.com/projects/names/

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: DNS access

2005-07-13 Thread Jp Calderone
On Wed, 13 Jul 2005 15:22:35 -0400, Chris Lambacher [EMAIL PROTECTED] wrote:
reverse dns lookup is not really special compared to a regular dns lookup.
you just need to look up a special name:
http://www.dnsstuff.com/info/revdns.htm

to format the ip address properly use something like:
def rev_dns_string(ip_str):
    nums = ip_str.split('.')
    nums.reverse()
    nums.extend(('in-addr', 'arpa'))
    return '.'.join(nums)


It may not be special, but it is different.  Reverse DNS uses PTR records, not 
A records.  The site you referenced points this out, too:


Reverse DNS entries are set up with PTR records (whereas standard DNS uses A 
records), which look like 25.2.0.192.in-addr.arpa. PTR host.example.com 
(whereas standard DNS would look like host.example.com. A 192.0.2.25).
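
For the common case where you just want the hostname and don't care which 
server gets asked, the system resolver will issue the PTR query for you:

import socket

# gethostbyaddr() performs the reverse (PTR) lookup via the system resolver.
hostname, aliases, addresses = socket.gethostbyaddr('192.0.2.25')
print hostname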


Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Use cases for del

2005-07-06 Thread Jp Calderone
On Wed, 06 Jul 2005 09:45:56 -0400, Peter Hansen [EMAIL PROTECTED] wrote:
Tom Anderson wrote:
 How about just getting rid of del? Removal from collections could be
 done with a method call, and i'm not convinced that deleting variables
 is something we really need to be able to do (most other languages
 manage without it).

Arguing the case for del: how would I, in doing automated testing,
ensure that I've returned everything to a clean starting point in all
cases if I can't delete variables?  Sometimes a global is the simplest
way to do something... how do I delete a global if not with del?


Unless you are actually relying on the global name not being defined, 
someGlobal = None would seem to do just fine.

Relying on the global name not being defined seems like an edge case.
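
A tiny sketch of the distinction (the names are illustrative):

someGlobal = 'precious test state'

def reset_for_next_test():
    global someGlobal
    someGlobal = None     # the name still exists, but holds nothing

def forget_entirely():
    global someGlobal
    del someGlobal        # only needed if code checks whether the name exists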

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientificmini-survey

2005-07-03 Thread Jp Calderone
On Sun, 03 Jul 2005 14:43:14 -0400, Peter Hansen [EMAIL PROTECTED] wrote:
Steven D'Aprano wrote:
 Frankly, I find this entire discussion very surreal. Reduce etc *work*,
 right now. They have worked for years. If people don't like them, nobody
 is forcing them to use them. Python is being pushed into directions which
 are *far* harder to understand than map and reduce (currying, decorators,
 etc) and people don't complain about those.

I find it surreal too, for a different reason.

Python *works*, right now.  It has worked for years.  If people don't
like the direction it's going, nobody is forcing them to upgrade to the
new version (which is not imminent anyway).

In the unlikely event that the latest and greatest Python in, what, five
years or more?, is so alien that one can't handle it, one has the right
to fork Python and maintain a tried-and-true-and-still-including-reduce-
-filter-and-map version of it, or even just to stick with the most
recent version which still has those features.  And that's assuming it's
not acceptable (for whatever bizarre reason I can't imagine) to use the
inevitable third-party extension that will provide them anyway.

I wonder if some of those who seem most concerned are actually more
worried about losing the free support of a team of expert developers as
those developers evolve their vision of the language, than about losing
access to something as minor as reduce().

This is a specious line of reasoning.  Here's why:

Lots of people use Python, like Python, want to keep using Python.  Moreover, 
they want Python to improve, rather than the reverse.  Different people have 
different ideas about what improve means.  Guido has his ideas, and since 
he's the BDFL, those are the ideas most likely to influence the direction of 
Python's development.

However, Guido isn't the only person with ideas, nor are his ideas the only 
ones that should be allowed to influence the direction of Python's development. 
 Guido himself wouldn't even be silly enough to take this position.  He knows 
he is not the ultimate source of wisdom in the world on all matters programming 
related.

So when people disagree with him, suggesting that they should leave the Python 
community is ridiculous.  Just like Guido (and the overwhelming majority of the 
Python community - heck, maybe even all of it), these people are trying to 
improve the language.

Leaving the community isn't going to improve the language.  Continuing to 
operate actively within it just might.

For my part, I lack the time and energy to participate in many of these 
discussions, but anyone who knows me knows I'm not silent because I see eye to 
eye with Guido on every issue :)  I'm extremely grateful to the people who do 
give so much of their own time to try to further the Python language.

Suggesting people can like it or lump it is a disservice to everyone.

(Sorry to single you out Peter, I know you frequently contribute great content 
to these discussions too, and that there are plenty of other people who respond 
in the way you have in this message, but I had to pick /some/ post to reply to)

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using regular expressions in internet searches

2005-07-03 Thread Jp Calderone
On 3 Jul 2005 10:49:03 -0700, [EMAIL PROTECTED] wrote:
What is the best way to use regular expressions to extract information
from the internet if one wants to search multiple pages? Let's say I
want to search all of www.cnn.com and get a list of all the words that
follow Michael.

(1) Is Python the best language for this? (Plus is it time-efficient?)
Is there already a search engine that can do this?

(2) How can I search multiple web pages within a single location or
path?

TIA,

Mike


Is a google search for site:cnn.com Michael not up to the task?
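
If it isn't, the single-page version of the idea is only a few lines; crawling 
every page under the site is the hard part, which is why a search engine is the 
first thing to reach for.  The URL and pattern here are illustrative:

import re
import urllib

html = urllib.urlopen('http://www.cnn.com/').read()
# Every word that immediately follows "Michael" on that one page:
print re.findall(r'Michael\s+(\w+)', html)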

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Favorite non-python language trick?

2005-07-03 Thread Jp Calderone
On Sun, 03 Jul 2005 15:40:38 -0500, Rocco Moretti [EMAIL PROTECTED] wrote:
Jp Calderone wrote:
 On Fri, 01 Jul 2005 15:02:10 -0500, Rocco Moretti
 [EMAIL PROTECTED] wrote:


 I'm not aware of a language that allows it, but recently I've found
 myself wanting the ability to transparently replace objects.


 Smalltalk supports this with the become message.  I have also done an
 implementation of this for Python.

As a pure Python module, or do you have to recompile the interpreter?

Somewhere in between, I guess.  The module is all Python, but relies pretty 
heavily on one particular stdlib extension module.

The code is rather short, and online here:

  http://divmod.org/users/exarkun/become.py
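
For a flavor of the general approach (not necessarily the one that module 
takes), you can rewrite references found through the gc module; a very rough 
sketch that only handles dict and list referrers:

import gc

def become(old, new):
    for referrer in gc.get_referrers(old):
        if isinstance(referrer, dict):
            for key, value in referrer.items():
                if value is old:
                    referrer[key] = new
        elif isinstance(referrer, list):
            for index, value in enumerate(referrer):
                if value is old:
                    referrer[index] = new

A real implementation has to cope with many more referrer types and corner 
cases, which is where the heavy lifting lives.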

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: looping over a big file

2005-07-03 Thread Jp Calderone
On Sun, 3 Jul 2005 23:52:12 +0200, martian [EMAIL PROTECTED] wrote:
Hi,

I've a couple of questions regarding the processing of a big text file
(16MB).

1) how does python handle:

 for line in big_file:

is big_file all read into memory or one line is read at a time or a buffer
is used or ...?

It uses an internal buffer to reach a happy medium between performance and 
memory usage.


2) is it possible to advance lines within the loop? The following doesn't
work:

 for line in big_file:
 line_after  = big_file.readline()


Yes, but you need to do it like this:

  fileIter = iter(big_file)
  for line in fileIter:
  line_after = fileIter.next()

  Don't mix iterating with any other file methods, since it will confuse the 
buffering scheme.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Favorite non-python language trick?

2005-07-01 Thread Jp Calderone
On Fri, 01 Jul 2005 15:02:10 -0500, Rocco Moretti [EMAIL PROTECTED] wrote:
Joseph Garvin wrote:

 I'm curious -- what is everyone's favorite trick from a non-python
 language? And -- why isn't it in Python?

I'm not aware of a language that allows it, but recently I've found
myself wanting the ability to transparently replace objects. For
example, if you have a transparent wrapper class around a certain
object, and then determine that you no longer need to wrap the object,
you can say the magic incantation, and the wrapper instance is replaced
by what it is wrapping everywhere in the program. Or you have a complex
math object, and you realize you can reduce it to a simple integer, you
can substitue the integer for the math object, everywhere.

I mainly look for it in the object replaces self form, but I guess you
could also have it for arbitrary objects, e.g. to wrap a logging object
around a function, even if you don't have access to all references of
that function.

Why isn't it in Python? It's completely counter to the conventional
object semantics.

Smalltalk supports this with the become message.  I have also done an 
implementation of this for Python.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: map/filter/reduce/lambda opinions and background unscientific mini-survey

2005-07-01 Thread Jp Calderone
On Fri, 01 Jul 2005 20:36:29 GMT, Ron Adam [EMAIL PROTECTED] wrote:
Tom Anderson wrote:

 So, if you're a pythonista who loves map and lambda, and disagrees with
 Guido, what's your background? Functional or not?

I find map too limiting, so won't miss it.  I'm +0 on removing lambda
only because I'm unsure that there's always a better alternative.

So what would be a good example of a lambda that couldn't be replaced?

lambda can always be replaced.  Just like a for loop can always be replaced:

iterator = iter(iterable)
while True:
    try:
        <loop variable> = iterator.next()
    except StopIteration:
        break
    else:
        <loop body>

Let's get rid of for, too.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Scket connection to server

2005-06-30 Thread Jp Calderone
On Thu, 30 Jun 2005 18:39:27 +0100, Steve Horsley [EMAIL PROTECTED] wrote:
JudgeDread wrote:
 hello python gurus

 I would like to establish a socket connection to a server running a service
 on port 2. the host address is 10.214.109.50. how do i do this using
 python?

 many thanks



Off the top of my head (so there could be errors):


There are a few:

import socket
s = socket.Socket()

s = socket.socket()

s.connect((10.214.109.50, 2))

s.connect(('10.214.109.50', 2))

s.send("Hello, Mum\r\n")

s.sendall("Hello, Mum\r\n")


That should point you in the right direction, anyway.

There is a higher level socket framework called twisted that
everyone seems to like. It may be worth looking at that too -
haven't got round to it myself yet.

Twisted is definitely worth checking out.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: importing packages from a zip file

2005-06-29 Thread Jp Calderone
On Wed, 29 Jun 2005 18:49:10 +, Peter Tillotson [EMAIL PROTECTED] wrote:
cheers Scott

should have been
from myZip.zip import base.branch1.myModule.py

and no it didn't work, anyone know a reason why this syntax is not
preferred ??

sorry posted the soln again, it works but feels nasty

Including paths in source files is a disaster.  As soon as you do it, you need 
to account for alternate installation schemes by rewriting portions of your 
source files.

Separating path names from module names lets you avoid most of this mess.

Jp

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python broadcast socket

2005-06-29 Thread Jp Calderone
On Thu, 30 Jun 2005 00:13:45 +0200, Irmen de Jong [EMAIL PROTECTED] wrote:
Grant Edwards wrote:

 Under Linux, you need to be root to send a broadcase packet.

I don't think this is true.


I think you're right.  I believe you just need to set the broadcast SOL_SOCKET 
option.

 import socket
 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 s.sendto('asdljk', ('255.255.255.255', 12345))
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
socket.error: (13, 'Permission denied')
 s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
 s.sendto('asdljk', ('255.255.255.255', 12345))
6
 

Yep, looks like it.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Twisted-Python] Limiting number of concurrent client connections

2005-06-28 Thread Jp Calderone
On Tue, 28 Jun 2005 10:47:04 +0100, Toby Dickenson [EMAIL PROTECTED] wrote:
Im finding that Win32Reactor raises an exception on every iteration of the
main loop if I exceed the limit of 64 WaitForMultipleObjects.

I would prefer to avoid this fairly obvious denial-of-service problem by
limiting the number of concurrent client connections. Is there a standard
solution for this?


Count the number of connections you have accepted.  When you get up to 62 or 63 
or so, stop accepting new ones.  If ServerFactory.buildProtocol() returns None, 
Twisted immediately closes the accepted connection.  If you do this (perhaps in 
conjunction with calling stopListening() on the port returned by listenXYZ()), 
you'll never overrun the 64 object limit.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Background thread

2005-06-25 Thread Jp Calderone
On Sat, 25 Jun 2005 11:36:57 +0100, Jorge Louis De Castro [EMAIL PROTECTED] 
wrote:
Hi,

I'm new to python and I'm having trouble figuring out a way to have a thread 
running on the background that over rules the raw_input function. The example 
I'm working on is something like having a thread that prints "You're taking 
too long" every 10 seconds, while waiting for input from the user.
The problem is that I can only read (and in batch) the thread printout 
messages on the console after giving something to raw_input.
Is it not possible to have the thread print its stuff and then return to 
raw_input() ? Any code examples or pseudo-code or documentation directions 
will be highly appreciated.

Thanks in advance


Here's one way you might do it without threads or raw_input()

from twisted.protocols import basic, policies

class AnnoyProtocol(basic.LineReceiver, policies.TimeoutMixin):
    from os import linesep as delimiter

    def connectionMade(self):
        self.setTimeout(10)

    def timeoutConnection(self):
        self.resetTimeout()
        self.sendLine("You're taking too long!")

    def lineReceived(self, line):
        self.resetTimeout()
        self.sendLine("Thank you for the line of input!")

from twisted.internet import reactor, stdio
stdio.StandardIO(AnnoyProtocol())
reactor.run()

For fancy line editing support, twisted.conch.recvline might be of interest.

Hope this helps,

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: a dictionary from a list

2005-06-25 Thread Jp Calderone
On Sat, 25 Jun 2005 09:10:33 -0400, Roy Smith [EMAIL PROTECTED] wrote:
Terry Hancock [EMAIL PROTECTED] wrote:
 Before the dict constructor, you needed to do this:

 d={}
 for key in alist:
 d[key]=None

I just re-read the documentation on the dict() constructor.  Why does it
support keyword arguments?

   dict(foo=bar, baz=blah) == {foo: bar, baz: blah}

This smacks of creeping featurism.  Is this actually useful in real code?

Constantly.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: webserver application (via Twisted?)

2005-06-24 Thread Jp Calderone
On Fri, 24 Jun 2005 12:35:40 GMT, flupke [EMAIL PROTECTED] wrote:
I need to program and setup serveral webservices.
If i were still using jsp, i would use Tomcat to make the several
applications available on a given port.
How can i accomplish this in Python?
I was thinking about Twisted but it's not clear to me what parts i need
to make a webserver listen on a certain port and having it serve
different application based on the url that i received.

Any advice on this?

Roughly,

from twisted.web import server, resource
from twisted.internet import reactor

root = resource.Resource()
root.putChild("app1", getApp1())
root.putChild("app2", getApp2())
...
site = server.Site(root)
reactor.listenTCP(80, site)
reactor.run()

You might also want to join the twisted-web mailing list: 
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-web

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question: how to keep a socket listening?

2005-06-24 Thread Jp Calderone
On Fri, 24 Jun 2005 21:21:34 -0400, Peter Hansen [EMAIL PROTECTED] wrote:
ncf wrote:
 Heh, like I said. I was not at all sure. :P

 Nevertheless, could this be the problem? =\

You *may* be correct, mainly because the OP's code doesn't appear to spawn
off new threads to handle the client connections, which means he can
handle only one connection at a time.  Specifically, while he is talking
to one client he is not also in an accept() call on the server socket,
which means there will be (because of the listen(1) call) only a single
pending connection allowed in the backlog.

I haven't attempted a thorough analysis... just this much, trying to see
whether it is obvious that the listen(1) is at fault -- and it's not
obvious.  I thought this response might clarify the meaning of listen(1)
a little bit for some folks nevertheless.

The argument to listen() is only a _hint_ to the TCP/IP stack.  Linux, at 
least, will not create a buffer large enough for only a single connection.  You 
can test this easily: create a socket, bind it to an address, call listen(1) on 
it, and *don't* call accept().  Telnet (or connect somehow) repeatedly, until 
your connection is not accepted.  On my system (Linux 2.6.10), I can connect 
successfully 8 times before the behavior changes.
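
A sketch of that experiment, kept in one process for convenience (the port is 
arbitrary):

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 7000))
server.listen(1)                  # hint of 1, and we never call accept()

clients = []
try:
    while True:
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.settimeout(2)
        c.connect(('127.0.0.1', 7000))
        clients.append(c)
        print '%d connections completed by the stack' % len(clients)
except (socket.error, socket.timeout), e:
    print 'connection %d refused or timed out: %s' % (len(clients) + 1, e)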

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Newbie question: SOLVED (how to keep a socket listening), but still some questions

2005-06-24 Thread Jp Calderone
On Sat, 25 Jun 2005 01:36:56 -, Grant Edwards [EMAIL PROTECTED] wrote:
On 2005-06-25, Giovanni Tumiati [EMAIL PROTECTED] wrote:

 (2)Does one have to do a socket.shutdown() before one does a
 socket.close??

No.

[I've never figured out why one would do a shutdown RDWR
rather than close the connection, but I haven't put a lot of
thought into it.]

shutdown actually tears down the TCP connection; close releases the file 
descriptor.

If there is only one file descriptor referring to the TCP connection, these are 
more or less the same.  If there is more than one file descriptor, though, the 
difference should be apparent.
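
A small sketch of that difference, using a duplicated descriptor (the peer 
address is a placeholder):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('www.example.com', 80))
s2 = socket.fromfd(s.fileno(), socket.AF_INET, socket.SOCK_STREAM)  # second fd

s.close()                        # one descriptor gone; the connection survives,
                                 # because s2 still refers to it
s2.shutdown(socket.SHUT_RDWR)    # actively tears the TCP connection down
s2.close()                       # releases the remaining descriptor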

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Hardening enviroment by overloading __import__?

2005-06-23 Thread Jp Calderone
On Thu, 23 Jun 2005 13:12:12 -0700, Steve Juranich [EMAIL PROTECTED] wrote:
If this is a FAQ, please let me know where the answer is.

I have in some code an 'eval', which I hate, but it's the shortest
path to where I need to get at this point.  I thought that one way I
could harden the enviroment against malicious code would be to
temporarily disable the import statement by overloading __import__,
but I tried what seemed obvious to me, and it didn't work.

What I want do do is something like this:

def __import__(*args, **kwargs):
    raise ImportError, 'Not so fast, bucko!'

eval(potentially_dangerous_string)

del __import__ # To get the builtin behavior back.

Am I barking up the wrong tree with __import__?? Where should I look
for this answer?

__builtin__.__import__ is what you need to replace.  Note, of course, that this 
only makes it trivially more difficult for malicious code to do destructive 
things: it doesn't even prevent the code from importing any module it likes, it 
just makes it take a few extra lines of code.
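
A sketch of doing that, and of putting the original back afterwards.  The 
blacklist and the sample string are purely illustrative, and as noted above 
this only slows an attacker down:

import __builtin__

original_import = __builtin__.__import__

def guarded_import(name, *args, **kwargs):
    if name in ('os', 'sys'):                       # illustrative blacklist
        raise ImportError('Not so fast, bucko: %s' % name)
    return original_import(name, *args, **kwargs)

potentially_dangerous_string = '2 + 2'              # stands in for untrusted input

__builtin__.__import__ = guarded_import
try:
    print eval(potentially_dangerous_string)
finally:
    __builtin__.__import__ = original_import        # restore builtin behavior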

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: smtplib and TLS

2005-06-21 Thread Jp Calderone
On 21 Jun 2005 08:39:02 -0700, Matthias Kluwe [EMAIL PROTECTED] wrote:
 From: Paul Rubin http://phr.cx@NOSPAM.invalid

 Matthias Kluwe [EMAIL PROTECTED] writes:
 After getting a @gmail.com address, I recognized I had to use TLS in my
 python scripts using smtplib in order to get mail to the smtp.gmail.com
 server.

 [...]

 The server accepts and delivers my messages, but the last command
 raises

 socket.sslerror: (8, 'EOF occurred in violation of protocol')

 [...]

 Have you verified that its your end that is broken,  not gmail's,  do other
 servers give the same response ?

No, I have not -- I should have, as I know now: Connecting, starttls,
login and sending mail works fine without the above mentioned error
using my previous mail provider.

Does that mean Gmail is in error here? I don't know...

Most SSL servers and clients (primarily HTTP, but some SMTP as well) are broken 
in this regard: they do not properly negotiate TLS connection shutdown.  This 
causes one end or the other to notice an SSL protocol error.  Most of the time, 
the only thing left to do after the TLS connection shutdown is normal TCP 
connection shutdown, so the error doesn't lead to any problems (which is 
probably why so much software generates this problem).  Of course, there's no 
way to *tell* if this is the case or not, at least programmatically.  If you 
receive an OK response to your DATA, you probably don't need to worry, since 
you have gotten what you wanted out of the conversation.

It's entirely possible that the fault here lies on gmail's end, but it is also 
possible that the fault is in your code or the standard library ssl support.  
Unless you want to dive into Python's OpenSSL bindings or start examining 
network traces of SSL traffic, you probably won't be able to figure out who's 
to blame in this particular case.  The simplest thing to do is probably just 
capture and discard that particular error (again, assuming you are getting an 
OK response to your DATA command).
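
Something along these lines (the addresses, credentials and port are 
placeholders):

import socket
import smtplib

server = smtplib.SMTP('smtp.gmail.com', 587)
server.ehlo()
server.starttls()
server.ehlo()
server.login('someone', 'password')
server.sendmail('someone@example.com', ['other@example.com'],
                'Subject: test\r\n\r\nhello')
try:
    server.quit()
except socket.sslerror:
    # The TLS-shutdown disagreement described above; the message was already
    # accepted, so this can be ignored.
    pass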

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: non OO behaviour of file

2005-06-15 Thread Jp Calderone
On Wed, 15 Jun 2005 16:38:32 +0100, Robin Becker [EMAIL PROTECTED] wrote:
Michael Hoffman wrote:
.

 Well, you could use python -u:


unfortunately this is in a detached process and I am just reopening stdout
as an ordinary file so another process can do tail -F on it. I imagine there
ought to be an os dependent way to set the file as unbuffered, but can't
remember/find out what it ought to be.


open(name, 'w', 0)

For even more excitement, there's os.open() with the O_DIRECT and O_SYNC flags.  
You shouldn't need to go to this extreme, though.

FWIW, I think the behavior of Python wrt file subclasses that override write() 
is silly, too.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: IMAP Proxy

2005-06-10 Thread Jp Calderone
On Fri, 10 Jun 2005 15:52:28 +0200, Tarek Ziad [EMAIL PROTECTED] wrote:
Hi,

I want to write a small TCP Server in Python to make an IMAP Proxy for
post-processing client requests.

It is neither long nor complicated, but it needs to be very robust, so...
maybe someone here has already done such a thing I can use or know where
i can get it ?

I recommend using Twisted's IMAP4 support for this.  There's a simple proxy in 
the Twisted issue tracker that might be useful to start from:

http://twistedmatrix.com/bugs/issue215

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Annoying behaviour of the != operator

2005-06-10 Thread Jp Calderone
On 10 Jun 2005 09:05:53 -0700, Dan Bishop [EMAIL PROTECTED] wrote:
Steven D'Aprano wrote:
...
 If you were to ask, which is bigger, 1+2j or 3+4j? then you
 are asking a question about mathematical size. There is no unique answer
 (although taking the absolute value must surely come close) and the
 expression 1+2j  3+4j is undefined.

 But if you ask which should come first in a list, 1+2j or 3+4j? then you
 are asking about a completely different thing. The usual way of sorting
 arbitrary chunks of data within a list is by dictionary order, and in
 dictionary order 1+2j comes before 3+4j because 1 comes before 3.

 This suggests that perhaps sort needs a keyword argument style, one of
 dictionary, numeric or datetime, which would modify how sorting
 would compare keys.

 Perhaps in Python 3.0.

What's wrong with the Python 2.4 approach of

 clist = [7+8j, 3+4j, 1+2j, 5+6j]
 clist.sort(key=lambda z: (z.real, z.imag))
 clist
[(1+2j), (3+4j), (5+6j), (7+8j)]


It's not a general solution:

 L = [1, 'hello', 2j]
 L.sort(key=lambda x: (x.real, x.imag))
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 1, in <lambda>
AttributeError: 'int' object has no attribute 'real'

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fast text display?

2005-06-08 Thread Jp Calderone
On Wed, 08 Jun 2005 14:15:35 -0400, Christopher Subich [EMAIL PROTECTED] 
wrote:
As a hobby project, I'm writing a MUD client -- this scratches an itch,
and is also a good excuse to become familiar with the Python language.
I have a conceptual handle on most of the implementation, but the
biggest unknown for me is the seemingly trivial matter of text display.

My first requirement is raw speed; none of what I'm doing is
processing-intensive, so Python itself shouldn't be a problem here.  But
still, it's highly desirable to have very fast text updates (text
inserted only at the end)-- any slower than 20ms/line stretches
usability for fast-scrolling.  EVERY other action on the text display,
though, like scrolling backwards or text selection, can be orders of
magnitude slower.

The second requirement is that it support text coloration.  The exact
markup method isn't important, just so long as individual characters can
be independently colored.

The third requirement is cross-platform-osity; if you won't hold it
against me I'll tell you that I'm developing under Cygwin in Win2k, but
I'd really like it if the app could run under 'nix and mac-osx also.

I've done this with Tkinter before.  At the time, I surveyed the various 
toolkits for the quality of their text widgets, and of Tkinter, PyGTK, PyQT, 
and wxPython, only Tkinter could satisfy the performance requirements.  This 
was about three years ago, so the field may have changed.

If you like, you can check out the code:

http://sourceforge.net/projects/originalgamer

As MUD clients go, it's pretty weak, but it solves the text display problem 
pretty decently.

Hope this helps,

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pack heterogeneous data types

2005-06-08 Thread Jp Calderone
On 8 Jun 2005 14:49:00 -0700, [EMAIL PROTECTED] wrote:
Hello,

How do i pack different data types into a struct or an array. Examples
would be helpful.

Say i need to pack an unsigned char( 0xFF) and an long( 0x)
into a single array? The reason i need to do this is send a packet over
a network.

 import struct
 struct.pack('!BL', 0xff, 0xaaaaaaaa)
'\xff\xaa\xaa\xaa\xaa'


Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fast text display?

2005-06-08 Thread Jp Calderone
On 08 Jun 2005 17:26:30 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Riccardo Galli [EMAIL PROTECTED] writes:
 Using tkinter doesn't need downloading and installing only in Windows.
 In *nix is not so common to have tcl/tk installed (and probably in Mac too)

Hmm, in the Linux distros that I'm used to, tcl/tk is preinstalled.  I
had the impression that it was included with Python but obviously I
haven't looked that closely.

What does included with Python mean anyway?  Different packagers make 
different decisions.  Some may include Tcl/Tk, others may exclude it.  Some may 
provide a separate but trivially-installable package for it.  On systems with 
reasonable package managers, it barely makes a difference, as any packaged 
software is at most one or two simple commands away.

This applies to other libraries as well, of course.  Installing wxPython on 
Debian is a 5 second ordeal.  This is not to say debian is awesome and you 
should go install it right now or *else*, just to say that the installation of 
a single piece of software can vary greatly in difficulty between different 
platforms.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Jp Calderone
On 27 May 2005 06:21:21 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Peter Hansen [EMAIL PROTECTED] writes:
 The OP was probably on the right track when he suggested that things
 like SQLite (conveniently wrapped with PySQLite) had already solved
 this problem.

But they haven't.  They depend on messy things like server processes
constantly running, which goes against the idea of a cgi that only
runs when someone calls it.

SQLite is an in-process dbm.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Jp Calderone
On 27 May 2005 06:43:04 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Jp Calderone [EMAIL PROTECTED] writes:
 But they haven't.  They depend on messy things like server processes
 constantly running, which goes against the idea of a cgi that only
 runs when someone calls it.

 SQLite is an in-process dbm.

http://www.sqlite.org/faq.html#q7

(7) Can multiple applications or multiple instances of the same
application access a single database file at the same time?

Multiple processes can have the same database open at the same
time. Multiple processes can be doing a SELECT at the same
time. But only one process can be making changes to the database
at once.

But multiple processes changing the database simultaneously is
precisely what the OP wants to do.

Er, no.  The OP precisely wants exactly one process to be able to write at a 
time.  If he was happy with multiple processes writing simultaneously, he 
wouldn't need any locking mechanism at all :)

If you keep reading that FAQ entry, you discover that SQLite implements its own 
locking mechanism internally, allowing different processes to *interleave* 
writes to the database, and preventing any data corruption which might arise 
from simultaneous writes.

That said, I think an RDBM is a ridiculously complex solution to this simple 
problem.  A filesystem lock, preferably using the directory or symlink trick 
(but flock() is fun too, if you're into that sort of thing), is clearly the 
solution to go with here.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: lambda a plusieurs arguments

2005-05-27 Thread Jp Calderone
On Fri, 27 May 2005 19:38:33 +0200, nico [EMAIL PROTECTED] wrote:
Hello,

How do you write a lambda function with several arguments?

 (lambda a:a+1)(2)
3
 f=(lambda (a,b):a+b)
 f(5,6)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: lambda() takes exactly 1 argument (2 given)
 f((5.6))
     ^--- that is a '.' where you want a ','

 f((5, 6))
11


Also,

 f = lambda a, b: a + b
 f(5, 6)
11


Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Problem: using signal.alarm() to stop a run-away os.system() command

2005-05-27 Thread Jp Calderone
On 27 May 2005 12:09:39 -0700, [EMAIL PROTECTED] wrote:
I'm trying to use signal.alarm to stop a run-away os.system command.
Can anyone exlain the following behavior?

Given following the trivial program:

import os
import signal

def timeoutHandler(signum, frame):
    print "Timeout"
    raise ValueError


signal.signal(signal.SIGALRM, timeoutHandler)
signal.alarm(5)

os.system("yes")

signal.alarm(0)


What I expect is that the Linux/UNIX 'yes' command (which runs until
terminated) would be stopped after 5 seconds, when the timeoutHandler
is called, thus raising a ValueError and terminating this example
program.  Instead, the 'yes' command run until it is terminated (by,
say, a kill command), at which time the timeoutHandler is called.  In
other words, the running of the 'yes' command acts like it is blocking
the SIGALRM signal until it terminates, at which time the SIGALRM
signal is raised.  This is quite surprising, and totally defeats my
ability to catch a run-away process.  Can anyone see what I'm doing
wrong?

CPython only delivers signals to Python programs in between bytecodes.  Since 
your program is hanging around in the system(3) call, it isn't executing any 
bytecode, so CPython never gets an opportunity to deliver the signal.

Try using a pipe (popen() or the new subprocess module) and select() with a 
timeout of 5.
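
A rough sketch of that approach with subprocess and select (POSIX-only because 
of os.kill; the five second budget matches the example above):

import os
import select
import signal
import subprocess
import time

proc = subprocess.Popen(['yes'], stdout=subprocess.PIPE)

deadline = time.time() + 5
while time.time() < deadline:
    remaining = max(0, deadline - time.time())
    readable, _, _ = select.select([proc.stdout], [], [], remaining)
    if readable:
        os.read(proc.stdout.fileno(), 4096)   # drain some output, keep waiting
    elif proc.poll() is not None:
        break                                 # the child exited on its own

if proc.poll() is None:
    print "Timeout"
    os.kill(proc.pid, signal.SIGTERM)         # stop the runaway child
    proc.wait()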

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Twisted-Python] xmlrpc deferred

2005-05-27 Thread Jp Calderone
On Fri, 27 May 2005 22:28:06 +0300, Catalin Constantin [EMAIL PROTECTED] 
wrote:
Hi there,

I have the following xmlrpc method:

class FeederResource(xmlrpc.XMLRPC):
    def __init__(self):
        xmlrpc.XMLRPC.__init__(self)
        self.feeder = Feeder()

    def xmlrpc_getList(self, id):
        return self.feeder.get_urls(id)

The thing is that the self.feeder.get_urls takes too long to execute
and while the request is running all the others are blocked.
I want that while it computes the result the other XML RPC methods to
be available.

The only answer here is to make get_urls() take less time.

What is it doing?  Is it blocking on network I/O?  Querying a database?  
Prompting for user input?   _It_ should be creating and returning a Deferred 
(and later calling it back with a result), most likely, since it is the 
long-running operation.


I wanted to use deferrals but i found no viable example.

Eg what i've tried to do:
def xmlrpc_getList(self, id):
    log.debug("getList is here for id %s" % id)
    d = defer.Deferred()
    d.addCallback(self.feeder.get_urls)
    return d

Deferreds don't make things asynchronous, cooperative, or non-blocking.  They 
only make dealing with callbacks more convenient.  If you add a blocking 
function as the callback to a Deferred, it will block the reactor just as 
effectively as if you called it yourself (because all that happens inside the 
Deferred is that the function gets called).


My method feeder.get_urls is never called !


In the above code, nothing ever fires the Deferred - calls .callback() on it 
- so, never having a result, it never bothers to invoke any of its callbacks.  
Deferreds just hook results up to callbacks.
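
If get_urls() genuinely has to block (because it talks to a blocking library, 
say), one common workaround is to push it into the reactor's thread pool; 
twisted.internet.threads.deferToThread() returns a Deferred that fires with the 
function's result.  A sketch, reusing the Feeder class from the snippet above:

from twisted.internet import threads
from twisted.web import xmlrpc

class FeederResource(xmlrpc.XMLRPC):
    def __init__(self):
        xmlrpc.XMLRPC.__init__(self)
        self.feeder = Feeder()   # Feeder comes from the original snippet

    def xmlrpc_getList(self, id):
        # The blocking call runs in a worker thread; the Deferred fires with
        # its return value, and the reactor stays responsive meanwhile.
        return threads.deferToThread(self.feeder.get_urls, id)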

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [Twisted-Python] xmlrpc deferred

2005-05-27 Thread Jp Calderone
Err woops.  Wrong list, sorry.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: evaluated function defaults: stored where?

2005-05-27 Thread Jp Calderone
On Fri, 27 May 2005 21:07:56 GMT, David Isaac [EMAIL PROTECTED] wrote:
Alan Isaac wrote:
 Default parameter values are evaluated once when the function definition
is
 executed. Where are they stored? ... Where is this documented?

Forgive any poor phrasing: I'm not a computer science type.
At http://www.network-theory.co.uk/docs/pytut/tut_26.html we read:
The execution of a function introduces a new symbol table
used for the local variables of the function. More precisely,
all variable assignments in a function store the value in the local
symbol table; whereas variable references first look in the local
symbol table, then in the global symbol table, and then in the table of
built-in names.

But the default values of function parameters seem rather like a static
attributes of a class.
Is that a good way to think of them?
If so, are they somehow accessible?
How? Under what name?


The inspect module will get them for you:

[EMAIL PROTECTED]:~$ python
Python 2.4.1 (#2, Mar 30 2005, 21:51:10) 
[GCC 3.3.5 (Debian 1:3.3.5-8ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
 def f(x='y'): pass
... 
 import inspect
 inspect.getargspec(f)
(['x'], None, None, ('y',))
 

As to where they are actually stored, this should be considered an 
implementation detail, but you can look at inspect.py to see how it pulls the 
values out (they're just in an attribute on the function object).
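
For the curious, on Python 2 the attribute in question is func_defaults:

def f(x='y'):
    pass

print f.func_defaults        # ('y',)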

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Jp Calderone
On 27 May 2005 15:22:17 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Jp Calderone [EMAIL PROTECTED] writes:
 Oh, ok.  But what kind of locks does it use?

 It doesn't really matter, does it?

Huh?  Sure, if there's some simple way to accomplish the locking, the
OP's app can do the same thing without SQLite's complexity.

 I'm sure the locking mechanisms it uses have changed between
 different releases, and may even be selected based on the platform
 being used.

Well, yes, but WHAT ARE THEY??

Beats me, and I'm certainly not going to dig through the code to find out :)  
For the OP's purposes, the mechanism I mentioned earlier in this thread is 
almost certainly adequate.  To briefly re-summarize, when you want to acquire a 
lock, attempt to create a directory with a well-known name.  When you are done 
with it, delete the directory.  This works across all platforms and filesystems 
likely to be encountered by a Python program.
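
For reference, a minimal sketch of the directory trick (the file names are 
illustrative):

import errno
import os
import time

LOCKDIR = 'logfile.lock'

def acquire_lock():
    while True:
        try:
            os.mkdir(LOCKDIR)          # atomic: exactly one process succeeds
            return
        except OSError, e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(0.1)            # someone else holds it; retry shortly

def release_lock():
    os.rmdir(LOCKDIR)

acquire_lock()
try:
    f = open('logfile.txt', 'a')
    f.write('one line from this process\n')
    f.close()
finally:
    release_lock()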

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write to the same file from multiple processes at the same time?

2005-05-27 Thread Jp Calderone
On 27 May 2005 15:10:16 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Peter Hansen [EMAIL PROTECTED] writes:
 And PySQLite conveniently wraps the relevant calls with retries when
 the database is locked by the writing process, making it roughly a
 no-brainer to use SQLite databases as nice simple log files where
 you're trying to write from multiple CGI processes like the OP wanted.

Oh, ok.  But what kind of locks does it use?

It doesn't really matter, does it?

I'm sure the locking mechanisms it uses have changed between different 
releases, and may even be selected based on the platform being used.

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: os independent way of seeing if an executable is on the path?

2005-05-26 Thread Jp Calderone
On Thu, 26 May 2005 11:53:04 -0700, Don [EMAIL PROTECTED] wrote:
Steven Bethard wrote:

 This has probably been answered before, but my Google skills have failed
 me so far...

 Is there an os independent way of checking to see if a particular
 executable is on the path?  Basically what I want to do is run code like:
  i, o, e = os.popen3(executable_name)
 but I'd like to give an informative error if 'executable_name' doesn't
 refer to an executable on the path.

 [snip]

 Thanks for the help,

 STeVe

I wrote this 'which' function for Linux, but I think if you changed the ':'
character, it would work on Windows (I think its a ';' on Windows, but I
can't remember):

def which( command ):
    path = os.getenv( 'PATH' ).split( ':' )
    found_path = ''
    for p in path:
        try:
            files = os.listdir( p )
        except:
            continue
        else:
            if command in files:
                found_path = p
                break

    return found_path


Here's the version that comes with Twisted:

import os, sys, imp

def which(name, flags=os.X_OK):
    """Search PATH for executable files with the given name.

    @type name: C{str}
    @param name: The name for which to search.

    @type flags: C{int}
    @param flags: Arguments to L{os.access}.

    @rtype: C{list}
    @param: A list of the full paths to files found, in the
    order in which they were found.
    """
    result = []
    exts = filter(None, os.environ.get('PATHEXT', '').split(os.pathsep))
    for p in os.environ['PATH'].split(os.pathsep):
        p = os.path.join(p, name)
        if os.access(p, flags):
            result.append(p)
        for e in exts:
            pext = p + e
            if os.access(pext, flags):
                result.append(pext)
    return result

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: line-by-line output from a subprocess

2005-05-23 Thread Jp Calderone
On 23 May 2005 13:22:04 -0700, Simon Percivall [EMAIL PROTECTED] wrote:
Okay, so the reason what you're trying to do doesn't work is that the
readahead buffer used by the file iterator is 8192 bytes, which clearly
might be too much. It also might be because the output from the
application you're running is buffered, so you might have to do
something about that as well.

Anyway, if the output from the child application is unbuffered, writing
a generator like this would work:

def iterread(fobj):
    stdout = fobj.stdout.read(1) # or what you like
    data = ""
    while stdout:
        data += stdout
        while "\n" in data:
            line, data = data.split("\n", 1)
            yield line
        stdout = fobj.stdout.read(1)
    if data:
        yield data


Or, doing the same thing, but with less code:

def iterread(fobj):
    return iter(fobj.readline, '')

Haven't tried this on subprocess's pipes, but I assume they behave much the 
same way other file objects do (at least in this regard).

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: first release of PyPy

2005-05-21 Thread Jp Calderone
On 21 May 2005 17:57:17 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Christian Tismer [EMAIL PROTECTED] writes:
 Type inference works fine for our implementation of Python,
 but it is in fact very limited for full-blown Python programs.
 Yoou cannot do much more than to try to generate effective code
 for the current situation that you see. But that's most often
 quite fine.

Type inference (or static type declarations) is one part of compiling
dynamic languages but I think its importance is overblown in these
Python compiler threads.  There's lots of compiled Lisp code out there
that's completely dynamic, with every operation dispatching on the
type tags in the Lisp objects.  Yes, the code runs slower than when
the compiler knows the type in advance, but it's still much faster
than interpreted code.

I'd expect one of the worst bottlenecks in Python is the multiple
levels of dictionary lookup needed when you say a.x().
 [snip]

  Do you have profiler data in support of this?  Suggesting optimizations, 
especially ones which require semantic changes to existing behavior, without 
actually knowing that they'll speed things up, or even that they are targeted 
at bottleneck code, is kind of a waste of time.

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: buffer_info error

2005-05-20 Thread Jp Calderone
On 20 May 2005 13:18:33 -0700, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
i am filling in a packet with source and destination address and using
the buffer_info call to pass on the address to an underlying low level
call.

The src and dest are strings, but buffer_info expects an array. How do
i deal with this?

  What's the low-level call you're invoking?

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Twisted an several CPUs

2005-05-19 Thread Jp Calderone
On Thu, 19 May 2005 17:22:31 +0200, Thomas Guettler [EMAIL PROTECTED] wrote:
Hi,

Out of sheer curiosity:

Does Twisted scale if the server has several CPUs?


  No more than any other single-process Python application (no less, either).  
Unless you run multiple processes...

As far as I know twisted uses one interpreter. This
means a prefork server modul might be better to
server database driven web-apps.

  Why does it mean that?  Your database is already running in a separate 
process, right?  So there's SMP exploitation right there, regardless of whether 
your Python process is running with Twisted or anything else.


Has anyone experience high load and twisted?


  Distributing load across multiple machines scales better than distributing it 
over multiple CPUs in a single machine.  If you have serious scalability 
requirements, SMP is a minor step in the wrong direction (unless you're talking 
about something like 128-way SMP on a supercomputer :)

  Plus, any solution that works across multiple machines is likely to be 
trivially adaptable to work across multiple CPUs on a single machine, so when 
your desktop has a 128-way cell processor in it, you'll still be able to take 
advantage of it :)

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Twisted an several CPUs

2005-05-19 Thread Jp Calderone
On 19 May 2005 17:01:11 -0700, Paul Rubin http://phr.cx@nospam.invalid 
wrote:
Jp Calderone [EMAIL PROTECTED] writes:
   Distributing load across multiple machines scales better than
 distributing it over multiple CPUs in a single machine.  If you have
 serious scalability requirements, SMP is a minor step in the wrong
 direction (unless you're talking about something like 128-way SMP on
 a supercomputer :)

See PoSH:

  http://poshmodule.sourceforge.net/

The performance gain from multiple CPU's and shared memory is quite real.
I've been wanting for quite a long time to hack up a web server that
uses this stuff.

  performance gain != scaling

  That said, PoSH is kind of an interesting idea.  However, I prefer to share 
data between processes, instead of PyObject*'s: it performs better, wastes less 
space, incurs less complexity in the IPC mechanism, and interoperates with 
non-Python tools.

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: wxTimer problem

2005-05-13 Thread Jp Calderone
On Fri, 13 May 2005 14:57:26 +0800, Austin [EMAIL PROTECTED] wrote:
I wrote a GUI program on windows. (python  wxPython)
One function is to refresh the data from the COM Object continously.
In the beginning, I used the thread.start_new_thread(xxx,())
But no matter how i try, it will cause the win32com error.

After that, i use the wx.Timer to do the refresh function.
It works fine, but i find one problem.
I think timer should be independent, just like thread, but wxTimer doesn't.

1. Does python have timer function( not included by thread)?
2. About the wxTimer, does any parameter to let it be independent?


  What does independent mean?

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: doc tags?

2005-05-13 Thread Jp Calderone
On Fri, 13 May 2005 06:44:46 -0700, Robert Kern [EMAIL PROTECTED] wrote:
Larry Bates wrote:
 In python they are called decorators, but I've never had a
 need to use them myself, but then I'm a little old fashioned.

Decorators only work on function and method definitions. I don't think
that's what Wolfram is referring to.

  Decorator syntax may only work there.  Decorators are a more general concept 
orthogonal to the '@' syntax, which can be applied to any kind of object.

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: stop a thread safely

2005-05-13 Thread Jp Calderone
On Fri, 13 May 2005 16:47:34 +0200, Zunbeltz Izaola [EMAIL PROTECTED] wrote:
On Fri, 13 May 2005 09:10:13 -0400, Peter Hansen wrote:


 How did you intend to stop the thread in a manner which might be unsafe?
 (Hint, unless you're doing something unusual, you can't.)


I have a threaded object (Mythread). It checks if want_thread
variable is True to return. The problem is that this object
executes a function that does tcp communication


  You cannot exit a thread except by exiting the entire process or by having 
the function it is running terminate (either by returning or raising an 
exception).

  Instead of using threads for network communication, you may want to look into 
these:

http://www.twistedmatrix.com/

http://python.org/doc/lib/module-asyncore.html

http://www.nightmare.com/medusa/

  Hope this helps,

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Solipsis: Python-powered Metaverse

2005-05-13 Thread Jp Calderone
On Sat, 14 May 2005 02:28:57 +0300, Christos TZOTZIOY Georgiou [EMAIL 
PROTECTED] wrote:
On Wed, 11 May 2005 22:48:31 -0400, rumours say that Terry Reedy
[EMAIL PROTECTED] might have written:

  and what if both computers
 wanted to participate on the port 6000 fun?

Recently, I had one family member use my purchased account to logon to and
play an online action game, which sends a constant stream of update info.
Then, curious what would happen, I logged on, from a different computer but
through the same router, with a temporary guest account.  Somewhat to my
surprise, it worked -- without touching the computer (XP) or router
settings.  And kept working the whole weekend.  So there is a way to tag
update packets so they can be reliably separated into two streams (and vice
versa).  Solipsis should be able to do the same.

In your case, it's the internal address that originated the connection--
so the router can distinguish the streams:

(int1, port1)-(ext_host, port0)
maps to (router_ext, port2)-(ext_host, port0)

(int2, port3)-(ext_host, port0)
maps to (router_ext, port4)-(ext_host, port0)

Every TCP/UDP packet includes srcip, srcport, dstip, dstport, so an
internal dictionary makes the translations to and fro (the router
changes the srcip, srcport whenever a packet passes through).  The
internal computer knows not that the router mangled the packets, and the
external computer knows nothing about the internal computer address.

However, if both of your internal computers listen to port 6000 for
example, there is no easy way for the router to know to which one it
should forward an *incoming* request for connection to port 6000-- all
it knows is that some external computer connected to its external
interface ip address at some specific port.  In this case, *typically*
you would map port (router_ext, 6000) to (int1, 6000) and (router_ext,
6001) to (int2, 6000).  The internal computers would both think that
some computer is doing a connect at their port 6000.

  Combinations of marginally smart routers and non-broken protocols can deal 
with even this situation automatically, without user configuration or 
intervention.

  Traffic to (router_ext, N) from (ext, M) can be switched to (int1, N) or 
(int2, N) by simply noticing which of int1 or int2 has recently sent traffic 
from N to (ext, M).  If both have, you can still generally figure it out, by 
rewriting the source port automatically on the router.  Many routers support 
the former of these, and a sizable portion support the latter.

  Of course, if your protocol includes port numbers in the application areas of 
the packets it sends, the router usually can't properly rewrite the packet, so 
things go pear shaped.

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Replacing open builtin

2005-05-11 Thread Jp Calderone
On 11 May 2005 05:56:04 -0700, [EMAIL PROTECTED] wrote:
Sorry, should maybe have used __import__ as an example.
Let's say I grab import, store the reference within the Isolate class
and then redirect the builtin import to a function in the Isolate class
which only allows certain modules to be imported -eg not sys.   Would
this be secure?


  Probably not.  For example:

 (1).__class__.__bases__[0].__subclasses__()[-1]('/dev/null')
<open file '/dev/null', mode 'r' at 0xb7df53c8>

  Security through subtracting features usually ends up leaving some holes 
around (because there's just that *one* more thing you missed).  What the holes 
are depends on the details of the implementation, but they pretty much always 
exist.  Making a reference-restricted Python interpreter is a large challenge: 
you either have to spend a huge amount of effort taking things out of CPython 
(months and months of development time, at least), or write a new interpreter 
from scratch.

  Older versions of Python thought they had this licked, see the rexec module 
for the attempt that is no longer maintained.

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Interactive shell for demonstration purposes

2005-05-11 Thread Jp Calderone
On Wed, 11 May 2005 13:55:38 +0100, Tim Golden [EMAIL PROTECTED] wrote:
[Brian Quinlan]
|
| Can anyone recommend a Python interactive shell for use in
| presentations?
|
| Ideal characteristics (priority order):
| o configurable font size
| o full screen mode
| o readline support
| o syntax coloring
|
| I've tried ipython but, since it runs inside a console
| window, and the
| console window has a limited number of selectable fonts, it doesn't
| work terribly well.
|
| I've seen presentations using some sort of PyGame implemented shell.
| Does anyone have an information on that?

Warning: untested response.

Since I remembered that Michael Hudson had done such
a thing once, I Googled and came across this:

http://codespeak.net/py/current/doc/execnet.html

which might get you started.


  execnet lets you easily run arbitrary Python code on remote hosts.  This 
doesn't seem too closely related to the OP's question, which seems directed at 
finding a pretty GUI to display the results of such execution.

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Put a file on an ftp server over ssl

2005-05-10 Thread Jp Calderone
On 10 May 2005 11:32:39 -0700, Daniel Santa Cruz [EMAIL PROTECTED] wrote:
I looked briefly at this option, but it seems to me that I would have
to learn a whole architecture just to put a file on an ftp server.
Seems like a bit much, don't you think?

  (In the absence of any quoted material, assuming this is directed at my 
suggestion to use Twisted)

  Depends how important getting the file onto the server is to you :)  You 
might  want to consider future tasks which could be aided by the knowledge of 
Twisted and treat this as a good opportunity to get some exposure.  Or maybe 
it's really not worth it to you.  Only you can decide, I suppose.

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Solipsis: Python-powered Metaverse

2005-05-10 Thread Jp Calderone
On Tue, 10 May 2005 19:27:00 -0700, Paul McNett [EMAIL PROTECTED] wrote:
Joseph Garvin wrote:
 I was looking at this earlier today because I was curious how they were
 going to handle performance concerns (both due to Python and bandwidth).
 I'm having trouble understanding all of the details -- what is the
 significance of the use of a torus for the world space? Does this
 somehow help in the computation of the convex hull?

I'm having trouble understanding how they are going to make this work
server-less, pure peer-to-peer. I just tried it out and it was cool: I
could move around in the world, get near other avatars and say hey.
Only, I couldn't hear what they said back to me because I don't have UDP
port 6000 open on my firewall and forwarding to my laptop (and don't
want to do that either).

It is a shame: peer to peer has the potential to enable really cool,
imaginative multiuser worlds, but how many people are connecting
directly to the internet these days?


  This can be dealt with, fortunately.  I plan to poke the Solipsis folks about 
it the moment I get a couple adjacent free minutes. :)  It should be an easy 
change to make, since their protocol code is orthogonal to their transport code 
(and defeating NATs and firewalls is a transport issue).

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Database backend?

2005-05-08 Thread Jp Calderone
On Sun, 08 May 2005 20:09:29 +0200, Mikkel Høgh [EMAIL PROTECTED] wrote:
I am in the progress of laying the groundwork for a small application I
intend to make, and I'd like some expert advice, since this is the first
larger project I've gotten myself into.
First problem is which backend to use for data storage. The application I am
trying to create is a small presentation-program with the sole purpose of
displaying lyrics for songs at smaller concerts.
The system is required to have a database of some kind for storing the
lyrics, and this should most definitely be searchable.
Naturally, I need a backend for this of some sort. I've been thinking of
either XML, or full-blown MySQL.
XML is very versatile, but I wonder if it is fast enough to handle lyrics
for, say, 1000 songs - and then I would need to come up with some kind of
indexing-algorithm. MySQL on the other hand, requires a lot of effort and
adds to the complication of the installation.
Honestly, I have a hard time making my mind up. Also, there might be other
possibilities.
Feedback will be appreciated.

1000 songs is not very many.  I doubt any solution you come up with will 
suffer from performance problems.  Your best bet will probably be to use the 
simplest possible data structure, perhaps pickled on disk, perhaps just written 
to lightly structured text files, and don't worry about using a database for 
now.  If you find yourself needing to handle a couple orders of magnitude more 
songs, at that point you may wish to revisit your data storage solution.
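
For example (the structure and file names are made up):

import pickle

songs = {'Some Song Title': 'the lyrics go here...'}

f = open('lyrics.pickle', 'wb')
pickle.dump(songs, f)
f.close()

f = open('lyrics.pickle', 'rb')
songs = pickle.load(f)
f.close()

# "Searching" a thousand songs is then just a loop:
hits = [title for title, text in songs.items() if 'love' in text.lower()]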

Jp
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: A question about inheritance

2005-05-08 Thread Jp Calderone
On 8 May 2005 12:07:58 -0700, [EMAIL PROTECTED] wrote:
Hello I have a question about inheritance in Python. I'd like to do
something like this:

 class cl1:
  def __init__(self):
   self.a = 1

 class cl2(cl1):
  def __init__(self):
   self.b = 2

But in such a way that cl2 instances have atributes 'b' AND 'a'.
Obviously, this is not the way of doing it, because the __init__
definition in cl2 overrides cl1's __init__.

Is there a 'pythonic' way of achieving this?

class cl2(cl1):
    def __init__(self):
        cl1.__init__(self)
        self.b = 2

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Clueless with cPickle

2005-05-08 Thread Jp Calderone
On Sun, 08 May 2005 21:27:35 GMT, les [EMAIL PROTECTED] wrote:
I am working on a homework assignment and trying to use cPickle to store
the answers from questor.py I believe I have the syntax correct but am not
sure if I am placing everything where it needs to be.  Any help would be
greatly appreciated.  When I attempt to run what I have I end up with the
following:

Traceback (most recent call last):
  File "/home/les/workspace/Module 2/questor.py", line 18, in ?
f = file(questorlistfile)
NameError: name 'questorlistfile' is not defined

I thought that I had defined questorlistfile on the 4th line below

# define some constants for future use

import cPickle as p
#import pickle as p

questorfile = 'questor.data' # the name of the file where we will 
 ^^^

  Note this variable name

 # store the object

questorlist = []

# Write to the file
f = file(questorfile, 'w')
p.dump(questorlist, f) # dump the object to a file
f.close()

del questorlist # remove the shoplist

# Read back from the storage
f = file(questorlistfile)
  ^^^

  Compare it with this variable name.

 [snip]
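  In other words, read back from the same file you wrote to.  A minimal corrected 
round trip, using the names from the post above, would be roughly:

import cPickle as p

questorfile = 'questor.data'
questorlist = []

# Write to the file
f = file(questorfile, 'w')
p.dump(questorlist, f)
f.close()

# Read back from the same file
f = file(questorfile)
questorlist = p.load(f)
f.close()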

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __brace__ (PEP?)

2005-05-08 Thread Jp Calderone
On Sun, 8 May 2005 16:29:03 -0700, James Stroud [EMAIL PROTECTED] wrote:
Hello All,

If __call__ allows anobject() and __getitem__ allows anobject[arange], why
not have __brace__ (or some other, better name) for anobject{something}.
Such braces might be useful for cross-sectioning nested data structures:


  See Numeric Python, which uses index slices in multiple dimensions to satisfy 
this use case.

  While a new syntactic construct could be introduced to provide this feature, 
the "minimal core, rich library" school of language design suggests that doing 
so would not be a great idea.
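
  For instance, a multi-dimensional slice already expresses this kind of 
cross-section (a sketch in NumPy syntax, which is essentially the same as 
Numeric's; assumes the package is installed):

import numpy

a = numpy.reshape(numpy.arange(24), (2, 3, 4))

print a[0, :, 1:3]    # first plane, every row, columns 1 and 2
print a[..., -1]      # last column of every row of every plane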

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to calc the difference between two datetimes?

2005-05-08 Thread Jp Calderone
On Sun, 8 May 2005 19:06:31 -0600, Stewart Midwinter [EMAIL PROTECTED] wrote:
After an hour of research, I'm more confused than ever. I don't know
if I should use the time module, or the eGenix datetime module. Here's
what I want to do:  I want to calculate the time difference (in
seconds would be okay, or minutes), between two date-time strings.

so: something like this:
time0 = "2005-05-06 23:03:44"
time1 = "2005-05-07 03:03:44"

timedelta = someFunction(time0,time1)
print 'time difference is %s seconds' % timedelta.

Which function should I use?

  The builtin datetime module:

>>> import datetime
>>> x = datetime.datetime(2005, 5, 6, 23, 3, 44)
>>> y = datetime.datetime(2005, 5, 8, 3, 3, 44)
>>> x - y
datetime.timedelta(-2, 72000)
>>> y - x
datetime.timedelta(1, 14400)
>>>

  Parsing the time string is left as an exercise for the reader (hint: see the 
time module's strptime function).
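
  For example (a sketch built on time.strptime; the format string is an assumption 
about the input):

import time, datetime

def parse(s):
    # build a datetime from the first six fields of the struct_time
    return datetime.datetime(*time.strptime(s, "%Y-%m-%d %H:%M:%S")[:6])

time0 = parse("2005-05-06 23:03:44")
time1 = parse("2005-05-07 03:03:44")

delta = time1 - time0
seconds = delta.days * 24 * 60 * 60 + delta.seconds
print 'time difference is %s seconds' % seconds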

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Sockets

2005-05-05 Thread Jp Calderone
On Thu, 05 May 2005 17:11:08 +0800, Dan [EMAIL PROTECTED] wrote:


I have a problem and I don't quite know how to implement the solution.

I'll have a server application that will listen on a tcp port and make
many similtaneous connections to remote clients.  From time to time,
I'll need to write a small amount of data on one of those sockets.  A
notification to write to one of the sockets will come from another
program/process.

I think that the best way to send the notification to this server
application is via a udp message.  Alternatively, I could use tcp, but
I don't think I'll need the extra complexity for what I want to do.
(Other suggestions welcome.)

  UDP is actually more complex than TCP.  I recommend sticking with TCP until 
you have a better reason (eg, you need to communicate simultaneously with tens 
of thousands of clients).


The server application will multiplex the connections using 'select',
so much of the time it will be blocked on 'select'.

My problem is how to also listen on a udp port while the process is
blocked by 'select'.  Should I run a separate thread?  And if so can I
share the socket connection across the two threads?  (Thread 1 will be
accepting client connections, thread 2 will we writing data to it.)
Or should I simply let 'select' time out after some period?

I'm a bit lost as to how to do this, I hope someone can put me on the
right track.  Any solution that I use should be applicable on Linux
and Windows platforms.

  I recommend using Twisted.  Here's a sample application that accepts 
connections, waits on messages from each client, and then transmits another 
message to all other clients in response to receiving one (untested code):

from twisted.internet import reactor, protocol
from twisted.protocols import basic

# Define the protocol with which we will handle all 
# incoming connections
class ClientMessageProtocol(basic.LineReceiver):

    # When a connection is established, append this 
    # instance to the factory's list of connected 
    # clients, so it can send messages to this client 
    # when necessary.
    def connectionMade(self):
        self.factory.clients.append(self)

    # Likewise, remove the instance when the connection 
    # is lost.
    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    # Every time a whole line is received, tell the factory 
    # about it.
    def lineReceived(self, line):
        self.factory.lineReceived(self, line)

class ClientMessageFactory(protocol.ServerFactory):
    # Indicate the protocol to be instantiated for each
    # connection to this factory.
    protocol = ClientMessageProtocol

    # At startup, make an empty clients list.
    def startFactory(self):
        self.clients = []

    # Whenever a client tells us it received a line, send
    # a short message to every other connected client.
    def lineReceived(self, client, line):
        for cl in self.clients:
            if cl is not client:
                cl.sendLine("%s sent me a message: %r" % (client, line))

# Start the server on TCP port 54321
reactor.listenTCP(54321, ClientMessageFactory())

# Run the main event loop
reactor.run()
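
  As a quick way to poke at it (not part of the example itself), connect two 
clients and watch one receive the other's broadcast:

import socket

a = socket.socket()
a.connect(('localhost', 54321))
b = socket.socket()
b.connect(('localhost', 54321))

a.sendall('hello\r\n')   # LineReceiver's default delimiter is \r\n
print b.recv(4096)       # b sees the broadcast triggered by a

a.close()
b.close()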

  Learn more about Twisted on its website: http://www.twistedmatrix.com/

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Bandwith Shaping

2005-05-04 Thread Jp Calderone
On 4 May 2005 10:48:41 -0700, flamesrock [EMAIL PROTECTED] wrote:
Just curious - is there an easy way to shape bandwith in python. If I
wanted to have a max download speed for instance


  Twisted includes an HTB implementation.

  http://twistedmatrix.com/documents/current/api/twisted.protocols.htb.html

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: BitKeeper for Python?

2005-05-02 Thread Jp Calderone
On 02 May 2005 09:30:05 GMT, Nick Craig-Wood [EMAIL PROTECTED] wrote:
John Smith [EMAIL PROTECTED] wrote:
  I am going to be working with some people on a project that is going to be
  done over the internet. I am looking for a good method of keeping everyone's
  code up to date and have everyone be able to access all the code including
  all the changes and be able to determine what parts of the code were
  changed.

  If it were opensource that would be even better.

You could try Mercurial

  http://www.selenic.com/mercurial/

which aims at being a true bk replacement.  Its also written in
python.  Its being developed at the moment...


  "Being developed at the moment" is something of an understatement.  It's less 
than a month old.  Less than a month old.  Compare this to such systems as CVS 
(more than 16 years old) and I think everyone can agree that Mercurial may need 
a teensy bit more work before it is interesting to people looking for an RCS.

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pop3proxy

2005-05-02 Thread Jp Calderone
On Mon, 02 May 2005 16:05:13 +0200, BrokenClock [EMAIL PROTECTED] wrote:
Hello every body,

Here is a python newbie! I've choose it to make a pop3 proxy - I want to
filter content between a pop3 client and a pop3 server, and I have no
control on the server...
First, I wanted to do an non-filtering, just logging, mono-thread proxy
to make some test..
Based on the RFC 1939 (http://www.faqs.org/rfcs/rfc1939.html) (in
particular the item 5), I was expecting some output, but did not get it...
In fact, I expected to see the message, but I did not see it... only the
  command to retrieve it. On the other hand, the message is well receipt
in my mail client.
An other point is that all this seems to work well with short messages,
but difficulties appear when messages go bigger. I think it is due to
the parameter of recv, but I don't know how to set it.
So here are my two questions:
1-why do not I see the message in my output, and how could I do to see
and handle it?
2-how should I set the parameter of recv to handle big messages?

  Regardless of how large you set the parameter, you will still have to issue 
multiple recv() calls.  TCP is stream oriented, not packet oriented.  There are 
no guarantees about how many bytes recv() will return.  You may get the entire 
chunk of bytes written by your peer, or you may get a small fraction of them, 
or you may get two byte chunks written separately merged together.

  The only way to be sure you've received everything is to include logic in 
your application which can examine the bytes and determine based on the 
protocol whether or not more bytes should be coming.

  For this reason, you cannot simply recv() on one socket and then send() on 
the next, alternating.  You need some actual POP3 logic to determine whether a 
recv() returned all the expected bytes, or whether you need to recv() on the 
same socket again.
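
  In blocking terms that means a little loop along these lines (is_complete() is 
hypothetical -- it stands in for whatever POP3 framing logic you write):

def recv_response(sock):
    buf = ''
    while not is_complete(buf):   # is_complete() is hypothetical: it must
        chunk = sock.recv(4096)   # encode the POP3 framing rules
        if not chunk:
            break                 # connection closed
        buf = buf + chunk
    return buf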

  Another approach is to use non-blocking sockets, select(), and issue a recv() 
on either socket whenever select() indicates there are bytes to be read.  This 
is the approach taken by various libraries, including asyncore in the stdlib, 
as well as by Twisted.  As a bonus, Twisted (http://www.twistedmatrix.com) 
also includes client and server POP3 protocol implementations.


Any help would be very appreciated.

Cheers,

Brokenclock

Here is the code:
 
import socket

LOCALHOST = '192.168.31.202'   # This is me
REMOTEHOST = 'pop.fr.oleane.com'   # The remote host
PORT = 110 # pop3 port
while 1:
   SocketServer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   SocketServer.bind((LOCALHOST, PORT))
   SocketServer.listen(1)
   Connexion2Client, ClientAddress = SocketServer.accept()
   print '#', ClientAddress,' connected'
   ClientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   ClientSocket.connect((REMOTEHOST, PORT))
   print '#', REMOTEHOST, ' connected'
   while 1:
   DataFromServer = ClientSocket.recv(5896230)
   print REMOTEHOST,' ',DataFromServer

   Connexion2Client.send(DataFromServer)

  You can't just call send() like this.  It may not write all the bytes you 
pass it.  It will return the number of bytes written, and you must then call 
send again with the remaining bytes (and again test the return value, etc).

   DataFromClient = Connexion2Client.recv(5896230)
   print ClientAddress,' ',DataFromClient
   if DataFromClient == "QUIT": print 'QUIT received from client'

  Since you may receive Q, U, I, and T in separate recv() results, the 
above will not work reliably.  You will need to buffer bytes and test for the 
presence of QUIT in the buffer.  Additionally, you need to maintain information 
about the state of connection, since QUIT is perfectly legitimate as part of 
a message body and should not be interpreted the same as a QUIT outside a 
message body.

   ClientSocket.send(DataFromClient)

  Same comments about send() as above.

   if not DataFromClient: break
   ClientSocket.close()
   Connexion2Client.close()
 eof


 [snip output]
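
  For what it's worth, here is the bare shape of the select()-based version (a 
sketch only -- it still needs the POP3-aware buffering discussed above, and it 
uses sendall() so every chunk is written completely):

import select

def relay(client_sock, server_sock):
    socks = [client_sock, server_sock]
    peer = {client_sock: server_sock, server_sock: client_sock}
    while 1:
        readable, _, _ = select.select(socks, [], [])
        for s in readable:
            data = s.recv(4096)
            if not data:
                # one side closed the connection; shut both down
                client_sock.close()
                server_sock.close()
                return
            # sendall() keeps calling send() until every byte is written
            peer[s].sendall(data)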

  Jp
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiple threads in a GUI app (wxPython), communication between worker thread and app?

2005-05-01 Thread Jp Calderone
On 01 May 2005 10:09:56 -0700, Paul Rubin http://phr.cx@nospam.invalid wrote:
fo [EMAIL PROTECTED] writes:
How would I get the worker thread to open a GUI window in the main GUI
thread? After that GUI window is open, how can I send and recv messages
from/to the GUI window?
First of all the favorite Pythonic way to communicate between threads
is with synchronized queues--see the Queue module.  Have the worker
thread put stuff on a queue and have the main GUI thread read from it.
 Yea, this is a good way.  Also, it is essentially what you describe below 
:)  The Queues are just created and drained by the GUI library instead of the 
user's application code.
Secondly, I don't know about wxPython, but in tkinter you have to
resort to a kludge in order for the gui thread to handle gui events
and also notice stuff on a queue.  There's a tkinter command to run
some function after a specified time (say 50 msec).  So you'd set that
timeout to check the queue and restart the timer, which means the gui
would check 20x a second for updates from the worker threads.  When it
got such an update, it would create a new window or whatever.
 You can do better than this.  Tkinter has an after_idle function, which lets you 
post an event to the Tkinter thread _immediately_.  This is basically the Queue 
put operation.
It could be that wxPython has a cleaner way of doing this, or you
might have to do something similar.
 It has essentially the same thing, PostEvent().
Python thread support seems to have been something of an afterthought 
and there's a lot of weirdness like this to deal with.
 I'm not sure what you see as weird about this.
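
  A bare-bones sketch of the Queue-plus-after() pattern being described (Tkinter; 
the worker's work is made up):

import threading, time, Queue, Tkinter

q = Queue.Queue()

def worker():
    # pretend to do something slow, then report back
    time.sleep(2)
    q.put("worker finished")

root = Tkinter.Tk()
label = Tkinter.Label(root, text="waiting...")
label.pack()

def poll_queue():
    try:
        msg = q.get_nowait()
    except Queue.Empty:
        pass
    else:
        label.config(text=msg)
    root.after(50, poll_queue)   # check again in 50 ms

threading.Thread(target=worker).start()
root.after(50, poll_queue)
root.mainloop()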
 Jp
--
http://mail.python.org/mailman/listinfo/python-list


Re: BitKeeper for Python?

2005-05-01 Thread Jp Calderone
On Sun, 01 May 2005 20:16:40 GMT, John Smith [EMAIL PROTECTED] wrote:
I am going to be working with some people on a project that is going to be
done over the internet. I am looking for a good method of keeping everyone's
code up to date and have everyone be able to access all the code including
all the changes and be able to determine what parts of the code were
changed.
If it were opensource that would be even better.
 Have you checked out this page?
 http://www.google.com/search?hl=en&lr=&q=version+control+system&btnG=Search
 Jp
--
http://mail.python.org/mailman/listinfo/python-list


Re: Whats the best Python Book for me

2005-05-01 Thread Jp Calderone
On Sun, 01 May 2005 20:18:39 GMT, John Smith [EMAIL PROTECTED] wrote:
I already know C/C++ programming and now wish to learn Python to do
scripting for some games that are coming out. What book would you recommend.
I am thinking Teach Your Self Python in 24 Hours is probably the best place
to start...
 http://www.norvig.com/21-days.html is a nice site.
 For people who already have programming experience, I find 
http://docs.python.org/ref/ref.html and http://docs.python.org/lib/lib.html to be 
rather appropriate.  Another reference, 
http://www.ibiblio.org/obp/thinkCSpy/, while geared more towards newer 
programmers, is also a nice introduction to the Python language.
 Jp
--
http://mail.python.org/mailman/listinfo/python-list


Re: Debugging threaded Python code

2005-05-01 Thread Jp Calderone
On Mon, 02 May 2005 12:52:30 +1000, Derek Thomson [EMAIL PROTECTED] wrote:
Hi,

I frequently have to debug some fairly tricky Python multi-threaded
code, and I need some help using the debugger to help me diagnose the
problems when they occur. Yes, I know that the best option with threaded
code that is problematic is to rewrite it, but that's not really an
option that I have (right now).

What would really help me is to be able to see a list of all the
currently active threads, and the Python stack trace for each. At least
then I could see where the failures are happening - deadlocks in
particular. I have to spend a lot of time right now just to reach that
point.

I spent some time trying to achieve this with the Python debugger and
couldn't. This has been bugging me for quite a while now, and I'm
probably just missing the obvious as usual. Is there some simple way I
can do this?


  I saw an awesome demo of Komodo's debugger at Linux World this year.  I still 
haven't had an excuse to mess around with its support of threads myself, but it 
seemed to handle them quite nicely.

http://aspn.activestate.com/ASPN/Downloads/Komodo/RemoteDebugging

  Jp

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reusing object methods?

2005-04-29 Thread Jp Calderone
On Fri, 29 Apr 2005 19:45:54 +0200, Ivan Voras [EMAIL PROTECTED] wrote:
Can this be done: (this example doesn't work)

class A:
    def a_lengthy_method(self, params):
        # do some work depending only on data in self and params

class B:
    def __init__(self):
        self.a_lengthy_method = A.a_lengthy_method
        # I know that data in self of class B object is compatible
        # with that of class A object

I want to 'reuse' the same method from class A in class B, and without 
introducing a common ancestor for both of them - is this possible in an 
elegant, straightforward way?
 This is what sub-classing is for.  If it makes you feel better, call it mixing 
in instead.  You can share this method between two classes without inheritance (by 
defining a free function and then ...), but using a common base class really is just the 
right way to do it.
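
  A tiny sketch of the mixin spelling (names invented for illustration):

class LengthyMethodMixin:
    def a_lengthy_method(self, params):
        # relies only on attributes both classes are known to provide
        return self.data + params

class A(LengthyMethodMixin):
    def __init__(self):
        self.data = 1

class B(LengthyMethodMixin):
    def __init__(self):
        self.data = 2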
 Jp
--
http://mail.python.org/mailman/listinfo/python-list

