Re: High water Memory fragmentation still a thing?

2014-10-08 Thread bryanjugglercryptographer
Chris Angelico wrote:
 Sure, and that's all well and good. But what I just cited there *is* a
 shipping product. That's a live server that runs a game that I'm admin
 of. So it's possible to do without the resource safety net of periodic
 restarts.

Nice that the non-Python server you administer stayed up for 88 weeks, but that 
doesn't really have anything to do with the issue here. The answer to the OP's 
title question is yes: high-water memory fragmentation is a real thing, on 
most platforms, including CPython.

The cited article tells of Celery hitting the problem, and the working solution 
was to recycle the Celery worker processes. That doesn't mean telling a human 
administrator to regularly restart the server. It's programmatic, and it's a 
reasonably simple, well-established design pattern.

  For an example see the Apache HTTP daemon, particularly the classic 
  pre-forking server. There's a configuration parameter, 
  MaxRequestsPerChild, that sets how many requests a process should answer 
  before terminating.
 
 That assumes that requests can be handled equally by any server
 process - and more so, that there are such things as discrete
 requests. That's true of HTTP, but not of everything. 

It's true of HTTP and many other protocols because they were designed to 
support robust operation even as individual components may fail.

 And even with
 HTTP, if you do long polls [1] then clients might remain connected
 for arbitrary lengths of time; either you have to cut them off when
 you terminate the server process (hopefully that isn't too often, or
 you lose the benefit of long polling), or you retain processes for
 much longer.

If you look at actual long-polling protocols, you'll see that the server 
occasionally closing connections is no problem at all. They're actually 
designed to be robust even against connections that drop without proper 
shutdown.

 Restarting isn't necessary. It's like rebooting a computer: people get
 into the habit of doing it, because it fixes problems, but all that
 means is that it allows you to get sloppy with resource management.

CPython, and for that matter malloc/free, have known problems in resource 
management, such as the fragmentation issue noted here. There are more. Try a 
Google site search for "memory leak" on http://bugs.python.org/. Do you think 
the last memory leak is fixed now?

From what I've seen, planned process replacement is the primary technique for 
supporting long-lived, mission-critical services in the face of resource management 
flaws. Upon process termination the OS recovers the resources. I love CPython, 
but on this point I trust the Linux kernel much more.

-- 
--Bryan
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: High water Memory fragmentation still a thing?

2014-10-06 Thread bryanjugglercryptographer
Chris Angelico wrote:
 This is why a lot of long-duration processes are built to be restarted
 periodically. It's not strictly necessary, but it can be the most
 effective way of solving a problem. I tend to ignore that, though, and
 let my processes just keep on running... for 88 wk 4d 23:56:27 so far,
 on one of those processes. It's consuming less than half a gig of
 virtual memory, quarter gig resident, and it's been doing a fair bit [...]

A shipping product has to meet a higher standard. Planned process mortality is 
a reasonably simple strategy for building robust services from tools that have 
flaws in resource management. It assumes only that the operating system 
reliably reclaims resources from dead processes. 

The usual pattern is to have one or two parent processes that keep several 
worker processes running but do not themselves directly serve clients. The 
workers do the heavy lifting and are programmed to eventually die, letting 
younger workers take over.

For an example see the Apache HTTP daemon, particularly the classic pre-forking 
server. There's a configuration parameter, MaxRequestsPerChild, that sets how 
many requests a process should answer before terminating.
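The pattern is easy to sketch in Python (a toy illustration of my own, not Apache's actual code, and it needs a POSIX fork()): a parent loop forks a worker, the worker exits after answering a fixed number of requests, and the parent replaces it.

```python
import os

MAX_REQUESTS_PER_CHILD = 3   # plays the role of Apache's parameter

def serve_one_request(n):
    # Stand-in for real request handling.
    return "handled %d" % n

def worker():
    # Answer a bounded number of requests, then die. The OS reclaims
    # every byte, however fragmented the heap had become.
    for n in range(MAX_REQUESTS_PER_CHILD):
        serve_one_request(n)
    os._exit(0)

def run_pool(generations):
    # The parent never serves clients itself; it just keeps replacing
    # dead workers. (A real pre-forking server keeps several children
    # alive at once; this toy runs them one at a time.)
    for _ in range(generations):
        pid = os.fork()
        if pid == 0:
            worker()          # child never returns
        os.waitpid(pid, 0)    # reap, then loop to spawn a replacement
    return generations
```

The only assumption the scheme makes is the one stated above: the kernel reliably reclaims resources from dead processes.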

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: High water Memory fragmentation still a thing?

2014-10-06 Thread bryanjugglercryptographer
dieter wrote:
 As you see from the description, memory compaction presents a heavy burden
 for all extension writers.

Particularly because many CPython extensions are actually interfaces to 
pre-existing libraries. To leverage the system's facilities CPython has to 
follow the system's conventions, which memory compaction would break.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Prime number generator

2013-07-30 Thread bryanjugglercryptographer
Chris Angelico wrote:
 Bas wrote:
  Still trying to figure out your algorithm ...
 
 It's pretty simple. (That's a bad start, I know!) Like the Sieve of
 Eratosthenes, it locates prime numbers, then deems every multiple of
 them to be composite. Unlike the classic sieve, it does the "deem"
 part in parallel. Instead of marking all the multiples of 2 first,
 then picking 3 and marking all the multiples of 3, then 5, etc,
 this function records the fact that it's up to (say) 42 in marking
 multiples of 2, and then when it comes to check if 43 is prime or not,
 it moves to the next multiple of 2. This requires memory to store the
 previously-known primes, similarly to other methods, but needs no
 multiplication or division.

Knuth points to the method, using a priority queue, in exercise 15 of section 
5.2.3 of /Sorting and Searching/, and credits it to B. A. Chartres.
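For the curious, the priority-queue method can be sketched compactly with the standard heapq module (my sketch, not Chris's actual code): the heap holds, for each known prime p, the smallest multiple of p not yet examined, so checking a candidate needs only comparisons and additions, no multiplication or division.

```python
import heapq

def primes(limit):
    """Generate the primes <= limit via a priority queue of multiples."""
    heap = []   # entries: [next unmarked multiple of p, p]
    for n in range(2, limit + 1):
        if not heap or heap[0][0] > n:
            # No pending multiple equals n, so n is prime. Its next
            # multiple to mark is n+n, reachable by addition alone.
            heapq.heappush(heap, [n + n, n])
            yield n
        else:
            # n is composite: advance every entry sitting at n.
            while heap[0][0] == n:
                m, p = heap[0]
                heapq.heapreplace(heap, [m + p, p])
```

As in Chris's description, memory grows with the number of primes found so far, but each step only pops and re-pushes heap entries.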

-- 
-Bryan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Best python web framework to build university/academic website

2013-07-30 Thread bryanjugglercryptographer
b.kris...@gmail.com wrote:

 I got a chance to build an university website, within very short period of 
 time.
 I know web2py, little bit of Django, so please suggest me the best to build 
 rapidly.

Web2py rocks like nothing else for getting up fast. If you already know it, 
problem solved.

That said, Django has more available production-quality (free) utility apps. 
You might want to check whether someone else has already done your work 
for you.

I'm leaning toward Django because it's ahead of Web2py in Python 3 support.

-- 
--Bryan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory usage per top 10x usage per heapy

2012-09-27 Thread bryanjugglercryptographer
MrsEntity wrote:
 Based on heapy, a db based solution would be serious overkill.

I've embraced overkill and my life is better for it. Don't confuse overkill 
with cost. Overkill is your friend.

The facts of the case: You need to save some derived strings for each of 2M 
input lines. Even half the input runs over the 2GB RAM in your (virtual) 
machine. You're using Ubuntu 12.04 in Virtualbox on Win7/64, Python 2.7/64.

That screams sqlite3. It's overkill, in a good way. It's already there for 
the importing.
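A minimal sketch of what I mean (table and column names invented for illustration; the in-memory database keeps the demo self-contained):

```python
import sqlite3

# ":memory:" keeps the demo self-contained; point it at a file and the
# data spills to disk instead of eating the 2GB of RAM.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lines (id INTEGER PRIMARY KEY, derived TEXT)")

with conn:  # one transaction; committing per row would be slow
    conn.executemany(
        "INSERT INTO lines VALUES (?, ?)",
        ((i, "derived-%d" % i) for i in range(1000)))  # stand-in for 2M lines

row = conn.execute("SELECT derived FROM lines WHERE id = 42").fetchone()
```

Lookups come back through ordinary indexed queries, and RAM usage stays flat no matter how many lines go in.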

Other approaches? You could try to keep everything in RAM, but use less. Tim 
Chase pointed out the memory-efficiency of named tuples. You could save some 
more by switching to Win7/32, Python 2.7/32; VirtualBox makes trying such 
alternatives quick and easy.

Or you could add memory. Compared to good old 32-bit, 64-bit operation consumes 
significantly more memory and supports vastly more memory. There's a bit of a 
mis-match in a 64-bit system with just 2GB of RAM. I know, sounds weird, just 
two billion bytes of RAM. I'll rephrase: just ten dollars worth of RAM. Less if 
you buy it where I do.

I don't know why the memory profiling tools are misleading you. I can think of 
plausible explanations, but they'd just be guesses. There's nothing all that 
surprising in running out of RAM, given what you've explained. A couple K per 
line is easy to burn. 

-Bryan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to send a var to stdin of an external software

2008-03-14 Thread bryanjugglercryptographer
Floris Bruynooghe wrote:
 Benjamin Watine wrote:
  Could you give me more information / examples about the two solutions
  you've proposed (thread or asynchronous I/O) ?

 The source code of the subprocess module shows how to do it with
 select IIRC.  Look at the implementation of the communicate() method.

And here's a thread example, based on Benjamin's code:

import subprocess
import thread

def readtobox(pipe, box):
    box.append(pipe.read())

cat = subprocess.Popen('cat', shell=True, stdin=subprocess.PIPE,
stdout=subprocess.PIPE)

myVar = str(range(100)) # arbitrary test data.

box = []
thread.start_new_thread(readtobox, (cat.stdout, box))
cat.stdin.write(myVar)
cat.stdin.close()
cat.wait()
myNewVar = box[0]

assert myNewVar == myVar
print len(myNewVar), "bytes piped around."


--
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to send a var to stdin of an external software

2008-03-14 Thread bryanjugglercryptographer
I wrote:
 And here's a thread example, based on Benjamin's code:
[...]

Doh! Race condition. Make that:

import subprocess
import thread
import Queue

def readtoq(pipe, q):
    q.put(pipe.read())

cat = subprocess.Popen('cat', shell=True, stdin=subprocess.PIPE,
stdout=subprocess.PIPE)

myVar = str(range(100)) # arbitrary test data.

q = Queue.Queue()
thread.start_new_thread(readtoq, (cat.stdout, q))
cat.stdin.write(myVar)
cat.stdin.close()
cat.wait()
myNewVar = q.get()

assert myNewVar == myVar
print len(myNewVar), "bytes piped around."


--
--Bryan



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: subprocess.Popen pipeline bug?

2008-03-13 Thread bryanjugglercryptographer
Marko Rauhamaa wrote:
 This tiny program hangs:

 
 #!/usr/bin/env python
 import subprocess
 a = subprocess.Popen('cat',shell = True,stdin = subprocess.PIPE,
  stdout = subprocess.PIPE)
 b = subprocess.Popen('cat /dev/null',shell = True,stdin = a.stdout)
 a.stdin.close()
 b.wait() # hangs
 a.wait() # never reached
 

To make it work, add close_fds=True in the Popen that creates b.

 It shouldn't, should it?

Not sure. I think what's happening is that the second cat subprocess
never gets EOF on its stdin, because there are still processes with
an open file descriptor for the other end of the pipe.

The Python program closes a.stdin, and let's suppose that's file
descriptor 4. That's not enough, because the subshell that ran cat and
the cat process itself inherited the open file descriptor 4 when they
forked off.

It looks like Popen is smart enough to close the extraneous
descriptors for pipes it created in the same Popen call, but that
one was created in a previous call and passed in.
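Putting the fix into Marko's program (my sketch of the change described above; everything else is his code):

```python
#!/usr/bin/env python
import subprocess

a = subprocess.Popen('cat', shell=True, stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)
# close_fds=True stops b's shell and its cat from inheriting the write
# end of a's stdin pipe, so a's cat sees EOF when we close a.stdin.
b = subprocess.Popen('cat /dev/null', shell=True, stdin=a.stdout,
                     close_fds=True)
a.stdin.close()
b.wait()  # no longer hangs
a.wait()
```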


--
--Bryan
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Saving parameters between Python applications?

2007-09-17 Thread bryanjugglercryptographer
On Sep 17, 6:39 am, Laurent Pointal
 May use simple file in known place:
 $HOME/.myprefs
 $HOME/.conf/myprefs

 Or host specific configuration API:
 WindowsRegistry HKEY_CURRENT_USER\Software\MySociety\MyApp\myprefs

 See os.getenv, and _winreg Windows specific module.
 See also standard ConfigParser module


Also, os.path offers expanduser(). The following is reasonably
portable:

import os

user_home_dir = os.path.expanduser("~")


--
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: BCD List to HEX List

2006-08-03 Thread bryanjugglercryptographer
John Machin wrote:
 [EMAIL PROTECTED] wrote:
  "For each nibble n of x" means to take each 4 bit piece of the BCD
  integer as a value from zero to sixteen (though only 0 through 9
  will appear), from most significant to least significant.
 The OP's input, unvaryingly through the whole thread, even surviving to
 his Javacard implementation of add() etc, is a list/array of decimal
 digits (0 <= value <= 9). Extracting a nibble is so simple that
 mentioning a subroutine might make the gentle reader wonder whether
 there was something deeper that they had missed.

Yes, it's simple; that was the point. The most complex routine I
assumed is integer addition, and it's not really hard. I'll
present an example below.

  Adding
  integers and shifting binary integers is well-defined
  terminology.

 Yes, but it's the *representation* of those integers that's been the
 problem throughout.

Right. To solve that problem, I give the high-level algorithm and
deal with the representation in the shift and add procedures.

  I already posted the three-line algorithm. It
  appeared immediately under the phrase "To turn BCD x to binary
  integer y", and that is what it is intended to achieve.

 Oh, that algorithm. The good ol' "num = num * base + digit" is an
 algorithm???

You lost me. The algorithm I presented didn't use a multiply
operator. It could have, and of course it would still be an
algorithm.

 The problem with that is that the OP has always maintained that he has
 no facility for handling a binary integer (num) longer than 16 bits
 -- no 32-bit long, no bignum package that didn't need long, ...

No problem. Here's an example of an add procedure he might use in
C. It adds modestly-large integers, as base-256 big-endian
sequences of bytes. It doesn't need an int any larger than 8 bits.
Untested:

typedef unsigned char uint8;
#define SIZEOF_BIGINT 16

uint8 add(uint8* result, const uint8* a, const uint8* b)
/* Set result to a+b, returning carry out of MSB. */
{
    uint8 carry = 0;
    unsigned int i = SIZEOF_BIGINT;
    while (i > 0) {
        --i;
        result[i] = (a[i] + b[i] + carry) & 0xFF;
        carry = carry ? result[i] <= a[i] : result[i] < a[i];
    }
    return carry;
}

 Where I come from, a normal binary integer is base 2. It can be
 broken up into chunks of any size greater than 1 bit, but practically
 according to the wordsize of the CPU: 8, 16, 32, 64, ... bits. Since
 when is base 256 "normal" and in what sense of "normal"?

All the popular CPU's address storage in byte. In C all variable
sizes are in units of char/unsigned char, and unsigned char must
hold zero through 255.

 The OP maintained the line that he has no facility for handling a
 base-256 number longer than 2 base-256 digits.

So he'll have to build what's needed. That's why I showed the
problem broken down to shifts and adds; they're easy to build.

 The dialogue between Dennis and the OP wasn't the epitome of clarity:

Well, I found Dennis clear.

[...]

 I was merely wondering whether you did in fact
 have a method of converting from base b1 (e.g. 10) to base b2 (e.g. 16)
 without assembling the number in some much larger base b3 (e.g. 256).

I'm not sure what that means.

-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Windows vs. Linux

2006-08-02 Thread bryanjugglercryptographer

Sybren Stuvel wrote:
 John Salerno enlightened us with:
  But of course I still agree with you that in either case it's not a
  judgment you can fairly make 30 years after the fact.

 I don't see Microsoft changing it the next 30 years either... Apple
 moved from \r to \n as EOL character. I'm sure the folks at mickeysoft
 are smart enough to change from \ to /.

They dis-allow '/' in filenames, and many Microsoft products now
respect '/' as an alternate to '\'.

From a WinXP command prompt:

C:\>cd /windows/system32

C:\WINDOWS\system32>


For a Windows vs. Linux thread, this one has been remarkably
rant-free.

-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Using Python for my web site

2006-08-01 Thread bryanjugglercryptographer

northband wrote:
 Hi, I am interested in re-writing my website in Python vs PHP but have
 a few questions. Here are my specs, please advise as to which
 configuration would be best:

 1.Dell Poweredge Server, w/IIS, currently Windows but considering
 FreeBSD
 2. Site consists of result pages for auctions and items for sale (100
 per page)
 3. MySQL (Dell Poweredge w/AMD) database server connected to my web
 server
 4. Traffic, 30 million page loads/month

 I am trying to have the fastest page loads, averaging 100 items per
 result page.  I have read about using Apache's mod_python so I could
 use PSP.  Any help or tips are appreciated.

So if I'm reading this correctly: you have a system that's
working, and the main purpose of the re-write is faster page
responses to users. Do I have that right?

Have you determined where the time is spent? For some web apps,
speed is all about the database, and if that's true of your system
then changing scripting language isn't going to provide the
performance boost you seek.

Another interesting question is how response time changes with
increasing load. Of course with the real website, those 30 million
page loads per month are not uniformly distributed. What is your
peak rate? Is rush-hour speed mostly what motivates the project?


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: BCD List to HEX List

2006-08-01 Thread bryanjugglercryptographer

John Machin wrote:
 [EMAIL PROTECTED] wrote:
  John Machin wrote:
   [EMAIL PROTECTED] wrote:
To turn BCD x to binary integer y,
   
  set y to zero
  for each nibble n of x:
    y = (((y shifted left 2) + y) shifted left 1) + n
  
   Yeah yeah yeah
   i.e. y = y * 10 + n
   he's been shown that already.
  
   Problem is that the OP needs an 8-decimal-digit (32-bits) answer, but
   steadfastly maintains that he doesn't have access to long (32-bit)
   arithmetic in his C compiler!!!
 
  And he doesn't need one. He might need the algorithms for shift and
  add.

 I hate to impose this enormous burden on you but you may wish to read
 the whole thread. He was given those algorithms.

Quite some longwinded code and arguing about platforms in the rest
of the thread. My version assumes three subroutines: extracting
nibbles, shifting, and adding, Those are pretty simple, so I asked
if he needed them rather than presenting them. Assuming we have
them, the algorithm is three lines long. Don't know why people
have to make such a big deal of a BCD converter.

 He then upped the
 ante to 24 decimal digits and moved the goalposts to some chip running
 a cut-down version of Java ...

He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: fast pythonic algorithm question

2006-08-01 Thread bryanjugglercryptographer

Guyon Morée wrote:
 i have a big list of tuples like this:

 [ (host, port, protocol, startime, endtime), .. ] etc

 now i have another big(ger) list of tuples like this:

 [(src_host, src_port, dest_src, dest_port, protocol, time), ... ] etc

 now i need to find all the items in the second list where either
 src_host/src_port or dest_host/dest_port matches, protocol matches and
 time is between starttime and end time.

 After trynig some stuff out i actually found dictionary lookup pretty
 fast. Putting the first list in a dict like this:

 dict[(host,port,protocol)] = (starttime, endtime)

That only works if each (host,port,protocol) can appear with only
one (starttime, endtime) in your first big list. Do the variable
names mean what they look like? There's nothing unusual about
connecting to the same host and port with the same protocol, at
multiple times.

You might want your dict to associate (host,port,protocol) with a
list, or a set, of tuples of the form (starttime, endtime). If the
lists can be long, there are fancier methods for keeping the set
of intervals and searching them for contained times or overlapping
intervals. Google up "interval tree" for more.
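The dict-of-lists idea, before reaching for interval trees, looks like this (a sketch of mine; all the sample values are invented for illustration):

```python
index = {}   # (host, port, protocol) -> list of (starttime, endtime)

def add_interval(host, port, protocol, start, end):
    # Several intervals may share one key; append, don't overwrite.
    index.setdefault((host, port, protocol), []).append((start, end))

def matches(src_host, src_port, dst_host, dst_port, protocol, time):
    # A record matches if either endpoint is indexed under this
    # protocol and the time falls inside one of its intervals.
    for key in ((src_host, src_port, protocol),
                (dst_host, dst_port, protocol)):
        for start, end in index.get(key, ()):
            if start <= time <= end:
                return True
    return False

add_interval("10.0.0.1", 80, "tcp", 100, 200)   # invented sample data
```

The dict lookup stays O(1); only the scan over a key's interval list grows, which is where the fancier structures pay off.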


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: BCD List to HEX List

2006-08-01 Thread bryanjugglercryptographer

John Machin wrote:
 [EMAIL PROTECTED] wrote:

 My version assumes three subroutines: extracting
  nibbles, shifting, and adding, Those are pretty simple, so I asked
  if he needed them rather than presenting them.
  Assuming we have
  them, the algorithm is three lines long.

 Perhaps you could enlighten us by publishing (a) the spec for each of
 the get_nibble(s), shift, and add subroutines (b) the three-line
 algorithm (c) what the algorithm is intended to achieve ...

"For each nibble n of x" means to take each 4 bit piece of the BCD 
integer as a value from zero to sixteen (though only 0 through 9 
will appear), from most significant to least significant. "Adding 
integers" and "shifting binary integers" is well-defined 
terminology. I already posted the three-line algorithm. It 
appeared immediately under the phrase "To turn BCD x to binary 
integer y", and that is what it is intended to achieve.

  He took a while to state the problem, but was clear from the start
  that he had lists of digits rather than an integer datatype.

 Yes, input was a list [prototyping a byte array] of decimal digits. The
 OUTPUT was also a list of something. A few messages later, it became
 clear that the output desired was a list of hexadecimal digits. Until
 he revealed that the input was up to 24 decimal digits,  I was pursuing
 the notion that a solution involving converting decimal to binary (in a
 32-bit long) then to hexadecimal was the way to go.

 What is apparently needed is an algorithm for converting a large
 number from a representation of one base-10 digit per storage unit to
 one of a base-16  digit per storage unit,  when the size of the number
 exceeds the size (8, 16, 32, etc bits) of the registers available.

I read his "Yes I realized that after writing it." response to 
Dennis Lee Bieber to mean Bieber was correct and what he wanted 
was to go from BCD to a normal binary integer, which is base 256.

The point of posting the simple high-level version of the
algorithm was to show a general form that works regardless of
particular languages, register sizes and storage considerations.
Those matters can effect the details of how one shifts a binary
integer left one bit, but shifting is not complicated in any
plausible case.

 Is that what you have?

I'm sorry my post so confused, and possibly offended, you.


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: BCD List to HEX List

2006-07-31 Thread bryanjugglercryptographer

Philippe Martin wrote:
 Yes, I came here for the algorithm question, not the code result.

To turn BCD x to binary integer y,

  set y to zero
  for each nibble n of x:
    y = (((y shifted left 2) + y) shifted left 1) + n

Do you need instruction on extracting nibbles, and shifting and
adding integers?

A problem this small and simple does not call for a prototype.
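Here is the same algorithm in Python, with the shift-and-add spelled out (the list-of-digits input matches what the OP described):

```python
def bcd_to_int(nibbles):
    """Convert BCD digits (most significant first) to a binary integer."""
    y = 0
    for n in nibbles:
        # y = y*10 + n, using shifts and adds only:
        # (y << 2) + y is 5y; shifting that left once gives 10y.
        y = (((y << 2) + y) << 1) + n
    return y
```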


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: BCD List to HEX List

2006-07-31 Thread bryanjugglercryptographer

John Machin wrote:
 [EMAIL PROTECTED] wrote:
  Philippe Martin wrote:
   Yes, I came here for the algorithm question, not the code result.
 
  To turn BCD x to binary integer y,
 
set y to zero
for each nibble n of x:
     y = (((y shifted left 2) + y) shifted left 1) + n

 Yeah yeah yeah
 i.e. y = y * 10 + n
 he's been shown that already.

 Problem is that the OP needs an 8-decimal-digit (32-bits) answer, but
 steadfastly maintains that he doesn't have access to long (32-bit)
 arithmetic in his C compiler!!!

And he doesn't need one. He might need the algorithms for shift and
add.


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to make python socket server work with the app.MainLoop() in wxpython?

2006-07-30 Thread bryanjugglercryptographer

Philippe Martin wrote:
 Philippe Martin wrote:
  You need to have you server in a separate thread.
 PS:

 http://wiki.wxpython.org/index.cgi/LongRunningTasks


And here's an important bit from the wxWindows doc:

  For communication between secondary threads and the main thread,
  you may use wxEvtHandler::AddPendingEvent or its short version
  wxPostEvent. These functions have thread safe implementation
  [...]
  http://www.wxwindows.org/manuals/2.6.3/wx_wxthreadoverview.html

Calling various wxWindows functions from threads other than the
one that runs the GUI can cause a crash. Use only those that the
authoritative documentation states to be thread-safe, such as
wxPostEvent. The Wiki page that Philippe cited says that
wxCallAfter uses wxPostEvent internally, so it should also be
thread-safe. I still wouldn't use it; internals are subject to
change.


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Client/Server Question

2006-07-29 Thread bryanjugglercryptographer

[EMAIL PROTECTED] wrote:
 My server.py looks like this

 -CODE--
 #!/usr/bin/env python
 import socket
 import sys
 import os

 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
 host = ''
 port = 2000

 s.bind((host,port))
 s.listen(1)
 conn, addr = s.accept()
 print 'client is at', addr

 while True:
   data = conn.recv(100)
   if (data == 'MaxSim'):
       print 'MaxiSim'
       os.system('notepad')
   elif (data == 'Driving Sim'):
       print 'Driving Sim'
       os.system('explorer')
   elif (data == 'SHUTDOWN'):
       print 'Shutting down...'
       os.system('shutdown -s')
       conn.close()
       break
 ---CODE END-

 I am running this above program on a windows machine. My client is a
 Linux box. What I want to achieve is that server.py should follows
 instructions till I send a 'SHUTDOWN' command upon which it should shut
 down.

 When I run this program and suppose send 'MaxSim' to it, it launches
 notepad.exe fine, but then after that it doesn't accept subsequent
 command.

As others noted, that's because os.system() blocks.
You have more bugs than that.

The recv() might return "MaxiSimDriving Sim". It could return 
"MaxiS" on one call, and "im" on the next. If the remote side 
closes the connection, recv() will keep returning the empty 
string, and your program will be stuck in an infinite loop.

Did you understand Faulkner's suggestion? Anyone who connects to 
TCP port 2000 can invoke "shutdown -s" (which I assume shuts down 
your host).


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: SocketServer and timers

2006-07-28 Thread bryanjugglercryptographer

Simon Forman wrote:
 alf wrote:
  Hi,
 
  I have one thread app using SocketServer and use server_forever() as a
  main loop. All works fine, but now I need certain timer checking let's
  say every 1 second something and stopping the main loop. So questions are:
  -how to stop serve_forever
  -how to implement timers
 
  thx, alf

 Do you really need a timer, or do you just want to check something
 every second or so?

 If the latter, then something like this would do it:

 from time import time

 INTERVAL = 1.0

 RUN = True

 while RUN:

 # Get a time in the future.
 T = time() + INTERVAL

 # Serve requests until then.
 while time() < T:
 server.handle_request()
 # Check whatever.
 if check():
 # Do something, for example, stop running.
 RUN = False

That alone does not work. If server.handle_request() blocks,
you don't get to the check(). You need some kind of timeout
in handle_request().
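Newer versions of the stdlib server support exactly that: set a timeout attribute on the server and handle_request() gives up after that many seconds, calling handle_timeout() and returning. A sketch (module name spelled socketserver as in Python 3; the handler and check() are stand-ins of mine):

```python
import socketserver  # spelled "SocketServer" on Python 2

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(self.request.recv(100))

def check():
    # Stand-in for the real once-a-second condition; True means stop.
    return True

server = socketserver.TCPServer(("127.0.0.1", 0), EchoHandler)
server.timeout = 1.0   # handle_request() gives up after ~1 second

RUN = True
while RUN:
    # Waits at most server.timeout for a client; on timeout it calls
    # the (default no-op) handle_timeout() and returns.
    server.handle_request()
    if check():
        RUN = False
server.server_close()
```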


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threads vs Processes

2006-07-28 Thread bryanjugglercryptographer

sturlamolden wrote:
 A noteable exception is a toy OS from a manufacturer in Redmond,
 Washington. It does not do COW fork. It does not even fork.

 To make a server system scale well on Windows you need to use threads,
 not processes.

Here's one to think about: if you have a bunch of threads running,
and you fork, should the child process be born running all the
threads? Neither answer is very attractive. It's a matter of which
will probably do the least damage in most cases (and the answer
the popular threading systems choose is 'no'; the child process
runs only the thread that called fork).

MS-Windows is more thread-oriented than *nix, and it avoids this
particular problem by not using fork() to create new processes.

 That is why the global interpreter lock sucks so badly
 on Windows.

It sucks about the same on Windows and *nix: hardly at all on
single-processors, moderately on multi-processors.


-- 
--Bryan

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to force a thread to stop

2006-07-27 Thread bryanjugglercryptographer

Carl J. Van Arsdall wrote:
 [EMAIL PROTECTED] wrote:
  Carl J. Van Arsdall wrote:
 
  [EMAIL PROTECTED] wrote:
 
  Carl J. Van Arsdall wrote:
 
  I don't get what threading and Twisted would to do for
  you. The problem you actually have is that you sometimes
  need terminate these other process running other programs.
  Use spawn, fork/exec* or maybe one of the popens.
 
 
  I have a strong need for shared memory space in a large distributed
  environment.
 
 
  Distributed shared memory is a tough trick; only a few systems simulate
  it.
 
 Yea, this I understand, maybe I chose some poor words to describe what I
 wanted.

Ya' think? Looks like you have no particular need for shared
memory in your small distributed system.

 I think this conversation is getting hairy and confusing so  I'm
 going to try and paint a better picture of what's going on.  Maybe this
 will help you understand exactly what's going on or at least what I'm
 trying to do, because I feel like we're just running in circles.
[...]

So step out of the circles already. You don't have a Python thread
problem. You don't have a process overhead problem.

[...]
 So, I have a distributed build system. [...]

Not a trivial problem, but let's not pretend we're pushing the
state of the art here.

Looks like the system you inherited already does some things
smartly: you have ssh set up so that a controller machine can
launch various build steps on a few dozen worker machines.

[...]
 The threads invoke a series
 of calls that look like

 os.system("ssh host command")

 or for more complex operations they would just spawn a process that ran
 another python script)

 os.system("ssh host script")
[...]
 Alright, so this scheme that was first put in place kind of worked.
 There were some problems, for example when someone did something like
 os.system("ssh host script") we had no good way of knowing what the
 hell happened in the script.

Yeah, that's one thing we've been telling you. The os.system()
function doesn't give you enough information nor enough control.
Use one of the alternatives we've suggested -- probably the
subprocess.Popen class.
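
A sketch of what that buys you over os.system("ssh host script"): exit
status, captured output, and the ability to kill a hung step. (The timeout
argument needs Python 3.3+; the function and its names are illustrative.)

```python
import subprocess

def run_step(argv, timeout=300):
    """Run one build step, capturing what os.system() throws away:
    the exit status and both output streams. Kills the child if it
    runs past the timeout."""
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    try:
        out, err = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        out, err = proc.communicate()
    return proc.returncode, out, err
```

Usage would look like run_step(["ssh", host, script]), and the caller can
log the return code and stderr instead of guessing what happened.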

[...]
 So, I feel like I have a couple options,

  1) try moving everything to a process oriented configuration - we think
 this would be bad, from a resource standpoint as well as it would make
 things more difficult to move to a fully distributed system later, when
 I get my army of code monkeys.

 2) Suck it up and go straight for the distributed system now - managers
 don't like this, but maybe its easier than I think its going to be, I dunno

 3) See if we can find some other way of getting the threads to terminate.

 4) Kill it and clean it up by hand or helper scripts - we don't want to
 do this either, its one of the major things we're trying to get away from.

The more you explain, the sillier that feeling looks -- that those
are your options. Focus on the problems you actually have. Track
what build steps worked as expected; log what useful information
you have about the ones that did not.

That resource standpoint thing doesn't really make sense. Those
os.system() calls launch *at least* one more process. Some
implementations will launch a process to run a shell, and the
shell will launch another process to run the named command. Even
so, efficiency on the controller machine is not a problem given
the scale you have described.


-- 
--Bryan



Re: Threads vs Processes

2006-07-27 Thread bryanjugglercryptographer

Carl J. Van Arsdall wrote:
 Ah, alright, I think I understand, so threading works well for sharing
 python objects.  Would a scenario for this be something like a a job
 queue (say Queue.Queue) for example.  This is a situation in which each
 process/thread needs access to the Queue to get the next task it must
 work on.  Does that sound right?

That's a reasonable and popular technique. I'm not sure what "this"
refers to in your question, so I can't say whether it solves the
problem you have in mind.
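
A minimal sketch of that technique (the names and the squaring "work" are
illustrative; the module is Queue in Python 2, queue in Python 3):

```python
import queue
import threading

def worker(jobs, results):
    # Each thread pulls tasks until it sees the None sentinel.
    while True:
        task = jobs.get()
        if task is None:
            break
        results.put(task * task)   # stand-in for real work

jobs, results = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    jobs.put(n)
for _ in threads:
    jobs.put(None)                 # one sentinel per worker
for t in threads:
    t.join()
```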

  Would the same apply to multiple
 threads needed access to a dictionary? list?

The Queue class is popular with threads because it already has
locking around its basic methods. You'll need to serialize your
operations when sharing most kinds of objects.
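
For instance, a dict shared among threads can be wrapped so each operation
happens under a lock, the way Queue guards its own methods internally (an
illustrative sketch, not a standard class):

```python
import threading

class LockedDict:
    """Dict wrapper that serializes access across threads."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def set(self, key, value):
        with self._lock:
            self._data[key] = value

    def pop_if_present(self, key):
        # Check-and-remove must be one atomic step, hence one lock hold.
        with self._lock:
            return self._data.pop(key, None)
```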

 Now if you are just passing ints and strings around, use processes with
 some type of IPC, does that sound right as well?

Also reasonable and popular. You can even pass many Python objects
by value using pickle, though you lose some safety.
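
For instance, a parent can pickle an object into a child's stdin and read a
pickled reply back (a modern-Python sketch; the payload and the child's
one-liner are made up for illustration):

```python
import pickle
import subprocess
import sys

payload = {"task": "build", "hosts": ["alpha", "beta"]}  # made-up data

# The child unpickles the request and pickles a reply to stdout.
child_src = ("import pickle, sys;"
             "req = pickle.load(sys.stdin.buffer);"
             "pickle.dump(sorted(req['hosts']), sys.stdout.buffer)")

proc = subprocess.run([sys.executable, "-c", child_src],
                      input=pickle.dumps(payload),
                      stdout=subprocess.PIPE)
reply = pickle.loads(proc.stdout)
```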

  Or does the term
 shared memory mean something more low-level like some bits that don't
 necessarily mean anything to python but might mean something to your
 application?

Shared memory means the same memory appears in multiple processes,
possibly at different address ranges. What any of them writes to
the memory, they can all read. The standard Python distribution
now offers shared memory via the mmap module, but lacks
cross-process locks.
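
A Unix-only sketch of exactly that: an anonymous mmap created before fork()
is the same memory in parent and child, but the eight bytes below are just
bytes, not a shared Python object:

```python
import mmap
import os
import struct

buf = mmap.mmap(-1, 8)     # anonymous mapping, shared with children
pid = os.fork()
if pid == 0:
    # Child writes into the shared page...
    struct.pack_into("q", buf, 0, 42)
    os._exit(0)
os.waitpid(pid, 0)
# ...and the parent reads the same memory.
value = struct.unpack_from("q", buf, 0)[0]
```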

Python doesn't support allocating objects in shared memory, and
doing so would be difficult. That's what the POSH project is
about, but it looks stuck in alpha.


-- 
--Bryan



Re: Threads vs Processes

2006-07-27 Thread bryanjugglercryptographer

Carl J. Van Arsdall wrote:
[...]
 I actually do use pickle (not for this, but for other things), could you
 elaborate on the safety issue?

From http://docs.python.org/lib/node63.html :

Warning: The pickle module is not intended to be secure
against erroneous or maliciously constructed data. Never
unpickle data received from an untrusted or unauthenticated
source.

A corrupted pickle can crash Python. An evil pickle could probably
hijack your process.
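
The hijack risk is easy to demonstrate: a pickle can name any callable to be
invoked on unpickling. The payload below is deliberately harmless (it just
evaluates a getpid call), but nothing limits it to that:

```python
import pickle

class Evil:
    def __reduce__(self):
        # Unpickling will call eval() on a string of our choosing.
        return (eval, ("__import__('os').getpid()",))

evil_bytes = pickle.dumps(Evil())
# Nothing in evil_bytes says "Evil"; loads() just runs the payload.
result = pickle.loads(evil_bytes)
```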


-- 
--Bryan



Re: Threads vs Processes

2006-07-27 Thread bryanjugglercryptographer
mark wrote:
 The debate should not be about threads vs processes, it should be
 about threads vs events.

We are so lucky as to have both debates.

 Dr. John Ousterhout (creator of Tcl,
 Professor of Comp Sci at UC Berkeley, etc), started a famous debate
 about this 10 years ago with the following simple presentation.

 http://home.pacbell.net/ouster/threads.pdf

The Ousterhout school finds multiple lines of execution
unmanageable, while the Tanenbaum school finds asynchronous I/O
unmanageable.

What's so hard about single-line-of-control (SLOC) event-driven
programming? You can't call anything that might block. You have to
initiate the operation, store all the state you'll need in order
to pick up where you left off, then return all the way back to the
event dispatcher.
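
A toy sketch of that bookkeeping (all names illustrative): the handler's
"local variables" have to live on an object between callbacks, because
control returns to the dispatcher after every event.

```python
class GreetingHandler:
    """Event-style handler: progress lives in self.state, not on the stack."""
    def __init__(self):
        self.state = "want_name"
        self.name = None
        self.out = []

    def on_line(self, line):
        # The dispatcher calls this once per arriving line; between
        # calls we must remember exactly where we left off.
        if self.state == "want_name":
            self.name = line
            self.state = "want_greeting"
        elif self.state == "want_greeting":
            self.out.append("%s, %s!" % (line, self.name))
            self.state = "want_name"

h = GreetingHandler()
for line in ["World", "Hello"]:
    h.on_line(line)    # a blocking version would just be two reads
```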

 That sentiment has largely been ignored and thread usage dominates but,
 if you have been programming for as long as I have, and have used both
 thread based architectures AND event/reactor/callback based
 architectures, then that simple presentation above should ring very
 true. Problem is, young people merely equate newer == better.

Newer? They're both old as the trees. That can't be why the whiz
kids like them. Threads and process rule because of their success.

 On large systems and over time, thread based architectures often tend
 towards chaos.

While large SLOC event-driven systems surely tend to chaos. Why?
Because they *must* be structured around where blocking operations
can happen, and that is not the structure anyone would choose for
clarity, maintainability and general chaos avoidance.

Even the simplest of modular structures, the procedure, gets
broken. Whether you can encapsulate a sequence of operations in a
procedure depends upon whether it might need to do an operation
that could block.

Going farther, consider writing a class supporting overriding of
some method. Easy; we Pythoneers do it all the time; that's what
O.O. inheritance is all about. Now what if the subclass's version
of the method needs to look up external data, and thus might
block? How does a method override arrange for the call chain to
return all the way back to the event loop, and then pick up
again with the same call chain when the I/O comes in?

 I have seen a few thread based systems where the
 programmers become so frustrated with subtle timing issues etc, and they
 eventually overlay so many mutexes etc, that the implementation becomes
 single threaded in practice anyhow(!), and very inefficient.

While we simply do not see systems as complex as modern DBMS's
written in the SLOC event-driven style.

 BTW, I am fairly new to python but I have seen that the python Twisted
 framework is a good example of the event/reactor design alternative to
 threads. See

 http://twistedmatrix.com/projects/core/documentation/howto/async.html .

And consequently, to use Twisted you rewrite all your code as
those 'deferred' things.


-- 
--Bryan



Re: How to force a thread to stop

2006-07-27 Thread bryanjugglercryptographer

Paul Rubin wrote:
 Actually I don't understand the need for SSH.

Who are you and what have you done with the real Paul Rubin?

 This is traffic over a
 LAN, right?  Is all of the LAN traffic encrypted?  That's unusual; SSH
 is normally used to secure connections over the internet, but the
 local network is usually trusted.  Hopefully it's not wireless.

I think not running telnet and rsh daemons is a good policy anyway.


-- 
--Bryan



Re: SocketServer and timers

2006-07-27 Thread bryanjugglercryptographer

alf wrote:
 I have one thread app using SocketServer and use server_forever() as a
 main loop. All works fine, but now I need certain timer checking let's
 say every 1 second something and stopping the main loop. So questions are:
   -how to stop serve_forever

Override serve_forever() and replace "while 1:" with something like
"while self.continue_flag:".

   -how to implement timers

You could override get_request(), and use select.select() to wait
for either the socket to become readable (so the accept() won't
block) or a timeout to expire. If you are not already, use the
threading or forking mixin so that a blocked request handler
won't stop everything.
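
With the current socketserver module you can get both effects without
calling select() yourself, via the server's timeout attribute and its
handle_timeout() hook; a sketch (class and method names are illustrative):

```python
import socketserver

class TimedServer(socketserver.ThreadingTCPServer):
    """Serve loop with a stop flag and a periodic timer callback."""
    continue_flag = True
    timeout = 1.0     # handle_request() gives up after this long

    def serve_until_stopped(self):
        while self.continue_flag:
            # Returns on a request, or after `timeout` seconds idle,
            # in which case handle_timeout() below has been called.
            self.handle_request()

    def handle_timeout(self):
        self.on_timer()

    def on_timer(self):
        pass  # the every-second check goes here
```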


-- 
--Bryan



Re: How to force a thread to stop

2006-07-26 Thread bryanjugglercryptographer

Carl J. Van Arsdall wrote:
 [EMAIL PROTECTED] wrote:
  Carl J. Van Arsdall wrote:
 
  I don't get what threading and Twisted would do for
  you. The problem you actually have is that you sometimes
  need to terminate these other processes running other programs.
  Use spawn, fork/exec* or maybe one of the popens.
 
 I have a strong need for shared memory space in a large distributed
 environment.

Distributed shared memory is a tough trick; only a few systems simulate
it.

  How does spawn, fork/exec allow me to meet that need?

I have no idea why you think threads or fork/exec will give you
distributed shared memory.

 I'll look into it, but I was under the impression having shared memory
 in this situation would be pretty hairy.  For example, I could fork of a
 50 child processes, but then I would have to setup some kind of
 communication mechanism between them where the server builds up a queue
 of requests from child processes and then services them in a FIFO
 fashion, does that sound about right?

That much is easy. What it has to do with what you say you require
remains a mystery.


  Threads have little to do with what you say you need.
 
  [...]
 
  I feel like this is something we've established multiple times.  Yes, we
  want the thread to kill itself.  Alright, now that we agree on that,
  what is the best way to do that.
 
 
  Wrong. In your examples, you want to kill other processes. You
  can't run external programs such as ssh as Python threads. Ending
  a Python thread has essentially nothing to do with it.
 
 There's more going on than ssh here.  Since I want to run multiple
 processes to multiple devices at one time and still have mass shared
 memory I need to use threads.

No, you would need to use something that implements shared
memory across multiple devices. Threads are multiple lines of
execution in the same address space.

  There's a mass distributed system that
 needs to be controlled, that's the problem I'm trying to solve.  You can
 think of each ssh as a lengthy IO process that each gets its own
 device.  I use the threads to allow me to do IO to multiple devices at
 once, ssh just happens to be the IO. The combination of threads and ssh
 allowed us to have a *primitive* distributed system (and it works too,
 so I *can* run external programs in python threads).

No, you showed launching it from a Python thread using os.system().
It's not running in the thread; it's running in a separate process.

 I didn't say is
 was the best or the correct solution, but it works and its what I was
 handed when I was thrown into this project.  I'm hoping in fifteen years
 or when I get an army of monkeys to fix it, it will change.  I'm not
 worried about killing processes, that's easy, I could kill all the sshs
 or whatever else I want without batting an eye.

After launching it with os.system()? Can you show the code?


-- 
--Bryan



Re: why is this not working? (nested scope question)

2006-07-26 Thread bryanjugglercryptographer

[EMAIL PROTECTED] wrote:
[...]
 def f1():
     x = 88
     f2()
 def f2():
     print 'x=', x
 f1()

 that returns an error saying that NameError: global name 'x' is not
 defined. I expected f2 to see the value of x defined in f1 since it
 is nested at runtime.

Ah, no, Python uses static scoping. Google the term for more.
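
Static (lexical) scoping means f2 sees f1's x only when f2 is *textually*
nested inside f1, not merely called from it; a sketch in modern Python:

```python
def f1():
    x = 88
    def f2():
        # f2 is written inside f1, so it closes over f1's x.
        return x
    return f2()
```

Here f1() returns 88, while a top-level f2, as in the question, raises
NameError no matter who calls it.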


-- 
--Bryan



Re: How to force a thread to stop

2006-07-26 Thread bryanjugglercryptographer

Paul Rubin wrote:
 Have you looked at POSH yet?   http://poshmodule.sf.net

Paul, have you used POSH? Does it work well? Any major
gotchas?

I looked at the paper... well, not all 200+ pages, but I checked
how they handle a couple of parts that I thought hard, and they
seem to have good ideas. I didn't find the SourceForge project
so promising. The status is alpha, the ToDo's are a little scary,
and the project looks stalled. Also it's *nix only.


-- 
--Bryan



Re: Threads vs Processes

2006-07-26 Thread bryanjugglercryptographer
Carl J. Van Arsdall wrote:
 Alright, based a on discussion on this mailing list, I've started to
 wonder, why use threads vs processes.

In many cases, you don't have a choice. If your Python program
is to run other programs, the others get their own processes.
There's no threads option on that.

If multiple lines of execution need to share Python objects,
then the standard Python distribution supports threads, while
processes would require some heroic extension. Don't confuse
sharing memory, which is now easy, with sharing Python
objects, which is hard.


 So, If I have a system that has a
 large area of shared memory, which would be better?  I've been leaning
 towards threads, I'm going to say why.

 Processes seem fairly expensive from my research so far.  Each fork
 copies the entire contents of memory into the new process.

As others have pointed out, not usually true with modern OS's.

 There's also
 a more expensive context switch between processes.  So if I have a
 system that would fork 50+ child processes my memory usage would be huge
 and I burn more cycles that I don't have to.

Again, not usually true. Modern OS's share code across
processes. There's no way to tell the size of 100
unspecified processes, but the number is nothing special.

 So threads seems faster and more efficient for this scenario.  That
 alone makes me want to stay with threads, but I get the feeling from
 people on this list that processes are better and that threads are over
 used.  I don't understand why, so can anyone shed any light on this?

Yes, someone can, and that someone might as well be you.
How long does it take to create and clean up 100 trivial
processes on your system? How about 100 threads? What
portion of your user waiting time is that?
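
Measuring that is only a few lines (numbers vary wildly by OS and machine;
multiprocessing is used here as the modern way to spawn Python processes):

```python
import multiprocessing
import threading
import time

def noop():
    pass

def time_workers(make, n=100):
    """Start and join n do-nothing workers; return elapsed seconds."""
    start = time.perf_counter()
    workers = [make(target=noop) for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print("100 threads:   %.3fs" % time_workers(threading.Thread))
    print("100 processes: %.3fs" % time_workers(multiprocessing.Process))
```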


-- 
--Bryan



Re: How to force a thread to stop

2006-07-26 Thread bryanjugglercryptographer

Gerhard Fiedler wrote:
 Carl J. Van Arsdall wrote:
  Well, I guess I'm thinking of an event driven mechanism, kinda like
  setting up signal handlers.  I don't necessarily know how it works under
  the hood, but I don't poll for a signal.  I setup a handler, when the
  signal comes, if it comes, the handler gets thrown into action.  That's
  what I'd be interesting in doing with threads.

 What you call an event handler is a routine that gets called from a message
 queue polling routine. You said a few times that you don't want that.

I think he's referring to Unix signal handlers. These really are called
asynchronously. When the signal comes in, the system pushes some
registers on the stack, calls the signal handler, and when the signal
handler returns it pops the registers off the stack and resumes
execution where it left off, more or less. If the signal comes while
the process is in certain system calls, the call returns with a value
or errno setting that indicates it was interrupted by a signal.
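
A Unix-only sketch of that interruption. One modern wrinkle: since PEP 475
(Python 3.5), an interrupted call is automatically retried unless the
handler raises, so the handler below raises to make the interruption
visible:

```python
import os
import signal

def on_alarm(signum, frame):
    # Raising here is what lets the interruption reach the blocked
    # caller; a handler that simply returns causes a retry instead.
    raise TimeoutError("interrupted by SIGALRM")

signal.signal(signal.SIGALRM, on_alarm)   # must run in the main thread
signal.alarm(1)                           # deliver SIGALRM in one second

r, w = os.pipe()
interrupted = False
try:
    os.read(r, 1)                         # blocks: nothing is ever written
except TimeoutError:
    interrupted = True
signal.alarm(0)                           # cancel any pending alarm
```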

Unix signals are an awkward low-level relic. They used to be the only
way to do non-blocking but non-polling I/O, but current systems offer
much better ways. Today the sensible things to do upon receiving a
signal are ignore it or terminate the process. My opinion, obviously.


-- 
--Bryan



Re: How to force a thread to stop

2006-07-25 Thread bryanjugglercryptographer

Carl J. Van Arsdall wrote:
[...]
 My problem with the fact that python doesn't have some type of thread
 killer is that again, the only solution involves some type of polling
 loop.

A polling loop is neither required nor helpful here.

[...]
 #Just pretend for the sake of arguement that 'op' actually means
 something and is a lengthy operation
 def func_to_thread():
   os.system('op 1')
   os.system('op 2')
   os.system('op 3')

What good do you think killing that thread would do? The
process running 'op n' has no particular binding to the thread
that called os.system(). If 'op n' hangs, it stays hung.

The problem here is that os.system doesn't give you enough
control. It doesn't have a timeout and doesn't give you a
process ID or handle to the spawned process.

Running os.system() in multiple threads strikes me as
kind of whacked. Won't they all compete to read and write
stdin/stdout simultaneously?

 #In order to make this killable with reasonable response time we have to
 organize each of our ops into a function or something equally annoying

 def op_1():
   os.system('op 1')

 def op_2():
   os.system('op 2')

 def op_3():
   os.system('op 3')

 opList = [op_1, op_2, op_3]

 def to_thread():
   for op in opList:
     checkMessageQueue()
     op()

Nonsense. If op() hangs, you never get to checkMessageQueue().

Now suppose op has a timeout. We could write

  def opcheck(thing):
      result = op(thing)
      if result == there_was_a_timeout:
          raise some_timeout_exception

How is:

  def func_to_thread():
      opcheck('op 1')
      opcheck('op 2')
      opcheck('op 3')

any less managable than your version of func_to_thread?

 So with this whole hey mr. nice thread, please die for me concept gets
 ugly quickly in complex situations and doesn't scale well at all.
 Furthermore, say you have a complex systems where users can write
 pluggable modules.  IF a module gets stuck inside of some screwed up
 loop and is unable to poll for messages there's no way to kill the
 module without killing the whole system.  Any of you guys thought of a
 way around this scenario?

Threadicide would not solve the problems you actually have, and it
tends to create other problems. What is the condition that makes
you want to kill the thread? Make the victim thread respond to that
condition itself.


-- 
--Bryan



Re: How to force a thread to stop

2006-07-25 Thread bryanjugglercryptographer

Dennis Lee Bieber wrote:
 On Mon, 24 Jul 2006 10:27:08 -0700, Carl J. Van Arsdall
  My problem with the fact that python doesn't have some type of thread
  killer is that again, the only solution involves some type of polling
  loop.  I.e. if your thread of execution can be written so that it

   And that is because the control of a thread, once started, is
 dependent upon the underlying OS...

No; it's because killing a thread from another thread is
fundamentally sloppy.

 The process of creating a thread can
 be translated into something supplied by pretty much all operating
 systems: an Amiga task, posix thread, etc.

   But ending a thread is then also dependent upon the OS -- and not
 all OSs have a way to do that that doesn't run the risk of leaking
 memory, leaving things locked, etc. until the next reboot.

No operating system has a good way to do it, at least not for
the kind of threads Python offers.


   The procedure for M$ Windows to end a task basically comes down to
 send the task a 'close window' event; if that doesn't work, escalate...
 until in the end it throw its hands up and says -- go ahead and leave
 memory in a mess, just stop running that thread.

The right procedure in MS Windows is the same as under POSIX:
let the thread terminate on its own.

  module without killing the whole system.  Any of you guys thought of a
  way around this scenario?

   Ask Bill Gates... The problem is part of the OS.

Or learn how to use threads properly. Linux is starting to get good
threading. Win32 has had it for quite a while.


-- 
--Bryan



Re: How to force a thread to stop

2006-07-25 Thread bryanjugglercryptographer

Carl J. Van Arsdall wrote:
 Unfortunately this is due to the nature of the problem I am tasked with
 solving.  I have a large computing farm, these os.system calls are often
 things like ssh that do work on locations remote from the initial python
 task.  I suppose eventually I'll end up using a framework like twisted
 but, as with many projects, I got thrown into this thing and threading
 is where we ended up.  So now there's the rush to make things work
 before we can really look at a proper solution.

I don't get what threading and Twisted would do for
you. The problem you actually have is that you sometimes
need to terminate these other processes running other programs.
Use spawn, fork/exec* or maybe one of the popens.


 Again, the problem I'm trying to solve doesn't work like this.  I've
 been working on a framework to be run across a large number of
 distributed nodes (here's where you throw out the duh, use a
 distributed technology in my face).  The thing is, I'm only writing the
 framework, the framework will work with modules, lots of them, which
 will be written by other people.  Its going to be impossible to get
 people to write hundreds of modules that constantly check for status
 messages.  So, if I want my thread to give itself up I have to tell it
 to give up.

Threads have little to do with what you say you need.

[...]
 I feel like this is something we've established multiple times.  Yes, we
 want the thread to kill itself.  Alright, now that we agree on that,
 what is the best way to do that.

Wrong. In your examples, you want to kill other processes. You
can't run external programs such as ssh as Python threads. Ending
a Python thread has essentially nothing to do with it.

 Right now people keep saying we must  send the thread a message.

Not me. I'm saying work the problem you actually have.


-- 
--Bryan



Re: httplib, threading, wx app freezing after 4 hours

2006-07-23 Thread bryanjugglercryptographer

Mark rainess wrote:
[...]
 It runs perfectly for about 4 hours, then freezes.
 I'm stuck. How do I debug this?
[...]
 Can anyone suggest techniques to help me learn what is going on.

By inspection: errcode is undefined; I expect you stripped the
example a bit too far. If it is set to something other than 200, it
looks like you loop out.

You are calling wx.CallAfter() from a different thread than the one
that runs the GUI. Is that documented to be safe? I've read that
wxPostEvent() is the call to use for this.

The next thing to try is adding enough logging to tell exactly which
statement hangs.


-- 
--Bryan



Re: How to force a thread to stop

2006-07-22 Thread bryanjugglercryptographer

Hans wrote:
 Hi all,

 Is there a way that the program that created and started a thread also stops
 it.
 (My usage is a time-out).

 E.g.

 thread = threading.Thread(target=Loop.testLoop)
 thread.start() # This thread is expected to finish within a second
 thread.join(2)# Or time.sleep(2) ?

No, Python has no threadicide method, and its absence is not an
oversight. Threads often have important business left to do, such
as releasing locks on shared data; killing them at arbitrary times
tends to leave the system in an inconsistent state.

 if thread.isAlive():
 # thread has probably encountered a problem and hangs
 # What should be here to stop thread  ??

At this point, it's too late. Try to code so your threads don't hang.

Python does let you arrange for threads to die when you want to
terminate the program, with threading's Thread.setDaemon().
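
Both points in one short sketch: a cooperative stop via threading.Event,
plus the daemon flag for program exit (setDaemon() is the 2.x spelling of
today's daemon attribute):

```python
import threading

stop = threading.Event()

def test_loop():
    # The thread notices the request and finishes on its own.
    while not stop.is_set():
        stop.wait(0.1)      # one bounded step of "work"

thread = threading.Thread(target=test_loop)
thread.daemon = True        # also dies when the whole program exits
thread.start()
stop.set()                  # ask, rather than kill
thread.join(2)
```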


-- 
--Bryan



Re: random shuffles

2006-07-21 Thread bryanjugglercryptographer

Boris Borcic wrote:
 does

 x.sort(cmp = lambda x,y : cmp(random.random(),0.5))

 pick a random shuffle of x with uniform distribution ?

 Intuitively, assuming list.sort() does a minimal number of comparisons to
 achieve the sort, I'd say the answer is yes.

You would be mistaken (except for the trivial case of 2 elements).

In m uniform, independent, random 2-way choices, there are 2**m
equally probable outcomes. We can map multiple random outcomes
to the same final output, but each output will still have a
probability of the form n/2**m, where n is an integer.

A random permutation requires that we generate outputs with
probability 1/(n!). For n > 2, we cannot reach that probability
using a limited number of 2-way choices.
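
The bias is easy to see empirically. Under CPython's sort, a fair-coin
comparison of three elements gives some orders probability near 1/4 and
others near 1/8, nowhere near the uniform 1/6 (the exact split depends on
the sort's comparison pattern, so treat the numbers as illustrative):

```python
import random
from collections import Counter
from functools import cmp_to_key

def coin_sort(seq):
    # "Shuffle" by sorting with a random comparison, as proposed.
    rand_cmp = lambda a, b: random.choice((-1, 1))
    return tuple(sorted(seq, key=cmp_to_key(rand_cmp)))

counts = Counter(coin_sort([1, 2, 3]) for _ in range(60000))
# A uniform shuffle would put every order near 10000; this won't.
print(counts.most_common())
```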


Have you ever looked at the problem of making a perfectly uniform
1-in-3 choice when the only source of randomness is a perfect random
bit generator? The algorithms terminate with probability 1, but are
non-terminators in that there is no finite number of steps in which
they must terminate.


-- 
--Bryan



Re: how to know if socket is still connected

2006-07-19 Thread bryanjugglercryptographer

Grant Edwards wrote:
 If the server has closed the connection, then a recv() on the
 socket will return an empty string ,

after returning all the data the remote side had sent, of course.

 and a send() on the
 socket will raise an exception.

Send() might, and in many cases should, raise an exception
after the remote side has closed the connection, but the behavior
is unreliable.
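
Both recv() behaviors in one self-contained sketch over loopback:

```python
import socket

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
conn.sendall(b"bye")
conn.close()                    # remote side closes after sending

first = cli.recv(1024)          # pending data is still delivered
second = cli.recv(1024)         # then the empty string signals EOF
cli.close()
srv.close()
```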


-- 
--Bryan



Re: Code run from IDLE but not via double-clicking on its *.py

2005-09-02 Thread bryanjugglercryptographer

I wrote:
 I prefer the thread solution. You can see my example
 in message [EMAIL PROTECTED].

http://groups.google.com/group/comp.lang.python/msg/ffd0159eb52c1b49
[...]

 you should send the shutdown
 across, much like you copy data across: shutdown writing on the
 other socket.

Which, incidentally, I had forgotten to do in my code. See
the follow-up for the fix.


-- 
--Bryan



Re: List of string

2005-08-18 Thread bryanjugglercryptographer

BranoZ wrote:
 132443 is a 'subsubstring' of 0134314244133 because:

For the record, that's called a subsequence.

  http://www.google.com/search?hl=en&q=subsequence
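
A subsequence test is a single left-to-right scan; the iterator idiom below
is a compact sketch of it:

```python
def is_subsequence(needle, haystack):
    """True if needle's characters appear in haystack, in order."""
    it = iter(haystack)
    # `ch in it` advances the iterator past each match, so order is kept.
    return all(ch in it for ch in needle)
```

So is_subsequence("132443", "0134314244133") holds, per the example.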


-- 
--Bryan



Re: base64.encode and decode not correct

2005-08-16 Thread bryanjugglercryptographer

Damir Hakimov wrote:

 I found a strange bug in base64.encode and decode, when I try to encode
 - decode a file 1728512 bytes lenth.
 Is somebody meet with this? I don't attach the file because it big, but
 can send to private.

I agree the file is too big, but can you show a small
Python program that does the encode and decode and
detects the problem?
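
A minimal self-contained check at the reported length would look something
like this; if the assertion holds, the problem is in the file handling
(text-mode I/O, say) rather than in base64 itself:

```python
import base64
import os

data = os.urandom(1728512)            # same length as the reported file
encoded = base64.b64encode(data)
assert base64.b64decode(encoded) == data
```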


-- 
--Bryan



Re: __del__ pattern?

2005-08-16 Thread bryanjugglercryptographer

Tom Anderson wrote:
 On Mon, 15 Aug 2005, Chris Curvey wrote:

  Is there a better pattern to follow than using a __del__ method?  I just
  need to be absolutely, positively sure of two things:

 An old hack i've seen before is to create a server socket - ie, make a
 socket and bind it to a port:

 import socket

 class SpecialClass:
   def __init__(self):
   self.sock = socket.socket()
   self.sock.bind(('', 4242))
   def __del__(self):
   self.sock.close()

 Something like that, anyway.

 Only one socket can be bound to a given port at any time, so the second
 instance of SpecialClass will get an exception from the bind call, and
 will be stillborn. This is a bit of a crufty hack, though - you end up
 with an open port on your machine for no good reason.

Much worse, it's a bug. That pattern is for programs that need to
respond at a well-known port. In this case it doesn't work; the
fact that *someone* has a certain socket open does not mean that
this particular program is running.


-- 
--Bryan



Re: __del__ pattern?

2005-08-16 Thread bryanjugglercryptographer
Chris Curvey wrote:
 I need to ensure that there is only one instance of my python class on
 my machine at a given time.  (Not within an interpreter -- that would
 just be a singleton -- but on the machine.)  These instances are
 created and destroyed, but there can be only one at a time.

 So when my class is instantiated, I create a little lock file, and I
 have a __del__ method that deletes the lock file.  Unfortunately, there
 seem to be some circumstances where my lock file is not getting
 deleted.  Then all the jobs that need that special class start
 queueing up requests, and I get phone calls in the middle of the night.

For a reasonably portable solution, leave the lock file open.
On Windows and some other systems, you cannot delete an open file,
and if the program terminates, normally or abnormally, the file
will be closed.

When the program starts, it looks for the lock file, and if
it's there, tries to delete it; if the delete fails, another
instance is probably running. It then tries to create the
lock file, leaving it open; if the create fails, you probably
lost a race with another instance. When exiting cleanly, the
program closes the file and deletes it.

If the program crashes without cleaning up, the file will still
be there, but a new instance can delete it, assuming
permissions are right.
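
A sketch of that recipe (names illustrative). The delete-while-open test is
meaningful on Windows; on Unix an open file can be deleted, so prefer fcntl
locking there, as in the Unix method mentioned below:

```python
import os

class InstanceLock:
    """One-instance-per-machine lock file, per the recipe above."""
    def __init__(self, path):
        self.path = path
        if os.path.exists(path):
            try:
                os.remove(path)      # fails if an instance holds it open
            except OSError:
                raise RuntimeError("another instance is running")
        # O_EXCL loses gracefully if we raced another starting instance.
        self.fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_RDWR)

    def release(self):
        os.close(self.fd)
        os.remove(self.path)
```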


There are neater solutions that are Unix-only or Windows-only.
See BranzoZ's post for a Unix method.

-- 
--Bryan



Re: socket setdefaulttimeout

2005-08-15 Thread bryanjugglercryptographer

Michael P. Soulier wrote:
 On 13/08/05 Bryan Olson said:

  The separate thread-or-process trick should work. Start a daemon
  thread to do the gethostbyname, and have the main thread give up
  on the check if the daemon thread doesn't report (via a lock or
  another socket) within, say, 8 seconds.

 Wouldn't an alarm be much simpler than a whole thread just for this?

You mean a Unix-specific signal? If so that would be much less
portable. As for simpler, I'd have to see your code.


-- 
--Bryan



Re: Bug in slice type

2005-08-15 Thread bryanjugglercryptographer

Michael Hudson wrote:
 Bryan Olson writes:
 In some sense; it certainly does what I intended it to do.

[...]
 I'm not going to change the behaviour.  The docs probably aren't
 especially clear, though.

The docs and the behavior contradict:

[...] these are the /start/ and /stop/ indices and the
/step/ or stride length of the slice [emphasis added].


I'm fine with your favored behavior. What do we do next to get
the doc fixed?


-- 
--Bryan
