Re: reimport module every n seconds

2011-02-18 Thread Santiago Caracol
  Don't do that.  ;-)  I suggest using exec instead.  However, I would be
  surprised if import worked faster than, say, JSON (more precisely, I
  doubt that it's enough faster to warrant this kludge).

 I'm with Aahz.  Don't do that.

 I don't know what you're doing, but I suspect an even better solution
 would be to have your program run a reconfigure thread which listens
 on a UDP socket and reads a JSON object from it.  Or, at the very least,
 which listens for a signal which kicks it to re-read some JSON from a
 disk file.  This will be more responsive when the data changes quickly
 and more efficient when it changes slowly.  As opposed to just polling
 for changes every 10 seconds.
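
(To make that concrete, here is a minimal sketch of such a reconfigure
thread; the port number and the apply_config() hook are made up:)

import json
import socket
import threading

def apply_config(config):
    # made-up hook: swap the new data into the running server here
    print "reconfigured:", config

def reconfigure_listener(port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    while True:
        datagram, _addr = sock.recvfrom(65535)  # one JSON object per datagram
        apply_config(json.loads(datagram))

listener = threading.Thread(target=reconfigure_listener)
listener.setDaemon(True)    # don't let this thread keep the process alive
listener.start()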

Somehow I guessed that I would be told not to do it. But that's my
fault: I should have been more explicit. The data is not just data. It
is a large set of Django objects I call ContentClassifiers, each of
which has methods that calculate very large regular expressions from
user input. Each also has a classify method that is applied to
messages and uses those very large regular expressions. To classify a
message, I simply apply the classify method of all ContentClassifiers,
and there are very many of them. I compile the ContentClassifiers,
which are Django objects, to pure Python objects so that I don't have
to fetch them from the database each time I need them, and so that I
can compile the large regular expressions offline. Before, in Django,
I generated and compiled each regular expression of each
ContentClassifier each time I needed it to classify a message; I
didn't find a good way to calculate and compile the regular
expressions once and store them.
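
To give an idea, the automatically generated module looks roughly like
this (a much simplified sketch; the class name and patterns here are
invented, and the real regular expressions are huge):

# data.py -- automatically generated; the expensive re.compile() calls
# happen once, at import time, instead of once per message
import re

class CompiledClassifier(object):
    def __init__(self, name, pattern):
        self.name = name
        self.regex = re.compile(pattern)   # the very large regular expression

    def classify(self, message):
        return self.regex.search(message) is not None

data = [
    CompiledClassifier("sports", r"soccer|tennis"),      # invented patterns
    CompiledClassifier("politics", r"election|senate"),
]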

I already tested the automatically generated module: before,
classifying a message took about 10 seconds; now it takes milliseconds.

My only problem is reloading the module: for the time being, I simply
restart the web server manually from time to time in order to load the
freshly compiled module.
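
What I would like is, in effect, something like the following running
inside the server (a sketch; reload() is the Python 2 builtin, and it
only helps if the module object itself is imported):

import threading
import time

import data                     # the automatically generated module

def reload_data(interval=10):
    while True:
        time.sleep(interval)
        reload(data)            # Python 2 builtin; importlib.reload() in 3.x

reloader = threading.Thread(target=reload_data)
reloader.setDaemon(True)
reloader.start()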



reimport module every n seconds

2011-02-17 Thread Santiago Caracol
Hello,

a server program of mine uses data that is compiled to a Python
module for efficiency reasons. In one of the modules of the server
program I import the data:

from data import data

As the data often changes, I would like to reimport it every n (e.g.
10) seconds.

Unfortunately, it is rather difficult, and probably not a good idea,
to manipulate the main function of the server program, as it comes
from Django. (I already asked in the Django newsgroup, but got no
answer there.) Is there a way to do the reimport in a module that is
used by the main program?
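
One detail I am aware of (sketched below, untested): reload() replaces
the module's contents in place, but a name bound with
from data import data keeps pointing at the old object, so consumers
would have to go through the module itself:

import data                    # NOT: from data import data

def current_data():
    # attribute lookup sees the reloaded contents; a from-import would not
    return data.data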

Santiago


do something every n seconds

2010-11-25 Thread Santiago Caracol
Hello,

how can I do something (e.g. check if new files are in the working
directory) every n seconds in Python?
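
What I have so far is a simple blocking loop (sketch below, with the
new-files check as an example), but I wonder whether there is a better
way:

import os
import time

def watch_directory(path=".", interval=10):
    seen = set(os.listdir(path))
    while True:
        time.sleep(interval)
        current = set(os.listdir(path))
        for name in sorted(current - seen):
            print "new file:", name     # react to the new file here
        seen = current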

Santiago


sys.setrecursionlimit() and regular expressions

2010-09-30 Thread Santiago Caracol
Hello,

in my program I use recursive functions. A recursion limit of 10 would
be by far sufficient. Yet, I also use some (not very complicated)
regular expressions, which only compile if I set the recursion limit
to 100, or so. This means, of course, an unnecessary loss of speed.

Can I use one recursion limit for my functions and another one for the
re module?
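
The best I have come up with so far is raising the limit only around
the compile calls (a sketch; the pattern here is a stand-in for my
real ones):

import re
import sys
from contextlib import contextmanager

@contextmanager
def recursion_limit(n):
    old = sys.getrecursionlimit()
    sys.setrecursionlimit(n)
    try:
        yield
    finally:
        sys.setrecursionlimit(old)   # back to the low limit for my own code

with recursion_limit(100):
    big_regex = re.compile("(a|b)" * 50)   # stand-in for the real patterns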

Santiago


Re: sys.setrecursionlimit() and regular expressions

2010-09-30 Thread Santiago Caracol

 Why do you think so? The recursion limit has no effect on the speed of your
 script. It's just a number that the interpreter checks against.

Yes, sorry. I was just about to explain that. The 'of course' in my
post was silly.

In MY program, the recursion limit is relevant for performance,
because I use constructs of the following kind:

def analyze(sentence):
    try:
        ...   # apply the recursive rules to the sentence
    except RuntimeError:
        ...   # recursion limit exceeded: skip the analysis

I.e. I try to apply certain recursive rules to a sentence. If this
doesn't lead to any results (up to recursion limit N),
then I skip the analysis.


Re: freeze function calls

2010-08-11 Thread Santiago Caracol
Peter,

thanks again for all this code. You helped me a lot.

 Didn't you say you weren't interested in the web-specific aspects?

I thought that, although my problem had to do with client-server
stuff, it wasn't really web-specific. But now I think that that was
part of my problem. I failed to see that using sessions could be the
solution.

Santiago


Re: freeze function calls

2010-08-10 Thread Santiago Caracol
 Python offers an elegant mechanism to calculate values on demand: the
 generator function:

 >>> def calculate_answers():
 ...     for i in range(100):
 ...         print "calculating answer #%d" % i
 ...         yield i * i
 ...


Thanks for pointing this out. I was aware of the yield statement.

My problem is this:

(1) The user submits a query:

+---------------------+
| What is the answer? |
+---------------------+
[Submit]

(2) The web server gives the user N answers and a button saying
"More answers":

. answer 1
. answer 2
. answer 3

[More answers]

At this stage the function that writes HTML to the user has been
called. The call must terminate, or else the user doesn't get any
HTML. This means that the call of the answer-calculating function,
whether it uses yield or not, is also terminated. So when the user
presses the "More answers" button, the calculation has to start from
the beginning.
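
(In hindsight -- see the follow-up above -- the missing piece is
keeping the generator alive between requests on the server side, e.g.
in a table keyed by session id; a minimal sketch, using the
calculate_answers() generator from the quote:)

from itertools import islice

pending = {}   # session_id -> generator, kept alive between requests

def more_answers(session_id, n=3):
    gen = pending.get(session_id)
    if gen is None:
        gen = calculate_answers()      # generator from the quoted example
        pending[session_id] = gen
    return list(islice(gen, n))        # next n answers; state is preserved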



Re: freeze function calls

2010-08-10 Thread Santiago Caracol
 Run the above with

 $ python wsgi_demo.py
 Serving on port 8000...


Thanks a lot for this code. The problem with it is that the whole
application IS a generator function. That means that if I run the code
at, say, foo.org, then any user who visits the site will advance the
answer number of the server running at foo.org. What I am trying to do
is to process specific queries for different users. Each user is
supposed to get his very own answers to his very own questions. And if
a user doesn't hit the "More answers" button during a certain period
of time, the generator or function call responsible for answering his
question is supposed to be killed or thrown away or forgotten. If the
user asks a new question (while the answers to the old question are
still being displayed), then the generator or function call is also
supposed to be forgotten and a new generator or function call -- one
that matches the user's new question -- is supposed to be initiated.
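
Concretely, something like this per-user table is what I am after (a
sketch; the timeout value and the calculate_answers(question) factory
are invented):

import time
from itertools import islice

TIMEOUT = 600       # invented: forget an idle generator after ten minutes
sessions = {}       # user_id -> (question, generator, last_used)

def answers_for(user_id, question, n=3):
    entry = sessions.get(user_id)
    now = time.time()
    if (entry is None                       # first question from this user
            or entry[0] != question         # new question replaces the old one
            or now - entry[2] > TIMEOUT):   # idle too long: start over
        gen = calculate_answers(question)   # invented per-question factory
    else:
        gen = entry[1]
    sessions[user_id] = (question, gen, now)
    return list(islice(gen, n))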

Santiago




run subprocesses in parallel

2010-08-02 Thread Santiago Caracol
Hello,

I want to run several subprocesses. Like so:

from subprocess import Popen

p1 = Popen(mycmd1 + " myarg", shell=True)
p2 = Popen(mycmd2 + " myarg", shell=True)
...
pn = Popen(mycmdn + " myarg", shell=True)

What would be the most elegant and secure way to run all n
subprocesses in parallel?
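
In other words, something like the sketch below is the best I have
come up with (command names invented; using argv lists with the
default shell=False instead of shell=True sidesteps quoting problems):

from subprocess import Popen

myarg = "input.txt"                      # invented argument
commands = ["cmd1", "cmd2", "cmdn"]      # invented command names

# start them all first, then wait: the processes run in parallel
procs = [Popen([cmd, myarg]) for cmd in commands]
for p in procs:
    p.wait()                             # collect the exit statuses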

Santiago