> > Don't do that. ;-) I suggest using exec instead. However, I would be
> > surprised if import worked faster than, say, JSON (more precisely, I
> > doubt that it's enough faster to warrant this kludge).
>
> I'm with Aahz. Don't do that.
>
> I don't know what you're doing, but I suspect an even better solution
> would be to have your program run a "reconfigure" thread which listens
> on a UDP socket and reads a JSON object from it. Or, at the very least,
> which listens for a signal which kicks it to re-read some JSON from a
> disk file. This will be more responsive when the data changes quickly
> and more efficient when it changes slowly. As opposed to just polling
> for changes every 10 seconds.
Somehow I guessed that I would be told not to do it, but that's my fault: I should have been more explicit. The data is not just data. It is a large set of Django objects I call "ContentClassifiers". Each one has methods that build very large regular expressions from user input, and a method "classify" that applies those regular expressions to a message. To classify a message I simply apply the classify method of every ContentClassifier, and there are very many ContentClassifiers.

I compile the ContentClassifiers, which are Django objects, to pure Python objects so that I don't have to fetch them from the database each time I need them, and so that I can compile the large regular expressions offline. Previously, in Django, I generated and compiled each ContentClassifier's regular expressions every time I needed to classify a message; I didn't find a good way to compute and compile the regular expressions once and store them.

I have already tested the automatically generated module: classifying a message used to take about 10 seconds, and now it takes milliseconds. My only remaining problem is reloading the module: at the moment I simply restart the web server manually from time to time in order to load the freshly compiled module.
--
http://mail.python.org/mailman/listinfo/python-list
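A minimal sketch of picking up the freshly compiled module with importlib instead of restarting the web server. The module name classifiers_compiled, the CLASSIFIERS list, and the write_module stand-in for the offline compile step are all hypothetical names of mine, not the poster's actual code:

```python
import importlib
import pathlib
import sys

MODULE_NAME = "classifiers_compiled"  # hypothetical name of the generated module
MODULE_PATH = pathlib.Path(MODULE_NAME + ".py")
sys.path.insert(0, str(MODULE_PATH.parent.resolve()))

def write_module(source):
    """Stand-in for the offline compile step that writes the module."""
    MODULE_PATH.write_text(source)

def load_classifiers():
    """Import the generated module, reloading it if it is already
    imported, so the web server never needs a manual restart."""
    if MODULE_NAME in sys.modules:
        return importlib.reload(sys.modules[MODULE_NAME])
    return importlib.import_module(MODULE_NAME)

# First "compile": one classifier with a precompiled regular expression.
write_module("import re\nCLASSIFIERS = [re.compile('spam')]\n")
mod = load_classifiers()

# Recompile with a new pattern and pick it up without a restart.
write_module("import re\nCLASSIFIERS = [re.compile('ham|eggs')]\n")
mod = load_classifiers()
```

One caveat with importlib.reload: objects that hold a reference to the old module keep seeing the old regular expressions, so code that classifies messages should look the classifiers up through the module object each time rather than caching them.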