Shane Hathaway wrote:
Jim Fulton wrote:

I also think there is a real opportunity in allowing reload to fail.
That is, it should be possible for reload to visibly fail so the user
knows that they have to restart.  Then we only reload when we *know*
we can make changes safely and fail otherwise. For example, in the common
case of updating a class, we can update the class in place.  If there
aren't any other changes, then we know the reload is safe.
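To make that concrete, here is a minimal sketch of updating a class in place: rather than rebinding the module attribute to the new class object (which would strand existing instances on the old class), copy the new class's namespace onto the old class object. The helper name and the two toy classes are mine, not anything Zope actually ships:

```python
def update_class_in_place(old_cls, new_cls):
    """Copy new_cls's attributes onto old_cls so instances created
    before the reload pick up the new method definitions."""
    for name, value in vars(new_cls).items():
        if name in ('__dict__', '__weakref__'):
            continue  # these descriptor slots can't be reassigned
        setattr(old_cls, name, value)

class Greeter:
    def greet(self):
        return "hello"

class NewGreeter:  # stands in for the freshly compiled version
    def greet(self):
        return "bonjour"

g = Greeter()                          # instance created before the "reload"
update_class_in_place(Greeter, NewGreeter)
# g now sees the new greet() without being recreated
```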

That's insightful. Zope 2's refresh really should refuse to reload sometimes. Right now it just trusts whoever wrote the "refresh.txt" file.

Here's an idea: When we do a new-improved reload, we:

1. Reevaluate the code in the pyc, getting the original dictionary.

2. Recompile and evaluate the code without writing a new pyc.

Reloadable modules better not cause side effects at import time!

That's a very good point.  It's another example of the way that
Python modules were not designed for reload.  I'm not at all
sure what to do about this.  We really should move this discussion
to python-dev, but I doubt I have the bandwidth for that.
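Steps 1 and 2 amount to evaluating two versions of the module into fresh dictionaries. A rough sketch follows; it compiles source strings directly, whereas a real implementation would unmarshal the code object out of the cached .pyc for step 1:

```python
# Two versions of a module, as source.  In practice the "old" code
# object would come from the existing .pyc.
old_source = "x = 1\ndef f():\n    return x\n"
new_source = "x = 1\ndef f():\n    return x + 1\n"

old_ns = {}
exec(compile(old_source, "<module>", "exec"), old_ns)

new_ns = {}
exec(compile(new_source, "<module>", "exec"), new_ns)  # no new .pyc written
```

Note that both `exec` calls run the module body, which is exactly why import-time side effects are a problem here: they would happen twice.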

3. Compare the old and new dictionaries to find changes.  If we
   don't know how to compare 2 items, we assume a change.  Note
   removing a variable is considered an unsafe change.  Adding a
   variable is assumed to be a safe change as long as a variable of
   the same name hasn't been added to the module dynamically.
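A crude version of that comparison, with the caveats above baked in (removed names always conflict; added names conflict only if the running module already acquired a binding of the same name dynamically):

```python
def diff_module_dicts(old, new, current):
    """Compare the re-evaluated old dict with the new one.
    Returns (added, removed, changed, conflicts)."""
    removed = [k for k in old if k not in new]
    added = [k for k in new if k not in old]
    # equality may be meaningless for some objects; a real
    # implementation would treat "don't know how to compare" as changed
    changed = [k for k in old if k in new and old[k] != new[k]]
    conflicts = removed + [k for k in added if k in current]
    return added, removed, changed, conflicts

old = {'VERSION': 1, 'content_types': {}}
new = {'VERSION': 2, 'content_types': {}, 'DEBUG': False}
current = {'VERSION': 1, 'content_types': {'html': '.html'}}

added, removed, changed, conflicts = diff_module_dicts(old, new, current)
```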

4. We consider whether each change is safe.  If any change is
   unsafe, we raise an error, aborting the reload.  A change is
   safe if the original and new variables are of the same type,
   the values are mutable, and we know how to update the old
   value based on the new value.  In addition, for a change to be
   safe, the original value and the value currently in the module
   must be the same object with an unchanged value.  That is, it
   can't have been changed dynamically.
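For a single changed name, the test might look like this. `UPDATABLE_TYPES` is my placeholder for whatever set of in-place-mergeable types a real implementation would support:

```python
UPDATABLE_TYPES = (dict, list, set)  # hypothetical "we know how to merge these"

def change_is_safe(name, old, new, current):
    old_v, new_v = old[name], new[name]
    if type(old_v) is not type(new_v):
        return False     # type changed: unsafe
    if not isinstance(old_v, UPDATABLE_TYPES):
        return False     # immutable or unknown type: can't update in place
    if current.get(name) != old_v:
        return False     # live value diverged from the original: conflict
    return True
```

A dict-valued registry whose initializer didn't change passes; a name rebound to a different type, or a value the running program mutated, fails.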

It sounds like populating any sort of registry in a module would prevent the module from being reloaded. Take this for example:

# module ""
content_types = {}
def add_content_type(name, extension):
    content_types[name] = extension

As soon as anyone calls add_content_type(), including the module itself, the state of the content_types dict changes from the original value. That's fine by me if that's what you intended. Reloading modules containing registries never seemed like a good idea to me anyway.

But this wouldn't prevent a reload.  BTW, we might as well start calling
changes that prevent reload what they are, "conflicts".   If the new
version of the module initialized content_types to an empty dictionary
as before, then this would *not* be a change and thus would not trigger
a conflict.
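In other words, the conflict check diffs the two compiled versions of the module; the dynamically populated live registry never enters into it:

```python
old = {'content_types': {}}                   # re-evaluated from the old .pyc
new = {'content_types': {}}                   # from recompiling the new source
live = {'content_types': {'html': '.html'}}   # registry populated at runtime

# Only old vs. new is compared; both initialize to {}.
changed = [k for k in old if k in new and old[k] != new[k]]
# changed is empty: no change, no conflict, and the live registry
# is simply left alone by the reload
```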

5. We apply the changes and write a new pyc.

The server might not have write access to its code directory. Maybe we can't reload if the server can't write the .pyc, since writing the .pyc is required to perform further reloads.
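A pre-flight check along those lines might look like this (the function name is hypothetical; the cache-path lookup uses the standard `importlib.util.cache_from_source`):

```python
import importlib.util
import os

def can_rewrite_pyc(module_file):
    """Return True if we could overwrite the module's cached .pyc,
    a precondition for allowing further reloads."""
    pyc = importlib.util.cache_from_source(module_file)
    if os.path.exists(pyc):
        return os.access(pyc, os.W_OK)
    cache_dir = os.path.dirname(pyc)
    if os.path.isdir(cache_dir):
        return os.access(cache_dir, os.W_OK)
    # __pycache__ doesn't exist yet; we'd have to create it
    return os.access(os.path.dirname(module_file), os.W_OK)
```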


This boils down to merging differences at the Python level.
We fail if we don't know how to apply the diff. At that point,
the user knows they need to restart to get the change.

Hm. This feels kind of workable.  It might even make a good PEP
for a "safe reload".

It's certainly an improvement. It's still possible for other modules to retain state based on a reloadable module's old state. Should we worry about that? Is it something that programmers understand intuitively enough that when they run into it, they won't be baffled?

You should give specific cases that you are worried about.  I think
that this idea addresses a lot of potential problems.

Also, any time someone caches something, they are taking a risk.
Of course, if we do something like this, we'd be changing the rules
in ways that might undermine someone's reasoning about what was
safe to cache.


Jim Fulton           mailto:[EMAIL PROTECTED]       Python Powered!
CTO                  (540) 361-1714  
Zope Corporation
Zope3-dev mailing list