On Tuesday, 19 January 2010 at 20:45 +0100, Piotr Ożarowski wrote:
[Josselin Mouette, 2010-01-19]
On Friday, 15 January 2010 at 11:58 +0100, Piotr Ożarowski wrote:
- problems starting/stopping daemons (very long downtimes due to
byte-compilation via triggers) in pysupport,
This is only a problem for daemons using namespace packages, which you
can count on the fingers of one hand.
It takes too long even when I run `dpkg -i foo.deb`, IMHO.
What takes too long? The time between stopping and restarting the
daemon? In this case, the package needs to be adapted so that the daemon
is restarted - or even better, told to restart itself through a signal -
in the postinst without being stopped in the prerm.
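To be concrete, a minimal sketch of that pattern (hypothetical daemon
package "foo-daemon"; the real init integration will vary per package):

    # prerm: deliberately do NOT stop the daemon here
    # postinst, on configure: restart (or signal) the daemon only once
    # the new files and byte-compiled modules are in place
    case "$1" in
        configure)
            if [ -x /etc/init.d/foo-daemon ]; then
                invoke-rc.d foo-daemon restart || true
            fi
            # or, if the daemon supports it, ask it to re-exec itself:
            # kill -HUP "$(cat /var/run/foo-daemon.pid)" || true
            ;;
    esac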
This is another big regression for packages that don’t use distutils /
setuptools / whatever crap upstreams invent. Currently, for
architecture-independent packages, you can use whatever build system is
shipped, close your eyes, run dh_py* and be done with it. With your
proposal, *all* architecture-independent packages would require
adaptation to loop over supported Python versions.
Since binary packages will have a hardcoded list of supported Python
versions, why not additionally test these versions at build time instead
of at install time? If it really is a problem to install for all
supported Python versions, I can add something like -V 2.5+ in order to
create symlinks for missing .py files and byte-compile for installed
versions only, but I'd strongly encourage developers to add python-all to
Build-Depends-Indep anyway (so that all these versions are tested at
build time and the package FTBFS if needed).
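For illustration, a hypothetical debian/control fragment (package names
invented) with the build dependency suggested above:

    Source: foo
    Build-Depends: debhelper (>= 7)
    Build-Depends-Indep: python-all

    Package: python-foo
    Architecture: all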
I don’t think you understand the problem here. For example, if a package
uses the autotools instead of distutils, it will require lots of changes
in debian/rules. Multiply them by all packages not using distutils and
have fun.
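To make that concrete, a rough sketch of the kind of loop an
autotools-based package would need in its debian/rules build/install
target (names invented; assumes `pyversions -vr` prints the requested
versions):

    set -e; \
    for v in $$(pyversions -vr); do \
        ./configure --prefix=/usr PYTHON=/usr/bin/python$$v; \
        $(MAKE); \
        $(MAKE) install DESTDIR=$(CURDIR)/debian/tmp; \
        $(MAKE) distclean; \
    done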
This is a huge task, and a complete waste of time. Don’t forget that
most people who were motivated to work on Python packages are tired of
the endless changes that had to be made in hundreds of packages because
of thoughtless changes in core Python packages.
I'll do my best to make it work even without changes in packages that
currently use pycentral or pysupport.
How would it work? Currently the proposal doesn’t describe that.
- create simple maintainer scripts with pycompile and pyclean commands
and (if needed) an rtupdate script (for private directories that use the
default Python version and will most probably work with the next default
version). Private modules that cannot be used with the default Python
version will get an additional pycompile command with all needed
arguments, like the minimum required Python version or a hardcoded
version, in both the maintainer and rtupdate scripts,
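For illustration, a minimal sketch of what such generated snippets might
look like (hypothetical package and directory names; the actual generated
code may well differ):

    # postinst, on configure:
    if [ "$1" = configure ]; then
        # public modules shipped by python-foo
        pycompile -p python-foo
        # private directory using the default Python version
        pycompile -p foo /usr/share/foo
    fi

    # prerm, on remove/upgrade:
    pyclean -p python-foo
    pyclean -p foo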
This part looks worrisome to me. Putting the complexity of the
installation/cleanup actions in auto-generated maintainer scripts (and
now auto-generated rtupdate scripts) sounds like reproducing one of the
core design errors of python-central. The more complexity you put in
auto-generated scripts, the fewer chances you leave for the system to
clean itself up by just upgrading the helper scripts in case of a bug.
I see it the other way around: short scripts will allow me to fix things
in the /usr/bin/py* commands instead of requiring all packages to be
rebuilt (a problem we currently have with Lenny's pycentral-based
packages).
How do you fix autogenerated .rtupdate scripts without rebuilding all
packages?
Please note that even if something fails in a maintainer script, the
package will still be usable (the .py files are already in their final
location).
That’s until you discover a new way for the scripts to fail - although I
agree that in theory, there should be fewer problems.
OK, `pycompile -p python-foo` and `pycompile -p mypackage
/usr/share/privatedir1` it is.
Much better indeed.
Or even better, always pycompile mypackage, and ship the list of
directories to byte-compile in a file.
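A hypothetical illustration of that alternative (file name, location and
format are invented here, not an existing interface):

    # /usr/share/python-foo/pycompile.list - directories to byte-compile
    /usr/share/foo
    /usr/lib/foo/plugins

    # the generated postinst would then only need:
    pycompile -p python-foo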
I want to avoid shipping additional files or hardcoding lists of files
in maintainer scripts. I see no problem in merging the package contents
with pycompile command-line options.
The problem will come if you ever change the exclusion lists of what
does and does not need to be byte-compiled.
For architecture: all packages, I’m still not convinced. Having to
rebuild these Python packages means roughly 3 times as many packages to
rebuild during transitions as there are now. This is going to make
transitions a *lot* more complicated, which means much more work for
everyone, including the release team, whenever the list of supported
Python versions changes. We also lose the current potential to have a large
subset of the archive immediately available for a new Python version
when it becomes ready.
That's the main disadvantage, I agree. It's not that we change the list
of supported Python versions that often, though...