(Hugely trimmed, because I couldn't find an easy way to pick out the important bits of context, sorry!)
On 29 October 2015 at 23:23, Nathaniel Smith <[email protected]> wrote:
> None of this affects correctness -- it's purely an optimization. But
> maybe it's an important optimization in certain specific cases.

One concern I have is that it's *not* just an optimisation in some cases. If a build being used to get metadata fails, what will happen then?

If you fail the whole install process, then, using your scikit-learn case, suppose there are wheels available for older versions of scipy but none for the latest version (a very common scenario, in my experience, for a period after a new release appears). The dependency resolution tries to build the latest version to get its metadata, fails, and everything stops. But the older version is actually fine, because its wheel can be used.

Alternatively, you could treat build failures as "assume not suitable", but then someone whose compile fails silently gets an older version instead of the error - which, in less complex cases than the above, they might want to see so they can fix it, e.g. by setting an environment variable they'd forgotten, or by downloading a wheel from a non-PyPI repository like Christoph Gohlke's.

So while I follow your explanation for the cases where builds always succeed but might take forever, I'm not so sure your conclusions are right for a mix of wheels for some versions, failing builds, and other partially-working scenarios. This case concerns me far more in practice than complex dependency graphs do.

Paul
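
To make the trade-off concrete, here is a toy sketch of the two policies: "fail the whole install" versus "assume not suitable". It is nothing like pip's real resolver; the fake index, the build_for_metadata() helper, the BuildError exception and the version numbers are all invented purely for illustration.

    class BuildError(Exception):
        """Building an sdist to obtain its metadata failed."""


    # Hypothetical index, newest release first: the latest version has only
    # an sdist whose build fails, while the older release has a usable wheel.
    INDEX = {
        "scipy": [
            {"version": "1.1.0", "has_wheel": False, "build_ok": False},
            {"version": "1.0.0", "has_wheel": True, "build_ok": True},
        ],
    }


    def build_for_metadata(candidate):
        """Pretend to build an sdist (or read a wheel) to get its metadata."""
        if candidate["has_wheel"] or candidate["build_ok"]:
            return {"requires": ["numpy"]}
        raise BuildError("compile failed for scipy %s" % candidate["version"])


    def resolve_fail_hard(name):
        """Option 1: a metadata-build failure aborts the whole install,
        even though an older version with a wheel would have worked."""
        for candidate in INDEX[name]:                 # newest first
            metadata = build_for_metadata(candidate)  # BuildError propagates
            return candidate["version"], metadata


    def resolve_assume_not_suitable(name):
        """Option 2: treat a build failure as "not suitable" and fall back.
        The user silently gets an older version instead of the error they
        might have wanted to see and fix."""
        for candidate in INDEX[name]:
            try:
                metadata = build_for_metadata(candidate)
            except BuildError:
                continue                              # quietly skip to an older release
            return candidate["version"], metadata
        raise BuildError("no installable candidate for %s" % name)


    if __name__ == "__main__":
        print(resolve_assume_not_suitable("scipy"))   # ('1.0.0', ...) - silently older
        try:
            resolve_fail_hard("scipy")
        except BuildError as exc:
            print("install aborted:", exc)

Neither outcome is obviously right: the first aborts an install that could have succeeded with the older wheel, the second hides the build error the user may have wanted to act on.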
