On Wed, Apr 14, 2010 at 04:20:18PM +0200, Michał Górny wrote:
> On Tue, 13 Apr 2010 23:10:16 -0700
> Brian Harring <[email protected]> wrote:
>
> > Running multiple emerges in parallel is already a bad idea. The
> > solution for that case is for the new/second emerge to feed the
> > request into the original emerge (or a daemon).
>
> Although such a solution will be useful in many cases indeed, there
> are still many advantages to having a few separate emerge calls
> running in parallel.
The examples you give are fine and dandy, but if done via parallel
emerge you can run into situations where PM 1 has just added pkg A as
a dep for pkg B, while PM 2 is removing pkg A due to a blocker for
pkg C. Running multiple emerges in parallel is unsafe because the two
instances have potentially very different plans for what is being
done, and there is no way to ensure that the pkg D that PM 2 is
building isn't affected by PM 1 building something (upgrading a
dependency of pkg D, for example). Yes, you can occasionally get away
with it; that doesn't mean it's safe, however.

> The next thing is aborting merges. When running multiple emerges,
> aborting one of them is as simple as pressing ^C. With a daemon, we
> would have to implement the ability to abort/remove packages at
> runtime -- and that would be another example of dependency tree
> mangling.

Aborting merges is a very, very bad idea. Consider a pkg that has
dlopen'd plugins and just went through an ABI change for that
interface. If you interrupt that merge, it's entirely possible you'll
get just the lib merged (meaning a segfault on loading the plugins),
or vice versa (the old lib is still in place, but the new plugins are
there). Don't abort merges -- a merge really should be an atomic,
uninterruptible op.

~harring
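The hazard of two package managers executing conflicting plans is usually avoided by serializing the merge phase behind a single global lock, so only one instance touches the live filesystem at a time. A minimal sketch using an advisory `flock(2)` lock; the lock path and the `MergeLock` class are hypothetical, not portage's actual locking API:

```python
import fcntl

class MergeLock:
    """Hypothetical exclusive lock serializing merge phases.

    Two emerge-like processes sharing this lock file cannot merge
    to the live filesystem concurrently; the second blocks until
    the first releases the lock.
    """

    def __init__(self, path="/run/merge.lock"):  # assumed path
        self.path = path

    def __enter__(self):
        self.fd = open(self.path, "w")
        # LOCK_EX: exclusive advisory lock; blocks until available.
        fcntl.flock(self.fd, fcntl.LOCK_EX)
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        self.fd.close()
```

Note this only serializes the filesystem-mutating phase; it does nothing about the deeper problem harring raises, that the two instances computed their dependency plans against different snapshots of the installed set.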
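The "merge as an atomic op" idea can be sketched as stage-then-swap: install the new version into a fresh directory, then flip a symlink to it with a single `rename(2)`, which is atomic on POSIX. An interrupted merge then leaves either the old tree or the new tree fully in place, never the mixed lib/plugin state described above. This is only an illustrative sketch (`atomic_switch` is a hypothetical helper; portage does not actually install packages this way):

```python
import os

def atomic_switch(target_link, new_dir):
    """Atomically repoint symlink `target_link` at directory `new_dir`.

    The new symlink is created under a temporary name and then moved
    over the old one with os.replace(), i.e. rename(2), so readers see
    either the old target or the new one -- never a partial state.
    """
    tmp = target_link + ".new"
    if os.path.lexists(tmp):
        os.remove(tmp)          # clear leftover from a crashed attempt
    os.symlink(new_dir, tmp)    # stage the new link
    os.replace(tmp, target_link)  # atomic swap on POSIX filesystems
```

A crash before the final `os.replace()` leaves the old version untouched; a crash after it leaves the new version fully active.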
