On Wed, Mar 17, 2010 at 11:10:59AM -0500, Eric Haszlakiewicz wrote:
 > On Tue, Mar 16, 2010 at 08:01:31PM +0000, David Holland wrote:
 > > But recompiling things isn't a complex unautomated procedure, it's a
 > > complex automated procedure, and not really that much different from
 > > other complex automated procedures like binary updates.
 > 
 >  The difference here is that a binary update changes one particular
 > machine, and updating some other machine obviously won't have the
 > intended effect, whereas recompiling does the exact same thing
 > regardless of where you do it, so having multiple people do it seems
 > like a waste of time.

That's a red herring; applying a binary patch does the same thing
everywhere, and recompiling updates one particular machine in exactly
the same sense too. The difference is in what material is distributed
and how and where it's processed. This is not something end users are
going to care about much - at most they'll care about how long it
takes.

Which is a valid concern, of course, especially in extreme examples
like building firefox on a sun3, but it's *not* a usability issue in
the same sense that e.g. incomprehensible error messages are.

Admittedly, neither CVS nor our build system is quite robust enough to
make this really work; but then, in practice, tools like apt-get and
yum aren't quite robust enough either.
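
To make the comparison concrete, the two paths look roughly like this
(illustrative only; exact commands, flags, and package names depend on
the setup):

    # source update: fetch the sources and rebuild locally
    cvs update -dP && ./build.sh -U -u release

    # binary update: fetch and install prebuilt packages
    apt-get update && apt-get upgrade

Either way it's one automated command sequence from the user's point
of view; the work just happens in a different place.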

Anyhow, it seems to me that a blanket statement like "nobody should
ever have to recompile anything" requires some justification; people
have been taking it as an axiom lately, though, and that concerns me a
bit.

 > > Nor is it necessarily slow; building a kernel doesn't take any longer
 > > than booting Vista...
 > 
 > Maybe on your machine.  On mine it's still quite a bit slower than just
 > editing a config file.

Well sure, but that just means we're way ahead of the competition,
since in Windows editing that config file generally requires a
reboot. :-)

-- 
David A. Holland
dholl...@netbsd.org
