kashani wrote:
> I am curious about how other people do updates once they've decided
> there is no issue and how hardware/load plays a part in the process.
Generally I am more concerned with a stable, predictable build process than with compile times. What wastes more time than anything else is getting 10 hours into building a server and hitting a brick wall that takes four times as long to work around.

An example: yesterday I was building a hardened installation from stage1. I generated a stage3, got all of my binary packages built, and installed in a VM to test. Apache would not start; libaprutil.so was looking for gdbm_errno, which appeared to be missing. I did not have gdbm in my USE flags, so I was scratching my head. I ran portageq envvar USE in my stage1 and stage3 (see the diff sketch below), and for some reason gdbm was getting included in the USE flags in the stage3. Exact same make.conf, exact same USE flags, exact same profile, different results. I can't figure out why it is happening; my best guess is that something from the host server is poisoning the environment in my stage3 chroot (second sketch below). I changed my USE flags to USE="-* ....." and started back from scratch. Another wasted day.

The moral of my story is that compile times are a pointless benchmark if the process used to build the packages is corrupt or inconsistent in some way. Get the process nailed down and you can completely automate your compiles, starting them at 5:00 pm as you head out the door ;) (crontab sketch at the end).

To answer your question ;) At work I have one server that does all of the compiling, a dog-slow dual 600MHz with 2GB of RAM. At home, I use my MythTV PVR ;) It is a P4 3GHz with 2GB of RAM, and I often watch a recording while recording another one, while building in the background (unless I'm compiling Qt or KDE packages with kdeenablefinal, which produces burps in the video). It tends to take about 1/5th the time of the dual 600MHz ;)
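For anyone who wants to chase the same thing, a quick way to compare the two environments is something along these lines (the chroot paths are made up for the example):

    # dump and sort the effective USE flags from each chroot, then diff them
    chroot /mnt/stage1 portageq envvar USE | tr ' ' '\n' | sort > /tmp/use.stage1
    chroot /mnt/stage3 portageq envvar USE | tr ' ' '\n' | sort > /tmp/use.stage3
    diff /tmp/use.stage1 /tmp/use.stage3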
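If it really is the host environment leaking in, entering the chroot with a scrubbed environment should make the two agree. A minimal sketch, assuming the stage3 is unpacked at /mnt/stage3 (the TERM/HOME values are just sane defaults):

    # env -i starts from an empty environment so nothing leaks in from the host
    env -i TERM="$TERM" HOME=/root chroot /mnt/stage3 /bin/bash -l
    # then inside, rebuild the environment from the chroot's own files:
    env-update && source /etc/profile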
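The 5:00 pm trick is just cron. A sketch of the crontab entry; the emerge options and log path are only one way to do it:

    # kick off a world update at 5:00 pm on weekdays; cron's PATH is minimal,
    # so use the full path to emerge
    0 17 * * 1-5 /usr/bin/emerge --update --deep world >> /var/log/emerge-cron.log 2>&1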
--
[email protected] mailing list