Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-13 Thread Neil Bothwick
On Thu, 12 Dec 2013 18:06:40 -0800, Grant wrote:

 I may end up using portage instead of rsync but I think I'd like to
 try rsync first.  Am I setting myself up for failure?

Tried and tested system maintenance tool vs. home brewed modification of
critical files... I'd say a definite possibility of a yes.

Just let portage do what it was designed to do. Set up a build host,
which doesn't have to be your laptop, it can be a chroot on a desktop or
server. Set FEATURES=buildpkg on this, add usepkg to emerge default opts
on everything else and have them all use the same PKGDIR over NFS.
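A minimal sketch of that arrangement (file paths and the NFS export are illustrative assumptions, not details from this thread):

```shell
# /etc/portage/make.conf on the build host (chroot or server):
#   FEATURES="buildpkg"              # emit a binary package for every emerge
#   PKGDIR="/var/portage/packages"   # directory exported over NFS

# /etc/portage/make.conf on every other machine:
#   EMERGE_DEFAULT_OPTS="--usepkg"   # prefer prebuilt packages
#   PKGDIR="/mnt/packages"           # NFS mount of the build host's PKGDIR
```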


-- 
Neil Bothwick

C:\BELFRY is where I keep my .BAT files ^^^oo^^^




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-13 Thread Neil Bothwick
On Thu, 12 Dec 2013 17:49:01 -0800, Grant wrote:

 What if the push is done while no one is logged in to the system(s)
 being updated?  I could also exclude /dev, /sys, /proc, and /run and
 reboot after the update.  If that's not good enough, what if I boot
 the systems being updated into read-only mode before updating them?
 I'm hoping to keep the process as simple as possible.

The problem with such home brewed systems is that they may start simple
but you keep adding more conditions, like these, making them anything but
simple. Now you're considering rebooting just to make an update, even
Windows doesn't need that (well, not all the time).

Alan's point about network connectivity is also important. What happens
if you unplug your laptop and leave the office while someone is mid-sync?
With emerge -b, the package is downloaded before anything is installed,
so losing contact with the package host would be less critical. It could
still cause problems if it happened while updating some inter-dependent
packages, which is why I would prefer to use a static package repository.


-- 
Neil Bothwick

If Yoda so strong in force is, why words in right order he cannot put?




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-12 Thread Grant
I'm about to embark on this (perilous?) journey and I'm wondering if
anyone would make a comment on any of the questions in the last
paragraph below.  This is basically my plan for setting up a bunch of
systems (laptops) in an office which are hardware-identical to my own
laptop and creating a framework to manage them all with a bare minimum
of time and effort.

Thanks,
Grant


 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 I've been working on this and I think I have a good and simple plan.

 My laptop roams around with me and is the master system.  The office
 router is the submaster system.  All of the other office systems are
 minion systems.  All of the systems are 100% hardware-identical
 laptops.  All of the minions are 100% software-identical.

 I install every package that any system needs on the master and create
 an SSH keypair.  The only config files that change from their state on
 the master are: /etc/conf.d/hostname, /etc/conf.d/net,
 /etc/ssh/sshd_config, /etc/shorewall/*.  I write comments in those
 files which serve as flags for scripted changes.
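The flag-comment idea can be sketched like this (the flag syntax, file contents, and GNU sed usage are assumptions for illustration, not the poster's actual script):

```shell
# Sketch: a marker comment in a copy of /etc/conf.d/hostname tells the
# push script which line to rewrite per host.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# PUSH-SCRIPT: set per-host hostname on the line below
hostname="master"
EOF
new_host="minion01"
# GNU sed: find the flag comment, step to the next line (n), rewrite it.
sed -i "/# PUSH-SCRIPT:/{n;s/hostname=.*/hostname=\"$new_host\"/;}" "$conf"
cat "$conf"
```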

 I write a script that is run from the master to the submaster, or from
 the submaster to a minion.  If it's the former, rsync / is run with
 exceptions (/usr/portage, /usr/local/portage, /var/log, /tmp, /home,
 /root but /root/.ssh/id_rsa_script* is included), my personal user is
 removed, a series of workstation users are created with useradd -m,
 services are added or removed from /etc/runlevels/default, and config
 files are changed according to comment flags.  If it's the latter,
 rsync / is run without exceptions, services are added or removed from
 /etc/runlevels/default, and config files are changed according to
 comment flags.

 All user info on the submaster and minions would be effectively reset
 whenever the script is run and that's fine.  Root logins would have to
 be allowed on the submaster and minions but only with the SSH key.
 There are probably more paths to exclude when rsyncing master to
 submaster.

 That's it.  No matter how numerous the minions become, this should
 allow me to keep everything running by administrating only my own
 system, pushing that to the submaster, and having the submaster push
 to the minions.  I've been going over the nitty-gritty and everything
 looks good.

 What do you think?  Is there anything inherently wrong with rsyncing /
 onto a running system?  If there are little or no changes to make,
 about how much data would actually be transferred?  Is there a better
 tool for this than rsync?  I know Funtoo uses git for syncing with
 their portage tree.

 - Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-12 Thread Poison BL.
On Thu, Dec 12, 2013 at 6:54 PM, Grant emailgr...@gmail.com wrote:
 I'm about to embark on this (perilous?) journey and I'm wondering if
 anyone would make a comment on any of the questions in the last
 paragraph below.  This is basically my plan for setting up a bunch of
 systems (laptops) in an office which are hardware-identical to my own
 laptop and creating a framework to manage them all with a bare minimum
 of time and effort.

 Thanks,
 Grant


 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 I've been working on this and I think I have a good and simple plan.

 My laptop roams around with me and is the master system.  The office
 router is the submaster system.  All of the other office systems are
 minion systems.  All of the systems are 100% hardware-identical
 laptops.  All of the minions are 100% software-identical.

 I install every package that any system needs on the master and create
 an SSH keypair.  The only config files that change from their state on
 the master are: /etc/conf.d/hostname, /etc/conf.d/net,
 /etc/ssh/sshd_config, /etc/shorewall/*.  I write comments in those
 files which serve as flags for scripted changes.

 I write a script that is run from the master to the submaster, or from
 the submaster to a minion.  If it's the former, rsync / is run with
 exceptions (/usr/portage, /usr/local/portage, /var/log, /tmp, /home,
 /root but /root/.ssh/id_rsa_script* is included), my personal user is
 removed, a series of workstation users are created with useradd -m,
 services are added or removed from /etc/runlevels/default, and config
 files are changed according to comment flags.  If it's the latter,
 rsync / is run without exceptions, services are added or removed from
 /etc/runlevels/default, and config files are changed according to
 comment flags.

 All user info on the submaster and minions would be effectively reset
 whenever the script is run and that's fine.  Root logins would have to
 be allowed on the submaster and minions but only with the SSH key.
 There are probably more paths to exclude when rsyncing master to
 submaster.

 That's it.  No matter how numerous the minions become, this should
 allow me to keep everything running by administrating only my own
 system, pushing that to the submaster, and having the submaster push
 to the minions.  I've been going over the nitty-gritty and everything
 looks good.

 What do you think?  Is there anything inherently wrong with rsyncing /
 onto a running system?  If there are little or no changes to make,
 about how much data would actually be transferred?  Is there a better
 tool for this than rsync?  I know Funtoo uses git for syncing with
 their portage tree.

 - Grant


Only thing that comes immediately to mind in rsyncing an overwrite of
/ is that any process that's running that goes looking for libraries
or other data after the rsync pulls the rug out from beneath it might
behave erratically, crash, kick a puppy, write arbitrary data all over
your drive. Also, it's somewhat important to be careful about the
various not-really-there mounts, /dev, /sys, /proc... /run's probably
touchy too, and /var has a few pieces that might be in use mid-sync
and choke something along the way. My idea on that would be... build
an initramfs that:

1) boots to a script
  a) warns the user that it's hungry and that feeding it will be
dangerous to any non-backed-up data, with prompt
  b) warns the user again, with prompt ('cause watching an rsync roll
by that eats that document you just spent 3 weeks on isn't fun)
2) mounts / in a working directory
3) rsyncs the new data from the sub-master
4) kicks off a script to update a hardware keyed (mac address is good
for this) set of settings (hostname, etc)
5) reboots into the new system.

For extra credit... sync /home back to the sub-master to prevent
overfeeding the beast.
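The five steps above might look roughly like this inside the initramfs init script (the device name, the submaster rsync URL, and the apply-settings helper are all hypothetical):

```shell
#!/bin/sh
# 1) two scary prompts before doing anything destructive
echo "This will overwrite the installed system. Continue? [y/N]"; read a
[ "$a" = y ] || exec /bin/sh
echo "Last chance: unsynced local data will be lost. Continue? [y/N]"; read b
[ "$b" = y ] || exec /bin/sh

mount /dev/sda3 /mnt/root                           # 2) mount the real /
rsync -aHAX --delete rsync://submaster/image/ /mnt/root/  # 3) pull new image
mac=$(cat /sys/class/net/eth0/address)              # 4) MAC-keyed settings
/mnt/root/usr/local/sbin/apply-settings "$mac"      #    (hypothetical helper)
reboot -f                                           # 5) boot the new system
```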

-- 
Poison [BLX]
Joshua M. Murphy



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-12 Thread wraeth

On 13/12/13 11:16, Poison BL. wrote:
 On Thu, Dec 12, 2013 at 6:54 PM, Grant emailgr...@gmail.com wrote:
 I'm about to embark on this (perilous?) journey and I'm wondering if 
 anyone would make a comment on any of the questions in the last paragraph
 below.  This is basically my plan for setting up a bunch of systems
 (laptops) in an office which are hardware-identical to my own laptop and
 creating a framework to manage them all with a bare minimum of time and
 effort.
 
 Thanks, Grant
 
 
 I see what you desire now - essentially you want to clone
 your laptop (or big chunks of it) over to your other
 workstations.
 
 I've been working on this and I think I have a good and simple plan.
 
 My laptop roams around with me and is the master system.  The office 
 router is the submaster system.  All of the other office systems are 
 minion systems.  All of the systems are 100% hardware-identical 
 laptops.  All of the minions are 100% software-identical.
 
 I install every package that any system needs on the master and create 
 an SSH keypair.  The only config files that change from their state on 
 the master are: /etc/conf.d/hostname, /etc/conf.d/net, 
 /etc/ssh/sshd_config, /etc/shorewall/*.  I write comments in those 
 files which serve as flags for scripted changes.
 
 I write a script that is run from the master to the submaster, or from 
 the submaster to a minion.  If it's the former, rsync / is run with 
 exceptions (/usr/portage, /usr/local/portage, /var/log, /tmp, /home, 
 /root but /root/.ssh/id_rsa_script* is included), my personal user is 
 removed, a series of workstation users are created with useradd -m, 
 services are added or removed from /etc/runlevels/default, and config 
 files are changed according to comment flags.  If it's the latter, 
 rsync / is run without exceptions, services are added or removed from 
 /etc/runlevels/default, and config files are changed according to 
 comment flags.
 
 All user info on the submaster and minions would be effectively reset 
 whenever the script is run and that's fine.  Root logins would have to 
 be allowed on the submaster and minions but only with the SSH key. 
 There are probably more paths to exclude when rsyncing master to 
 submaster.
 
 That's it.  No matter how numerous the minions become, this should 
 allow me to keep everything running by administrating only my own 
 system, pushing that to the submaster, and having the submaster push to
 the minions.  I've been going over the nitty-gritty and everything 
 looks good.
 
 What do you think?  Is there anything inherently wrong with rsyncing / 
 onto a running system?  If there are little or no changes to make, 
 about how much data would actually be transferred?  Is there a better 
 tool for this than rsync?  I know Funtoo uses git for syncing with 
 their portage tree.
 
 - Grant
 
 
 Only thing that comes immediately to mind in rsyncing an overwrite of / is
 that any process that's running that goes looking for libraries or other
 data after the rsync pulls the rug out from beneath it might behave
 erratically, crash, kick a puppy, write arbitrary data all over your drive.
 Also, it's somewhat important to be careful about the various
 not-really-there mounts, /dev, /sys, /proc... /run's probably touchy too,
 and /var has a few pieces that might be in use mid-sync and choke something
 along the way. My idea on that would be... build an initramfs that:
 
 1) boots to a script
    a) warns the user that it's hungry and that feeding it will be
       dangerous to any non-backed-up data, with prompt
    b) warns the user again, with prompt ('cause watching an rsync roll
       by that eats that document you just spent 3 weeks on isn't fun)
 2) mounts / in a working directory
 3) rsyncs the new data from the sub-master
 4) kicks off a script to update a hardware keyed (mac address is good
    for this) set of settings (hostname, etc)
 5) reboots into the new system.
 
 For extra credit... sync /home back to the sub-master to prevent 
 overfeeding the beast.
 

I'm also somewhat skeptical of rsyncing binaries and libraries on a running
system - it seems needlessly dangerous, particularly for things that have
complex deps.

A mixed alternative to this would be:

1) Use rsync to distribute the system-wide configuration files for all
relevant packages (similar to what you're doing at the moment).  This could
include just the /etc directory (and/or other system-wide config
directories), leaving the user files untouched.

2) Instead of trying to rsync any binaries or libraries, use the master to
build a binary package (--buildpkg) of whatever software is to be installed,
with the package directory shared over NFS or similar.  Then, on the slaves,
set emerge default opts to --usepkg or --usepkgonly with a cron job, leaving
the actual updating of applications on the slave systems to portage.
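A sketch of that split, with the /etc push and the slave-side cron job (host names, the mount point, and the schedule are assumptions):

```shell
# Push system-wide config only, leaving binaries to portage (sketch):
#   rsync -a --delete /etc/ root@slave:/etc/

# /etc/crontab entry on each slave: nightly binary-only world update,
# pulling from the NFS-shared package directory.
#   30 2 * * * root emerge --update --deep --usepkgonly --quiet @world
```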

-- 
wraeth


Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-12 Thread Grant
 I'm about to embark on this (perilous?) journey and I'm wondering if
 anyone would make a comment on any of the questions in the last
 paragraph below.  This is basically my plan for setting up a bunch of
 systems (laptops) in an office which are hardware-identical to my own
 laptop and creating a framework to manage them all with a bare minimum
 of time and effort.

 Thanks,
 Grant


 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 I've been working on this and I think I have a good and simple plan.

 My laptop roams around with me and is the master system.  The office
 router is the submaster system.  All of the other office systems are
 minion systems.  All of the systems are 100% hardware-identical
 laptops.  All of the minions are 100% software-identical.

 I install every package that any system needs on the master and create
 an SSH keypair.  The only config files that change from their state on
 the master are: /etc/conf.d/hostname, /etc/conf.d/net,
 /etc/ssh/sshd_config, /etc/shorewall/*.  I write comments in those
 files which serve as flags for scripted changes.

 I write a script that is run from the master to the submaster, or from
 the submaster to a minion.  If it's the former, rsync / is run with
 exceptions (/usr/portage, /usr/local/portage, /var/log, /tmp, /home,
 /root but /root/.ssh/id_rsa_script* is included), my personal user is
 removed, a series of workstation users are created with useradd -m,
 services are added or removed from /etc/runlevels/default, and config
 files are changed according to comment flags.  If it's the latter,
 rsync / is run without exceptions, services are added or removed from
 /etc/runlevels/default, and config files are changed according to
 comment flags.

 All user info on the submaster and minions would be effectively reset
 whenever the script is run and that's fine.  Root logins would have to
 be allowed on the submaster and minions but only with the SSH key.
 There are probably more paths to exclude when rsyncing master to
 submaster.

 That's it.  No matter how numerous the minions become, this should
 allow me to keep everything running by administrating only my own
 system, pushing that to the submaster, and having the submaster push
 to the minions.  I've been going over the nitty-gritty and everything
 looks good.

 What do you think?  Is there anything inherently wrong with rsyncing /
 onto a running system?  If there are little or no changes to make,
 about how much data would actually be transferred?  Is there a better
 tool for this than rsync?  I know Funtoo uses git for syncing with
 their portage tree.

 - Grant


 Only thing that comes immediately to mind in rsyncing an overwrite of
 / is that any process that's running that goes looking for libraries
 or other data after the rsync pulls the rug out from beneath it might
 behave erratically, crash, kick a puppy, write arbitrary data all over
 your drive. Also, it's somewhat important to be careful about the
 various not-really-there mounts, /dev, /sys, /proc... /run's probably
 touchy too, and /var has a few pieces that might be in use mid-sync
 and choke something along the way. My idea on that would be... build
 an initramfs that:

What if the push is done while no one is logged in to the system(s)
being updated?  I could also exclude /dev, /sys, /proc, and /run and
reboot after the update.  If that's not good enough, what if I boot
the systems being updated into read-only mode before updating them?
I'm hoping to keep the process as simple as possible.

- Grant


 1) boots to a script
   a) warns the user that it's hungry and that feeding it will be
 dangerous to any non-backed-up data, with prompt
   b) warns the user again, with prompt ('cause watching an rsync roll
 by that eats that document you just spent 3 weeks on isn't fun)
 2) mounts / in a working directory
 3) rsyncs the new data from the sub-master
 4) kicks off a script to update a hardware keyed (mac address is good
 for this) set of settings (hostname, etc)
 5) reboots into the new system.

 For extra credit... sync /home back to the sub-master to prevent
 overfeeding the beast.

 --
 Poison [BLX]
 Joshua M. Murphy



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-12 Thread Grant
 I'm about to embark on this (perilous?) journey and I'm wondering if
 anyone would make a comment on any of the questions in the last paragraph
 below.  This is basically my plan for setting up a bunch of systems
 (laptops) in an office which are hardware-identical to my own laptop and
 creating a framework to manage them all with a bare minimum of time and
 effort.

 Thanks, Grant


 I see what you desire now - essentially you want to clone
 your laptop (or big chunks of it) over to your other
 workstations.

 I've been working on this and I think I have a good and simple plan.

 My laptop roams around with me and is the master system.  The office
 router is the submaster system.  All of the other office systems are
 minion systems.  All of the systems are 100% hardware-identical
 laptops.  All of the minions are 100% software-identical.

 I install every package that any system needs on the master and create
 an SSH keypair.  The only config files that change from their state on
 the master are: /etc/conf.d/hostname, /etc/conf.d/net,
 /etc/ssh/sshd_config, /etc/shorewall/*.  I write comments in those
 files which serve as flags for scripted changes.

 I write a script that is run from the master to the submaster, or from
 the submaster to a minion.  If it's the former, rsync / is run with
 exceptions (/usr/portage, /usr/local/portage, /var/log, /tmp, /home,
 /root but /root/.ssh/id_rsa_script* is included), my personal user is
 removed, a series of workstation users are created with useradd -m,
 services are added or removed from /etc/runlevels/default, and config
 files are changed according to comment flags.  If it's the latter,
 rsync / is run without exceptions, services are added or removed from
 /etc/runlevels/default, and config files are changed according to
 comment flags.

 All user info on the submaster and minions would be effectively reset
 whenever the script is run and that's fine.  Root logins would have to
 be allowed on the submaster and minions but only with the SSH key.
 There are probably more paths to exclude when rsyncing master to
 submaster.

 That's it.  No matter how numerous the minions become, this should
 allow me to keep everything running by administrating only my own
 system, pushing that to the submaster, and having the submaster push to
 the minions.  I've been going over the nitty-gritty and everything
 looks good.

 What do you think?  Is there anything inherently wrong with rsyncing /
 onto a running system?  If there are little or no changes to make,
 about how much data would actually be transferred?  Is there a better
 tool for this than rsync?  I know Funtoo uses git for syncing with
 their portage tree.

 - Grant

 I'm also somewhat skeptical of rsyncing binaries and libraries on a running
 system - it seems needlessly dangerous, particularly for things that have
 complex deps.

 A mixed alternative to this would be:

 use rsync to manage distributing the system-wide configuration files for all
 relevant packages (similar to what you're doing at the moment).  This could
 include just the /etc directory (and/or other system-wide config directories)
 leaving the user files untouched

 instead of trying to rsync any binaries or libraries, use the master to build
 a binary package (--buildpkg) of whatever software is to be installed, with
 the package directory shared over NFS or similar.  Then, on the slaves, set
 emerge default opts to --usepkg or --usepkgonly with a cron job, leaving
 the actual updating of applications on the slave systems to portage.

I may end up using portage instead of rsync but I think I'd like to
try rsync first.  Am I setting myself up for failure?

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-12 Thread Alan McKinnon
On 13/12/2013 01:54, Grant wrote:
 I'm about to embark on this (perilous?) journey and I'm wondering if
 anyone would make a comment on any of the questions in the last
 paragraph below.  This is basically my plan for setting up a bunch of
 systems (laptops) in an office which are hardware-identical to my own
 laptop and creating a framework to manage them all with a bare minimum
 of time and effort.

There's nothing inherently wrong with rsyncing onto a running system;
that's what portage (and every make install in the world :-) ) does
anyway. Maybe the scale of what you want to do is bigger.

This is Unix, and it knows how to deal with replaced files properly
(unlike our friends over in Redmond).

You will find app-admin/checkrestart very useful to run on the laptops
if you don't already have it. Essentially, it looks for all files that
are in use but have been deleted, then tells you which processes are
involved so you can restart them.
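The check checkrestart performs can be approximated by hand; this sketch scans /proc for processes still mapping deleted files (it is not checkrestart's actual implementation):

```shell
#!/bin/sh
# Find processes whose memory-mapped files have been deleted, i.e.
# replaced on disk (by an update) while still in use.
stale_pids() {
    for p in /proc/[0-9]*; do
        grep -q ' (deleted)' "$p/maps" 2>/dev/null && basename "$p"
    done
}
stale_pids
```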

The only other issue that comes to mind is connectivity: do beware of
network connections going away while you're in the middle of updates.
Proper, sensible error-handling code in your scripts should take care of
this.





 
 Thanks,
 Grant
 
 
 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 I've been working on this and I think I have a good and simple plan.

 My laptop roams around with me and is the master system.  The office
 router is the submaster system.  All of the other office systems are
 minion systems.  All of the systems are 100% hardware-identical
 laptops.  All of the minions are 100% software-identical.

 I install every package that any system needs on the master and create
 an SSH keypair.  The only config files that change from their state on
 the master are: /etc/conf.d/hostname, /etc/conf.d/net,
 /etc/ssh/sshd_config, /etc/shorewall/*.  I write comments in those
 files which serve as flags for scripted changes.

 I write a script that is run from the master to the submaster, or from
 the submaster to a minion.  If it's the former, rsync / is run with
 exceptions (/usr/portage, /usr/local/portage, /var/log, /tmp, /home,
 /root but /root/.ssh/id_rsa_script* is included), my personal user is
 removed, a series of workstation users are created with useradd -m,
 services are added or removed from /etc/runlevels/default, and config
 files are changed according to comment flags.  If it's the latter,
 rsync / is run without exceptions, services are added or removed from
 /etc/runlevels/default, and config files are changed according to
 comment flags.

 All user info on the submaster and minions would be effectively reset
 whenever the script is run and that's fine.  Root logins would have to
 be allowed on the submaster and minions but only with the SSH key.
 There are probably more paths to exclude when rsyncing master to
 submaster.

 That's it.  No matter how numerous the minions become, this should
 allow me to keep everything running by administrating only my own
 system, pushing that to the submaster, and having the submaster push
 to the minions.  I've been going over the nitty-gritty and everything
 looks good.

 What do you think?  Is there anything inherently wrong with rsyncing /
 onto a running system?  If there are little or no changes to make,
 about how much data would actually be transferred?  Is there a better
 tool for this than rsync?  I know Funtoo uses git for syncing with
 their portage tree.

 - Grant
 
 
 


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-12-12 Thread Alan McKinnon
On 13/12/2013 03:49, Grant wrote:
 I'm about to embark on this (perilous?) journey and I'm wondering if
 anyone would make a comment on any of the questions in the last
 paragraph below.  This is basically my plan for setting up a bunch of
 systems (laptops) in an office which are hardware-identical to my own
 laptop and creating a framework to manage them all with a bare minimum
 of time and effort.

 Thanks,
 Grant


 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 I've been working on this and I think I have a good and simple plan.

 My laptop roams around with me and is the master system.  The office
 router is the submaster system.  All of the other office systems are
 minion systems.  All of the systems are 100% hardware-identical
 laptops.  All of the minions are 100% software-identical.

 I install every package that any system needs on the master and create
 an SSH keypair.  The only config files that change from their state on
 the master are: /etc/conf.d/hostname, /etc/conf.d/net,
 /etc/ssh/sshd_config, /etc/shorewall/*.  I write comments in those
 files which serve as flags for scripted changes.

 I write a script that is run from the master to the submaster, or from
 the submaster to a minion.  If it's the former, rsync / is run with
 exceptions (/usr/portage, /usr/local/portage, /var/log, /tmp, /home,
 /root but /root/.ssh/id_rsa_script* is included), my personal user is
 removed, a series of workstation users are created with useradd -m,
 services are added or removed from /etc/runlevels/default, and config
 files are changed according to comment flags.  If it's the latter,
 rsync / is run without exceptions, services are added or removed from
 /etc/runlevels/default, and config files are changed according to
 comment flags.

 All user info on the submaster and minions would be effectively reset
 whenever the script is run and that's fine.  Root logins would have to
 be allowed on the submaster and minions but only with the SSH key.
 There are probably more paths to exclude when rsyncing master to
 submaster.

 That's it.  No matter how numerous the minions become, this should
 allow me to keep everything running by administrating only my own
 system, pushing that to the submaster, and having the submaster push
 to the minions.  I've been going over the nitty-gritty and everything
 looks good.

 What do you think?  Is there anything inherently wrong with rsyncing /
 onto a running system?  If there are little or no changes to make,
 about how much data would actually be transferred?  Is there a better
 tool for this than rsync?  I know Funtoo uses git for syncing with
 their portage tree.

 - Grant


 Only thing that comes immediately to mind in rsyncing an overwrite of
 / is that any process that's running that goes looking for libraries
 or other data after the rsync pulls the rug out from beneath it might
 behave erratically, crash, kick a puppy, write arbitrary data all over
 your drive. Also, it's somewhat important to be careful about the
 various not-really-there mounts, /dev, /sys, /proc... /run's probably
 touchy too, and /var has a few pieces that might be in use mid-sync
 and choke something along the way. My idea on that would be... build
 an initramfs that:
 
 What if the push is done while no one is logged in to the system(s)
 being updated?  I could also exclude /dev, /sys, /proc, and /run and
 reboot after the update.  If that's not good enough, what if I boot
 the systems being updated into read-only mode before updating them?
 I'm hoping to keep the process as simple as possible.


Consider what happens when you run emerge apache on a system running
apache. The universe doesn't explode and all kittens in the world don't
spontaneously die :-)

How is your scheme essentially any different? Presumably you have
already excluded virtual mounts using the appropriate magic switches.

Things like dbus might get upset by being updated, but you have to deal
with that on a portage machine anyway and one usually logs out and in to
fix that, or reboot if the system bus is affected.

You probably want to reboot each laptop after a full update.



 
 - Grant
 
 
 1) boots to a script
   a) warns the user that it's hungry and that feeding it will be
 dangerous to any non-backed-up data, with prompt
   b) warns the user again, with prompt ('cause watching an rsync roll
 by that eats that document you just spent 3 weeks on isn't fun)
 2) mounts / in a working directory
 3) rsyncs the new data from the sub-master
 4) kicks off a script to update a hardware keyed (mac address is good
 for this) set of settings (hostname, etc)
 5) reboots into the new system.

 For extra credit... sync /home back to the sub-master to prevent
 overfeeding the beast.

 --
 Poison [BLX]
 Joshua M. Murphy
 
 
 


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-02 Thread Grant
 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

I've been working on this and I think I have a good and simple plan.

My laptop roams around with me and is the master system.  The office
router is the submaster system.  All of the other office systems are
minion systems.  All of the systems are 100% hardware-identical
laptops.  All of the minions are 100% software-identical.

I install every package that any system needs on the master and create
an SSH keypair.  The only config files that change from their state on
the master are: /etc/conf.d/hostname, /etc/conf.d/net,
/etc/ssh/sshd_config, /etc/shorewall/*.  I write comments in those
files which serve as flags for scripted changes.

I write a script that is run from the master to the submaster, or from
the submaster to a minion.  If it's the former, rsync / is run with
exceptions (/usr/portage, /usr/local/portage, /var/log, /tmp, /home,
/root but /root/.ssh/id_rsa_script* is included), my personal user is
removed, a series of workstation users are created with useradd -m,
services are added or removed from /etc/runlevels/default, and config
files are changed according to comment flags.  If it's the latter,
rsync / is run without exceptions, services are added or removed from
/etc/runlevels/default, and config files are changed according to
comment flags.
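A hedged sketch of the former case (master to submaster), expressing the exclusions as an rsync filter file; the submaster address is invented, and rule order matters because rsync takes the first matching rule:

```shell
#!/bin/sh
# Sketch of the master -> submaster push described above.
# SUBMASTER is an invented address; post-sync user/config fixups are
# only indicated in comments.
set -eu

SUBMASTER="root@office-router"

# rsync filter rules for the master -> submaster direction.
# The '+' includes for the script's SSH key must precede '- /root/**',
# since rsync stops at the first matching rule.
filter_rules() {
    cat <<'EOF'
+ /root/
+ /root/.ssh/
+ /root/.ssh/id_rsa_script*
- /root/**
- /usr/portage
- /usr/local/portage
- /var/log
- /tmp
- /home
EOF
}

push_to_submaster() {
    rules=$(mktemp)
    filter_rules > "$rules"
    rsync -aHAX --delete --filter="merge $rules" / "$SUBMASTER:/"
    rm -f "$rules"
    # Afterwards: remove the personal user, useradd -m the workstation
    # users, adjust /etc/runlevels/default, and rewrite the
    # comment-flagged config files (not shown).
}

if [ "${1:-}" = "--run" ]; then
    push_to_submaster
fi
```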

All user info on the submaster and minions would be effectively reset
whenever the script is run and that's fine.  Root logins would have to
be allowed on the submaster and minions but only with the SSH key.
There are probably more paths to exclude when rsyncing master to
submaster.

That's it.  No matter how numerous the minions become, this should
allow me to keep everything running by administrating only my own
system, pushing that to the submaster, and having the submaster push
to the minions.  I've been going over the nitty-gritty and everything
looks good.

What do you think?  Is there anything inherently wrong with rsyncing /
onto a running system?  If there are little or no changes to make,
about how much data would actually be transferred?  Is there a better
tool for this than rsync?  I know Funtoo uses git for syncing with
their portage tree.

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Grant
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 That sounds about right.

 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)

 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

 Two general points I can add:

 1. Sharing config files turns out to be really hard. By far the easiest
 way is to just share /etc but that is an all or nothing approach, and
 you just need one file to be different to break it. Like /etc/hostname

 You *could* create a share directory inside /etc and symlink common
 files in there, but that gets very tedious quickly.

 Rather go for a centralized repo solution that pushes configs out, you
 must just find the one that's right for you.

Does using puppet or salt to push configs from my laptop qualify as a
centralized repo solution?

 2. Binary packages are almost perfect for your needs IMHO, running
 emerge gets very tedious quickly, and your spec is that all workstations
 have the same USE. You'd be amazed how much time you save by doing this:

 emerge -b on your laptop and share your /var/packages
 emerge -K on the workstations when your laptop is on the network

 step 2 goes amazingly quickly - eyeball the list to be emerged, they
 should all be purple, press enter. About a minute or two per
 workstation, as opposed to however many hours the build took.

The thing is my laptop goes with me all over the place and is very
rarely on the same network as the bulk of the laptop clients.  Most of
the time I'm on a tethered and metered cell phone connection
somewhere.  Build time itself really isn't a big deal.  I can have the
clients update overnight.  Whether the clients emerge or emerge -K is
the same amount of administrative work I would think.

 3. (OK, three points). Share your portage tree over the network. No
 point in syncing multiple times when you actually just need to do it once.

Yep, I figure each physical location should designate one system to
host the portage tree and distfiles.

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Grant
  Puppet seems like overkill for what I need.  I think all I really need
  is something to manage config file differences and user accounts.  At
  this point I'm thinking I shouldn't push packages themselves, but
  portage config files and then let each laptop emerge unattended based
  on those portage configs.  I'm going to bring this to the 'salt'
  mailing list to see if it might be a good fit.  It seems like a much
  lighter weight application.

 Two general points I can add:

 1. Sharing config files turns out to be really hard. By far the easiest
 way is to just share /etc but that is an all or nothing approach, and
 you just need one file to be different to break it. Like /etc/hostname

 You *could* create a share directory inside /etc and symlink common
 files in there, but that gets very tedious quickly.

 How about using something like unison? I've been using it for a while
 now to sync a specific subset of ~ between three computers.
 It allows for exclude rules for host-specific stuff.

I think what I'd be missing with unison is something to manage the
differences in those host-specific files.

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Alan McKinnon
On 01/10/2013 08:07, Grant wrote:
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 That sounds about right.

 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)

 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

 Two general points I can add:

 1. Sharing config files turns out to be really hard. By far the easiest
 way is to just share /etc but that is an all or nothing approach, and
 you just need one file to be different to break it. Like /etc/hostname

 You *could* create a share directory inside /etc and symlink common
 files in there, but that gets very tedious quickly.

 Rather go for a centralized repo solution that pushes configs out, you
 must just find the one that's right for you.
 
 Does using puppet or salt to push configs from my laptop qualify as a
 centralized repo solution?


yes



 
 2. Binary packages are almost perfect for your needs IMHO, running
 emerge gets very tedious quickly, and your spec is that all workstations
 have the same USE. You'd be amazed how much time you save by doing this:

 emerge -b on your laptop and share your /var/packages
 emerge -K on the workstations when your laptop is on the network

 step 2 goes amazingly quickly - eyeball the list to be emerged, they
 should all be purple, press enter. About a minute or two per
 workstation, as opposed to however many hours the build took.
 
 The thing is my laptop goes with me all over the place and is very
 rarely on the same network as the bulk of the laptop clients.  Most of
 the time I'm on a tethered and metered cell phone connection
 somewhere.  Build time itself really isn't a big deal.  I can have the
 clients update overnight.  Whether the clients emerge or emerge -K is
the same amount of administrative work I would think.


I see. So you give up the efficiency of binpkgs to get a system that at
least works reliably.

Within those constraints that probably is the best option.

 
 3. (OK, three points). Share your portage tree over the network. No
 point in syncing multiple times when you actually just need to do it once.
 
 Yep, I figure each physical location should designate one system to
 host the portage tree and distfiles.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Grant
  I'm soaking up a lot of your time (again).  I'll return with any real
  Gentoo questions I run into and to run down the final plan before I
  execute it.  Thanks so much for your help.  Not sure what I'd do
  without you. :)

 I'm sure Neil would step in if I'm hit by a bus
 He'd say the same things, and use about 1/4 of the words it takes me ;-)

 So far in this thread, I've managed about 0/4 of the words you've used...
 Oh damn!

 But yes, a build host and adding --usepkg=y to EMERGE_DEFAULT_OPTS in
 make.conf gives a massive speed increase. Run the build host in an easily
 recovered environment, like a VM, and you don't even have to monitor the
 world update on it, just run a script in the early hours that does emerge
--sync && emerge -uXX @world and check your mailbox for errors before
running emerge on the clients. Then use clusterssh or dsh to update them
 all at once.

I'm hoping to update everything on my own laptop before I have the
laptop clients update.  If I install everything on my own laptop that
any of the clients have installed, I should be able to avoid any
update trouble on the clients.  clusterssh or dsh sounds like a good
method for updating the clients.  Basically, once I update everything
on my laptop and it looks good, I want to be able to send the clients
a signal to update as well.
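That "update now" signal could be as simple as a loop over SSH; a sketch with invented hostnames and log path, where nohup keeps the update running if the laptop's metered link drops mid-session:

```shell
#!/bin/sh
# Sketch: tell each client to start its unattended update.
# The client list and log path are invented examples.
set -eu

CLIENTS="minion1 minion2 minion3"

update_cmd() {
    # Detach the update from the SSH session so a dropped connection
    # on the master's side doesn't kill a half-finished emerge.
    echo "nohup sh -c 'emerge --sync && emerge -uDN @world' >/var/log/auto-update.log 2>&1 &"
}

push_update() {
    for host in $CLIENTS; do
        ssh "root@$host" "$(update_cmd)"
    done
}

if [ "${1:-}" = "--run" ]; then
    push_update
fi
```

clusterssh or dsh do the same fan-out interactively, with the advantage of seeing all the clients' output at once.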

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Neil Bothwick
On Mon, 30 Sep 2013 23:07:14 -0700, Grant wrote:

 Build time itself really isn't a big deal.  I can have the
 clients update overnight.  Whether the clients emerge or emerge -K is
the same amount of administrative work I would think.

I can think of one exception, the occasional ebuild that doesn't compile
for whatever reason. Using emerge -K will only install packages that have
successfully compiled on the build host. This won't protect against
install bugs, because portage builds the package and then installs from
it, but it will catch most glitches.


-- 
Neil Bothwick

Those who live by the sword get shot by those who don't.


signature.asc
Description: PGP signature


Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Michael Orlitzky
Jumping in randomly:

With portage-2.2 stable, you can now put sets in overlays. This has
greatly simplified our shared configuration, because I can push out a
base set of packages to every system just by including it in our overlay
(which is configured on every machine).

If you can push out package sets, you can push out configuration. I've
recently started pushing out our Apache configs this way. I have a
package called apache2-macros, and an ebuild in our overlay for
apache2-macros-x.y.z. This is part of the set that gets installed on all
web servers, so when I update the apache2-macros package and ebuild, it
automatically gets pushed to the web servers during the next update.

You can use - instead for convenience, but then you'll have to
remember to re-emerge it now and then.
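From memory (the exact mechanism may differ between portage versions), a file-based set shipped in the overlay is just a list of atoms, one per line; the set name and atoms below are invented:

```
# sets/office-base inside the overlay (illustrative name and contents)
app-editors/vim
net-misc/rsync
www-client/firefox
```

Every machine configured with the overlay can then `emerge @office-base`, and adding an atom to the file pushes it to all of them on their next update.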

For the config files that /differ/, I just use a makefile. I have a git
repo laid out like the filesystem hierarchy, i.e. in the root of the
repo I have etc, and usr directories containing any config files
that need to be copied to /etc or /usr. The makefile isn't very
complicated, one magic rule handles most of the files which are plain
text and wind up in /etc. Here's a short example.

  FILES = /etc/conf.d/net \
  /etc/portage/env

  all: $(FILES)

  /%: ./%
  	cp $< $@
  	chmod 644 $@

You can add more files without touching the rules unless you need
different permissions.




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Neil Bothwick
On Tue, 01 Oct 2013 10:04:54 -0400, Michael Orlitzky wrote:

 With portage-2.2 stable, you can now put sets in overlays.

Nice! I missed that.


-- 
Neil Bothwick

System halted - Press all keys at once to continue.


signature.asc
Description: PGP signature


Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread joost
Alan McKinnon alan.mckin...@gmail.com wrote:
On 30/09/2013 19:31, Grant wrote:
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big
problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your
laptop
 (or big chunks of it) over to your other workstations.
 
 That sounds about right.
 
 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm
software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together
(it's
 hard to describe in text how puppet gets the job done, so much
easier to
 do it for real and watch the results)
 
 Puppet seems like overkill for what I need.  I think all I really
need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

Two general points I can add:

1. Sharing config files turns out to be really hard. By far the easiest
way is to just share /etc but that is an all or nothing approach, and
you just need one file to be different to break it. Like /etc/hostname

You *could* create a share directory inside /etc and symlink common
files in there, but that gets very tedious quickly.

Rather go for a centralized repo solution that pushes configs out, you
must just find the one that's right for you.

2. Binary packages are almost perfect for your needs IMHO, running
emerge gets very tedious quickly, and your spec is that all
workstations
have the same USE. You'd be amazed how much time you save by doing
this:

emerge -b on your laptop and share your /var/packages
emerge -K on the workstations when your laptop is on the network

step 2 goes amazingly quickly - eyeball the list to be emerged, they
should all be purple, press enter. About a minute or two per
workstation, as opposed to however many hours the build took.

3. (OK, three points). Share your portage tree over the network. No
point in syncing multiple times when you actually just need to do it
once.


 
 I'm soaking up a lot of your time (again).  I'll return with any real
 Gentoo questions I run into and to run down the final plan before I
 execute it.  Thanks so much for your help.  Not sure what I'd do
 without you. :)

I'm sure Neil would step in if I'm hit by a bus
He'd say the same things, and use about 1/4 of the words it takes me
;-)


-- 
Alan McKinnon
alan.mckin...@gmail.com

Grant,

Additionally. You might want to consider sharing /etc/portage and 
/var/lib/portage/world (the file)
I do that between my build host and the other machines. (Along with the portage 
tree, packages and distfiles)

That way all workstations end up with the same packages each time you run 
emerge -vauDk world on them.

And like Alan said, it goes really quick.
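A sketch of the sharing Joost describes, with invented host and network values; /etc/portage and the world file would be shared the same way (or bind-mounted from one export):

```
# Build host /etc/exports (illustrative network and paths):
/usr/portage      192.168.0.0/24(ro,no_subtree_check)
/var/packages     192.168.0.0/24(ro,no_subtree_check)

# Client /etc/fstab:
buildhost:/usr/portage   /usr/portage   nfs  ro  0 0
buildhost:/var/packages  /var/packages  nfs  ro  0 0
```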

--
Joost

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread Grant
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

That sounds about right.

 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)

Puppet seems like overkill for what I need.  I think all I really need
is something to manage config file differences and user accounts.  At
this point I'm thinking I shouldn't push packages themselves, but
portage config files and then let each laptop emerge unattended based
on those portage configs.  I'm going to bring this to the 'salt'
mailing list to see if it might be a good fit.  It seems like a much
lighter weight application.

I'm soaking up a lot of your time (again).  I'll return with any real
Gentoo questions I run into and to run down the final plan before I
execute it.  Thanks so much for your help.  Not sure what I'd do
without you. :)

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread thegeezer
On 09/30/2013 06:31 PM, Grant wrote:
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.
 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.
 That sounds about right.

 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)
 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

 I'm soaking up a lot of your time (again).  I'll return with any real
 Gentoo questions I run into and to run down the final plan before I
 execute it.  Thanks so much for your help.  Not sure what I'd do
 without you. :)

 - Grant

maybe someone could chip in re: experience with distributed compilation
and cached compiles?
https://wiki.gentoo.org/wiki/Distcc
http://ccache.samba.org/

this may be closer to what you are looking for ?



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread Alan McKinnon
On 30/09/2013 19:31, Grant wrote:
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.
 
 That sounds about right.
 
 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)
 
 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

Two general points I can add:

1. Sharing config files turns out to be really hard. By far the easiest
way is to just share /etc but that is an all or nothing approach, and
you just need one file to be different to break it. Like /etc/hostname

You *could* create a share directory inside /etc and symlink common
files in there, but that gets very tedious quickly.

Rather go for a centralized repo solution that pushes configs out, you
must just find the one that's right for you.

2. Binary packages are almost perfect for your needs IMHO, running
emerge gets very tedious quickly, and your spec is that all workstations
have the same USE. You'd be amazed how much time you save by doing this:

emerge -b on your laptop and share your /var/packages
emerge -K on the workstations when your laptop is on the network

step 2 goes amazingly quickly - eyeball the list to be emerged, they
should all be purple, press enter. About a minute or two per
workstation, as opposed to however many hours the build took.

3. (OK, three points). Share your portage tree over the network. No
point in syncing multiple times when you actually just need to do it once.
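Points 2 and 3 boil down to a few make.conf lines; a sketch, assuming the package directory is /var/packages and shared over NFS:

```
# Build host (the laptop) make.conf:
FEATURES="buildpkg"              # write a binary package for everything emerged
PKGDIR="/var/packages"           # exported over NFS

# Workstation make.conf:
PKGDIR="/var/packages"           # NFS mount from the build host
EMERGE_DEFAULT_OPTS="--usepkg=y"
```

emerge -K on a workstation then installs strictly from those packages, failing rather than compiling locally if one is missing.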


 
 I'm soaking up a lot of your time (again).  I'll return with any real
 Gentoo questions I run into and to run down the final plan before I
 execute it.  Thanks so much for your help.  Not sure what I'd do
 without you. :)

I'm sure Neil would step in if I'm hit by a bus
He'd say the same things, and use about 1/4 of the words it takes me ;-)


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread Frank Steinmetzger
On Mon, Sep 30, 2013 at 09:31:18PM +0200, Alan McKinnon wrote:

  (or big chunks of it) over to your other workstations.
  
  Puppet seems like overkill for what I need.  I think all I really need
  is something to manage config file differences and user accounts.  At
  this point I'm thinking I shouldn't push packages themselves, but
  portage config files and then let each laptop emerge unattended based
  on those portage configs.  I'm going to bring this to the 'salt'
  mailing list to see if it might be a good fit.  It seems like a much
  lighter weight application.
 
 Two general points I can add:
 
 1. Sharing config files turns out to be really hard. By far the easiest
 way is to just share /etc but that is an all or nothing approach, and
 you just need one file to be different to break it. Like /etc/hostname
 
 You *could* create a share directory inside /etc and symlink common
 files in there, but that gets very tedious quickly.

How about using something like unison? I've been using it for a while
now to sync a specific subset of ~ between three computers.
It allows for exclude rules for host-specific stuff.
-- 
Gruß | Greetings | Qapla’
Please do not share anything from, with or about me with any Facebook service.

No, you *can’t* call 999 now.  I’m downloading my mail.


signature.asc
Description: Digital signature


Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread Neil Bothwick
On Mon, 30 Sep 2013 21:31:18 +0200, Alan McKinnon wrote:

  I'm soaking up a lot of your time (again).  I'll return with any real
  Gentoo questions I run into and to run down the final plan before I
  execute it.  Thanks so much for your help.  Not sure what I'd do
  without you. :)  
 
 I'm sure Neil would step in if I'm hit by a bus
 He'd say the same things, and use about 1/4 of the words it takes me ;-)

So far in this thread, I've managed about 0/4 of the words you've used...
Oh damn!

But yes, a build host and adding --usepkg=y to EMERGE_DEFAULT_OPTS in
make.conf gives a massive speed increase. Run the build host in an easily
recovered environment, like a VM, and you don't even have to monitor the
world update on it, just run a script in the early hours that does emerge
--sync && emerge -uXX @world and check your mailbox for errors before
running emerge on the clients. Then use clusterssh or dsh to update them
all at once.


-- 
Neil Bothwick

Q. How many radical feminists does it take to change a light bulb?
A. Two - one to change the bulb and one to write a book about the passive
role of the socket.


signature.asc
Description: PGP signature


Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-29 Thread Grant
 I realized I only need two types of systems in my life.  One hosted
 server and bunch of identical laptops.  My laptop, my wife's laptop,
 our HTPC, routers, and office workstations could all be on identical
 hardware, and what better choice than a laptop?  Extremely
 space-efficient, portable, built-in UPS (battery), and no need to buy
 a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
 systems will use all of that stuff and some will use none, but it's
 OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
 once in a while on any system.

 Laptops are a good choice, desktops are almost dead out there, and thin
 clients nettops are just dead in the water for anything other than
 appliances and media servers

 What if my laptop is the master system and I install any application
 that any of the other laptops need on my laptop and push its entire
 install to all of the other laptops via rsync whenever it changes?
 The only things that would vary by laptop would be users and
 configuration.

 Could work, but don't push *your* laptop's config to all the other
 laptops. They end up with your stuff, which might not be what you want
 them to have. Rather have a completely separate area where you store portage
 configs, tree, packages and distfiles for laptops/clients and push from
 there.

 I actually do want them all to have my stuff and I want to have all
 their stuff.  That way everything is in sync and I can manage all of
 them by just managing mine and pushing.  How about pushing only
 portage configs and then letting each of them emerge unattended?  I
 know unattended emerges are the kiss of death but if all of the
 identical laptops have the same portage config and I emerge everything
 successfully on my own laptop first, the unattended emerges should be
 fine.

 Within those constraints it could work fine. The critical stuff to share
 is make.conf and /etc/portage/*, everything else can be shared to
 greater or lesser degree and you can undo things on a whim if you wish.

 There's one thing that we haven't touched on, and that's the hardware.
 Are they all identical hardware items, or at least compatible? Kernel
 builds and hardware-sensitive apps like mplayer are the top reasons
 you'd want to centralize things, but those are the very apps that will
 make your life miserable trying to find commonality that works in all
 cases. So do keep hardware needs in mind when making purchases.

Keeping all of the laptops 100% identical as far as hardware is
central to this plan.  I know I'm setting myself up for big problems
otherwise.

 Personally, I wouldn't do the building and pushing on my own laptop,
 that turns me inot the central server and updates only happen when I'm
 in the office. I'd use a central build host and my laptop is just
 another client. Not all that important really, the build host is just an
 address from the client's point of view

I don't think I'm making the connection here.  The central server
can't do any unattended building and pushing, correct?  So I would
need to be around either way I think.

I'm hoping I can emerge every package on my laptop that every other
laptop needs.  That way I can fix any build problems and update any
config files right on my own system.  Then I would push config file
differences to all of the other laptops.  Then each laptop could
emerge its own stuff unattended.

 OK, I'm thinking over how much variation there would be from laptop to
 laptop:

 1. /etc/runlevels/default/* would vary of course.
 2. /etc/conf.d/net would vary for the routers and my laptop which I
 sometimes use as a router.
 3. /etc/hostapd/hostapd.conf under the same conditions as #2.
 4. Users and /home would vary but the office workstations could all be
 identical in this regard.

 Am I missing anything?  I can imagine everything else being totally
 identical.

 What could I use to manage these differences?

 I'm sure there are numerous files in /etc/ with small niggling
 differences, you will find these as you go along.

 In a Linux world, these files actually do not subject themselves to
 centralization very well, they really do need a human with clue to make
 a decision whilst having access to the laptop in question. Every time
 we've brain-stormed this at work, we end up with only two realistic
 options: go to every machine and configure it there directly, or put
 individual per-host configs into puppet and push. It comes down to the
 same thing, the only difference is the location where stuff is stored.

I'm sure I will need to carefully define those config differences.
Can I set up puppet (or similar) on my laptop and use it to push
config updates to all of the other laptops?  That way the package I'm
using to push will be aware of config differences per system and push
everything correctly.  You said not to use puppet, but does that apply
in this scenario?

 I'm slowly coming to the conclusion that you are trying to solve a problem
 with Gentoo that binary 

Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-29 Thread Neil Bothwick
On Sun, 29 Sep 2013 11:31:17 -0700, Grant wrote:

  Personally, I wouldn't do the building and pushing on my own laptop,
  that turns me into the central server and updates only happen when I'm
  in the office. I'd use a central build host and my laptop is just
  another client. Not all that important really, the build host is just
  an address from the client's point of view  
 
 I don't think I'm making the connection here.  The central server
 can't do any unattended building and pushing, correct?  So I would
 need to be around either way I think.

If you ran the central server in a VM, you could have it run emerge
--sync && emerge -uDN @world from cron. You could do this without a VM,
but a VM allows you to take snapshots before each sync/build cycle, so
that you can roll back if an update breaks it.
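A nightly crontab entry along those lines might look like this (the schedule and log path are just examples):

```shell
# system crontab on the build VM: sync the tree at 03:00, then rebuild
# world with binary package creation enabled, keeping a log of each run
0 3 * * * root emerge --sync && emerge -uDN --buildpkg @world >> /var/log/auto-update.log 2>&1
```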


-- 
Neil Bothwick

The severity of the itch is inversely proportional to the reach.




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-29 Thread Alan McKinnon
On 29/09/2013 20:31, Grant wrote:

[snip]

 There's one thing that we haven't touched on, and that's the hardware.
 Are they all identical hardware items, or at least compatible? Kernel
 builds and hardware-sensitive apps like mplayer are the top reasons
 you'd want to centralize things, but those are the very apps that will
 make your life miserable trying to find commonality that works in all
 cases. So do keep hardware needs in mind when making purchases.
 
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

OK


 
 Personally, I wouldn't do the building and pushing on my own laptop,
 that turns me into the central server and updates only happen when I'm
 in the office. I'd use a central build host and my laptop is just
 another client. Not all that important really, the build host is just an
 address from the client's point of view
 
 I don't think I'm making the connection here.  The central server
 can't do any unattended building and pushing, correct?  So I would
 need to be around either way I think.
 
 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

I see what you desire now - essentially you want to clone your laptop
(or big chunks of it) over to your other workstations.

No problem, just share your laptop's stuff with the workstations. Either
share it directly, or upload your laptop's configs and buildpkgs to a
central fileserver where the workstations can access them (it comes down
to the same thing really)
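As a rough sketch, the fileserver end and the workstation end could look like this (the paths, subnet and hostname are placeholders):

```shell
# /etc/exports on the fileserver: one read-only export covering the
# portage tree, distfiles and binary packages
/srv/gentoo  192.168.1.0/24(ro,no_subtree_check)

# /etc/fstab on each workstation: mount the shared area in one place
fileserver:/srv/gentoo  /mnt/gentoo  nfs  ro,noatime  0 0

# make.conf on each workstation: point portage at the shared copies
# and prefer the binary packages built on the master
PORTDIR="/mnt/gentoo/portage"
DISTDIR="/mnt/gentoo/distfiles"
PKGDIR="/mnt/gentoo/packages"
EMERGE_DEFAULT_OPTS="--usepkg"
```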

 
 OK, I'm thinking over how much variation there would be from laptop to
 laptop:

 1. /etc/runlevels/default/* would vary of course.
 2. /etc/conf.d/net would vary for the routers and my laptop which I
 sometimes use as a router.
 3. /etc/hostapd/hostapd.conf under the same conditions as #2.
 4. Users and /home would vary but the office workstations could all be
 identical in this regard.

 Am I missing anything?  I can imagine everything else being totally
 identical.

 What could I use to manage these differences?

 I'm sure there are numerous files in /etc/ with small niggling
 differences, you will find these as you go along.

 In a Linux world, these files actually do not subject themselves to
 centralization very well, they really do need a human with clue to make
 a decision whilst having access to the laptop in question. Every time
 we've brain-stormed this at work, we end up with only two realistic
 options: go to every machine and configure it there directly, or put
 individual per-host configs into puppet and push. It comes down to the
 same thing, the only difference is the location where stuff is stored.
 
 I'm sure I will need to carefully define those config differences.
 Can I set up puppet (or similar) on my laptop and use it to push
 config updates to all of the other laptops?  That way the package I'm
 using to push will be aware of config differences per system and push
 everything correctly.  You said not to use puppet, but does that apply
 in this scenario?

My warning about using Puppet on Gentoo should have come with a
disclaimer: don't use puppet to make a Gentoo machine emerge packages
from source.

You intend to push binary packages always, where the workstation doesn't
have a choice in what it gets (you already decided that earlier). That
will work well and from your workstation's POV is almost identical to
how binary distros work.

 
 I'm slowly coming to the conclusion that you are trying to solve a problem
 with Gentoo that binary distros already solved a very long time ago. You
 are forcing yourself to become the sole maintainer of GrantOS and do all
 the heavy lifting of packaging. But Mint and friends already did all
 that work, and frankly they are much better at it than you or I.
 
 Interesting.  When I switched from Windows about 10 years ago I had
 only a very brief run with Mandrake before I settled on Gentoo so I
 don't *really* know what a binary distro is about.  How would this
 workflow be different on a binary distro?

A binary distro would be the same as I described above. How those
distros work is quite simple - their packages are archives like
quickpkgs with pre- and post-install/uninstall scripts. These scripts do
exactly the same thing as the various phase functions in portage - they
define where files are installed, their ownership and permissions, and
maybe run a migration script if needed.

The distro's package manager deals with all the details - you just tell
it what you want installed and it goes ahead and does it.

What the Puppet server does is tell the workstation it needs to install
package XYZ. Code on the workstation then runs the package manager to do
just that.

For config 

Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-27 Thread Grant
 I realized I only need two types of systems in my life.  One hosted
 server and bunch of identical laptops.  My laptop, my wife's laptop,
 our HTPC, routers, and office workstations could all be on identical
 hardware, and what better choice than a laptop?  Extremely
 space-efficient, portable, built-in UPS (battery), and no need to buy
 a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
 systems will use all of that stuff and some will use none, but it's
 OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
 once in a while on any system.

 Laptops are a good choice, desktops are almost dead out there, and thin
 clients and nettops are just dead in the water for anything other than
 appliances and media servers

 What if my laptop is the master system and I install any application
 that any of the other laptops need on my laptop and push its entire
 install to all of the other laptops via rsync whenever it changes?
 The only things that would vary by laptop would be users and
 configuration.

 Could work, but don't push *your* laptop's config to all the other
 laptops. They end up with your stuff, which might not be what you want
 them to have. Rather have a completely separate area where you store portage
 configs, tree, packages and distfiles for laptops/clients and push from
 there.

I actually do want them all to have my stuff and I want to have all
their stuff.  That way everything is in sync and I can manage all of
them by just managing mine and pushing.  How about pushing only
portage configs and then letting each of them emerge unattended?  I
know unattended emerges are the kiss of death but if all of the
identical laptops have the same portage config and I emerge everything
successfully on my own laptop first, the unattended emerges should be
fine.

 I'd recommend if you have a decent-ish desktop lying around, you press
 that into service as your master build host. yeah, it takes 10% longer
 to build stuff, but so what? Do it overnight.

Well, my goal is to minimize the number of different systems I
maintain.  Hopefully just one type of laptop and a server.

  Maybe puppet could help with that?  It would almost be
 like my own distro.  Some laptops would have stuff installed that they
 don't need but at least they aren't running Fedora! :)

 DO NOT PROVISION GENTOO SYSTEMS FROM PUPPET.

OK, I'm thinking over how much variation there would be from laptop to laptop:

1. /etc/runlevels/default/* would vary of course.
2. /etc/conf.d/net would vary for the routers and my laptop which I
sometimes use as a router.
3. /etc/hostapd/hostapd.conf under the same conditions as #2.
4. Users and /home would vary but the office workstations could all be
identical in this regard.

Am I missing anything?  I can imagine everything else being totally identical.

What could I use to manage these differences?
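For the rsync push itself I'm picturing something like this, with the per-host files held back (the hostname and exact exclude list are just a first guess):

```shell
# push the master install to a client, but leave the files that vary
# per host (and the virtual filesystems) untouched
rsync -aHAX --delete \
  --exclude=/etc/runlevels/default/ \
  --exclude=/etc/conf.d/net \
  --exclude=/etc/hostapd/hostapd.conf \
  --exclude=/home/ \
  --exclude=/dev/ --exclude=/proc/ --exclude=/sys/ --exclude=/run/ \
  / root@client1:/
```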

 Rather keep your laptop as your laptop with its own setup, and
 everything else with its own setup. You only need one small difference
 between what you want your laptop to have, and everything else to have,
 to crash that entire model.

I think it will work if I can find a way to manage the few differences
above.  Am I overlooking any potential issues?

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-27 Thread Alan McKinnon
On 27/09/2013 12:37, Grant wrote:
 I realized I only need two types of systems in my life.  One hosted
 server and bunch of identical laptops.  My laptop, my wife's laptop,
 our HTPC, routers, and office workstations could all be on identical
 hardware, and what better choice than a laptop?  Extremely
 space-efficient, portable, built-in UPS (battery), and no need to buy
 a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
 systems will use all of that stuff and some will use none, but it's
 OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
 once in a while on any system.

 Laptops are a good choice, desktops are almost dead out there, and thin
 clients and nettops are just dead in the water for anything other than
 appliances and media servers

 What if my laptop is the master system and I install any application
 that any of the other laptops need on my laptop and push its entire
 install to all of the other laptops via rsync whenever it changes?
 The only things that would vary by laptop would be users and
 configuration.

 Could work, but don't push *your* laptop's config to all the other
 laptops. They end up with your stuff, which might not be what you want
 them to have. Rather have a completely separate area where you store portage
 configs, tree, packages and distfiles for laptops/clients and push from
 there.
 
 I actually do want them all to have my stuff and I want to have all
 their stuff.  That way everything is in sync and I can manage all of
 them by just managing mine and pushing.  How about pushing only
 portage configs and then letting each of them emerge unattended?  I
 know unattended emerges are the kiss of death but if all of the
 identical laptops have the same portage config and I emerge everything
 successfully on my own laptop first, the unattended emerges should be
 fine.

Within those constraints it could work fine. The critical stuff to share
is make.conf and /etc/portage/*, everything else can be shared to
greater or lesser degree and you can undo things on a whim if you wish.

There's one thing that we haven't touched on, and that's the hardware.
Are they all identical hardware items, or at least compatible? Kernel
builds and hardware-sensitive apps like mplayer are the top reasons
you'd want to centralize things, but those are the very apps that will
make your life miserable trying to find commonality that works in all
cases. So do keep hardware needs in mind when making purchases.

Personally, I wouldn't do the building and pushing on my own laptop,
that turns me into the central server and updates only happen when I'm
in the office. I'd use a central build host and my laptop is just
another client. Not all that important really, the build host is just an
address from the client's point of view



 
 I'd recommend if you have a decent-ish desktop lying around, you press
 that into service as your master build host. yeah, it takes 10% longer
 to build stuff, but so what? Do it overnight.
 
 Well, my goal is to minimize the number of different systems I
 maintain.  Hopefully just one type of laptop and a server.
 
  Maybe puppet could help with that?  It would almost be
 like my own distro.  Some laptops would have stuff installed that they
 don't need but at least they aren't running Fedora! :)

 DO NOT PROVISION GENTOO SYSTEMS FROM PUPPET.
 
 OK, I'm thinking over how much variation there would be from laptop to laptop:
 
 1. /etc/runlevels/default/* would vary of course.
 2. /etc/conf.d/net would vary for the routers and my laptop which I
 sometimes use as a router.
 3. /etc/hostapd/hostapd.conf under the same conditions as #2.
 4. Users and /home would vary but the office workstations could all be
 identical in this regard.
 
 Am I missing anything?  I can imagine everything else being totally identical.
 
 What could I use to manage these differences?

I'm sure there are numerous files in /etc/ with small niggling
differences, you will find these as you go along.

In a Linux world, these files actually do not subject themselves to
centralization very well, they really do need a human with clue to make
a decision whilst having access to the laptop in question. Every time
we've brain-stormed this at work, we end up with only two realistic
options: go to every machine and configure it there directly, or put
individual per-host configs into puppet and push. It comes down to the
same thing, the only difference is the location where stuff is stored.

I'm slowly coming to the conclusion that you are trying to solve a problem
with Gentoo that binary distros already solved a very long time ago. You
are forcing yourself to become the sole maintainer of GrantOS and do all
the heavy lifting of packaging. But Mint and friends already did all
that work, and frankly they are much better at it than you or I.

I would urge you to take a good long hard look at exactly why a binary
distro is not suitable, as I feel that would solve all your issues. Run
Gentoo on 

Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-26 Thread Grant
 I'm trying to reduce the number of systems I spend time managing.  My
 previous plan was to set up multiseat on a small number of systems.
 Now I'm wondering if it would be better to use multiple systems with
 identical hardware and manage them in some sort of an optimized way so
 that each set of identical hardware behaves as much like a single
 machine as possible for management.  I could use small SoC systems so
 I don't have to worry about sourcing components later.  Is there a
 good tool or framework for this sort of thing?

 The solution you pick depends heavily on how many of these identical
 machines you have.

 For some small-ish number (gut feel tells me up to around 10 or so), you
 could do what I do for my development vms[2]:

Yes, under 10.

 - have 1 decent spec'ed machine as the master and buildhost
 - share /etc/portage/, $PORTDIR, /var/packages and /var/distfiles to all
 clients from some central location (NFS works really well for this)
 - for each package you want to have on a client, emerge it on the
 buildhost with the -b option (create binary packages)
 - emerge stuff on the clients with the -k (or possibly -K) option to use
 binary packages. Everything should show up in purple. If anything is a
 different colour, emerge that package on the buildhost and remerge it on
 the client.
 - for awesome street cred geek-points, install clusterssh and do all
 your clients in parallel[1]

 As long as you share important directories to each client, things stay
 consistent. What you essentially achieve is build once-install many times

 However, and I'm likely to get shot down for this here, I think you
 *really* need to reconsider whether Gentoo is even what you should be
 using for this. Put aside emotional attachments to your fav distro and
 take a long hard critical look at your pain-gain ratio. If all you
 really need is standard user-type gui stuffs on each client, what is
 Gentoo really buying you (other than the thrill of watching gcc output
 scroll by over and over and over)

 Use gentoo by all means on your central server to get exactly the
 features you want (Gentoo's strong point), but on a bunch of regular
 clients... I dunno, Ubuntu or Fedora are hard to beat for that...

I'm thinking of a different approach and I'm getting pretty excited.

I realized I only need two types of systems in my life.  One hosted
server and bunch of identical laptops.  My laptop, my wife's laptop,
our HTPC, routers, and office workstations could all be on identical
hardware, and what better choice than a laptop?  Extremely
space-efficient, portable, built-in UPS (battery), and no need to buy
a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
systems will use all of that stuff and some will use none, but it's
OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
once in a while on any system.

What if my laptop is the master system and I install any application
that any of the other laptops need on my laptop and push its entire
install to all of the other laptops via rsync whenever it changes?
The only things that would vary by laptop would be users and
configuration.  Maybe puppet could help with that?  It would almost be
like my own distro.  Some laptops would have stuff installed that they
don't need but at least they aren't running Fedora! :)

If I can make this work I will basically only admin my laptop and
hosted server no matter how large the office grows.  Huge time savings
and huge scalability.  No multiseat required.  Please shoot this down!

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-26 Thread Alan McKinnon
On 26/09/2013 11:08, Grant wrote:
 I'm thinking of a different approach and I'm getting pretty excited.
 
 I realized I only need two types of systems in my life.  One hosted
 server and bunch of identical laptops.  My laptop, my wife's laptop,
 our HTPC, routers, and office workstations could all be on identical
 hardware, and what better choice than a laptop?  Extremely
 space-efficient, portable, built-in UPS (battery), and no need to buy
 a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
 systems will use all of that stuff and some will use none, but it's
 OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
 once in a while on any system.

Laptops are a good choice, desktops are almost dead out there, and thin
clients and nettops are just dead in the water for anything other than
appliances and media servers


 What if my laptop is the master system and I install any application
 that any of the other laptops need on my laptop and push its entire
 install to all of the other laptops via rsync whenever it changes?
 The only things that would vary by laptop would be users and
 configuration.

Could work, but don't push *your* laptop's config to all the other
laptops. They end up with your stuff, which might not be what you want
them to have. Rather have a completely separate area where you store portage
configs, tree, packages and distfiles for laptops/clients and push from
there.

I'd recommend if you have a decent-ish desktop lying around, you press
that into service as your master build host. yeah, it takes 10% longer
to build stuff, but so what? Do it overnight.

  Maybe puppet could help with that?  It would almost be
 like my own distro.  Some laptops would have stuff installed that they
 don't need but at least they aren't running Fedora! :)

Errr no. Do not do that. Do not use puppet for Gentoo systems. Let me
make that clear :-)

DO NOT PROVISION GENTOO SYSTEMS FROM PUPPET.

You will break things horribly and will curse the day you tried.
Basically, puppet and portage will get in each other's way and clobber
each other. Puppet has no concept of USE flags worth a damn, cannot
determine in advance what an ebuild will provide and the whole thing
breaks puppet's 100% deterministic model.

Puppet is designed to work awesomely well with binary distros, that is
where it excels. Keep within those constraints. Same goes for chef,
cfengine and various others things that accomplish the same end.


 If I can make this work I will basically only admin my laptop and
 hosted server no matter how large the office grows.  Huge time savings
 and huge scalability.  No multiseat required.  Please shoot this down!

Rather keep your laptop as your laptop with its own setup, and
everything else with its own setup. You only need one small difference
between what you want your laptop to have, and everything else to have,
to crash that entire model.



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-26 Thread Johann Schmitz
Hi Alan,

On 26.09.2013 22:42, Alan McKinnon wrote:
 You will break things horribly and will curse the day you tried.
 Basically, puppet and portage will get in each other's way and clobber
 each other. Puppet has no concept of USE flags worth a damn, cannot
 determine in advance what an ebuild will provide and the whole thing
 breaks puppet's 100% deterministic model.
 
 Puppet is designed to work awesomely well with binary distros, that is
 where it excels. Keep within those constraints. Same goes for chef,
 cfengine and various others things that accomplish the same end.

Did you try to combine one of these solutions with portage's binary
package feature? With --usepkgonly gentoo is more or less a binary
distro. I'm thinking of using a single use flag set for 20+ Gentoo
servers to get rid of compiling large packages in the live environment.
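Concretely, I'm thinking of something like this in each server's make.conf (the binhost URL is a placeholder):

```shell
# make.conf on each of the servers: fetch prebuilt packages from the
# build host over HTTP and never fall back to compiling locally
PORTAGE_BINHOST="http://buildhost.example.org/packages"
EMERGE_DEFAULT_OPTS="--usepkgonly --getbinpkg"
```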

Regards,
Johann



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-26 Thread Alan McKinnon
On 27/09/2013 06:33, Johann Schmitz wrote:
 Hi Alan,
 
 On 26.09.2013 22:42, Alan McKinnon wrote:
 You will break things horribly and will curse the day you tried.
 Basically, puppet and portage will get in each other's way and clobber
 each other. Puppet has no concept of USE flags worth a damn, cannot
 determine in advance what an ebuild will provide and the whole thing
 breaks puppet's 100% deterministic model.

 Puppet is designed to work awesomely well with binary distros, that is
 where it excels. Keep within those constraints. Same goes for chef,
 cfengine and various others things that accomplish the same end.
 
 Did you try to combine one of these solutions with portage's binary
 package feature? With --usepkgonly gentoo is more or less a binary
 distro. I'm thinking of using a single use flag set for 20+ Gentoo
 servers to get rid of compiling large packages in the live environment.


binpkgs don't turn gentoo into a binary distro, they turn it into
something resembling a Unix from the 90s with pkgadd - using dumb
tarballs with no metadata and no room to make choices. Puppet fails at
that as the intelligence cannot happen in puppet, it has to happen in
portage. If the binpkg doesn't match what package.* says, puppet is
stuck and portage falls back to building locally. The result is worse
than the worst binary distro.

By all means use a central use set, it's what I do for my dev VMs and it
works out well for me. Just remember to run emerge on each machine
individually.




-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-25 Thread Alan McKinnon
On 25/09/2013 23:18, Grant wrote:
 I'm trying to reduce the number of systems I spend time managing.  My
 previous plan was to set up multiseat on a small number of systems.
 Now I'm wondering if it would be better to use multiple systems with
 identical hardware and manage them in some sort of an optimized way so
 that each set of identical hardware behaves as much like a single
 machine as possible for management.  I could use small SoC systems so
 I don't have to worry about sourcing components later.  Is there a
 good tool or framework for this sort of thing?


The solution you pick depends heavily on how many of these identical
machines you have.

For some small-ish number (gut feel tells me up to around 10 or so), you
could do what I do for my development vms[2]:

- have 1 decent spec'ed machine as the master and buildhost
- share /etc/portage/, $PORTDIR, /var/packages and /var/distfiles to all
clients from some central location (NFS works really well for this)
- for each package you want to have on a client, emerge it on the
buildhost with the -b option (create binary packages)
- emerge stuff on the clients with the -k (or possibly -K) option to use
binary packages. Everything should show up in purple. If anything is a
different colour, emerge that package on the buildhost and remerge it on
the client.
- for awesome street cred geek-points, install clusterssh and do all
your clients in parallel[1]

As long as you share important directories to each client, things stay
consistent. What you essentially achieve is build once-install many times
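In emerge terms the cycle is roughly this (the package is just an example):

```shell
# on the buildhost: build from source and also write a binary package
# into $PKGDIR (same effect as FEATURES="buildpkg" for every emerge)
emerge -av --buildpkg app-editors/vim

# on each client: install the prebuilt package; -k (--usepkg) falls back
# to compiling if no matching binpkg exists, -K (--usepkgonly) refuses to
emerge -avk app-editors/vim
```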


However, and I'm likely to get shot down for this here, I think you
*really* need to reconsider whether Gentoo is even what you should be
using for this. Put aside emotional attachments to your fav distro and
take a long hard critical look at your pain-gain ratio. If all you
really need is standard user-type gui stuffs on each client, what is
Gentoo really buying you (other than the thrill of watching gcc output
scroll by over and over and over)

Use gentoo by all means on your central server to get exactly the
features you want (Gentoo's strong point), but on a bunch of regular
clients... I dunno, Ubuntu or Fedora are hard to beat for that...



[1] if you haven't played with clusterssh do yourself a favour and do
so. there's something hugely awe-inspiring about typing
cssh host1 host2 host3 host4 host5 host6 ...
and watching 6 xterms pop up and all 6 run the same commands that you
type into the controller window.

[2] this sounds like I should take my own advice... but oddly Gentoo is
ideal for how I use them. I can upgrade and downgrade almost any app to
whatever version the developer says is on prod, and enable/disable USE
to get the same feature set, and do it all in 10 minutes. No binary
distro lets me do that :-)

-- 
Alan McKinnon
alan.mckin...@gmail.com