>I'm a quite experienced Linux user and have installed/configured quite a few 
>Linux systems myself. What I haven't figured out is a smart way to set up a 
>complete network of linux boxes. The initial thought is to use a central nfs 
>server and have all /homes and /usr and such be on the nfs server. I have a 
>few doubts about this setup though.

hi. i once complained that a computer pool at my university only ran windows,
so now i have 60 machines to admin ...

my server is a nis server, so all user information is stored centrally.
it is also an nfs server: it exports /opt and /usr read-only, and /home
read-write.

clients use their local hard disk for:
 - booting with lilo; the kernel is stored locally.
 - a swap partition.
 - a root partition: everything except /home (autofs mounted from the server),
        /usr (nfs ro mount from the server), /opt (autofs mounted from the server).

client installation:
until now i used a debian install disk to fdisk/mke2fs/... the clients.
but it is a hack to mount /usr from some other machine and execute ssh and
rsync from there, so i will try to create a boot disk with rsync and ssh on it.
the installation method is "rsync -aHoIxtzvv -e ssh --delete root@pc1:/ /target"
(pc1 is a sample installation, configured to work ok). then i have to edit
/etc/hostname /etc/init.d/network /etc/fstab /etc/lilo*, and remove the ssh host
key (so a new one will be created on the next boot).
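the steps above could be wrapped in a small script. this is only a sketch:
the DRY_RUN switch is my addition (so the plan can be checked before touching
a freshly partitioned disk), and the per-machine files still get edited by hand.

```shell
#!/bin/sh
# sketch of the clone procedure above. with DRY_RUN=1 (the default here)
# the commands are only printed, not executed.
MASTER=root@pc1
TARGET=/target
DRY_RUN=${DRY_RUN:-1}

run() {
        if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# copy the sample installation onto the new disk
run rsync -aHoIxtzvv -e ssh --delete "$MASTER:/" "$TARGET"

# force a fresh ssh host key to be generated on first boot
run rm -f "$TARGET"/etc/ssh/ssh_host_key*

# per-machine content: these remain manual
for f in etc/hostname etc/init.d/network etc/fstab etc/lilo.conf; do
        echo "still to edit by hand: $TARGET/$f"
done
```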

i'm updating the clients via rsync:
        rsync -aI -e ssh --delete /bin <targetmachine>:/
(the same for /sbin and /lib, with --exclude /lib/modules)
        rsync -aI -e ssh --delete /root/pc/root <targetmachine>:/
(the /root of the client machines is stored on the server in /root/pc/root;
 the contents of my real /root must not be visible to anyone.
 since all clients use the same kernel and modules, i rsync /root/pc/boot and
 /root/pc/lib/modules in a similar way.)

/etc is a bit tricky: first i rsync the server's /etc to the client, but
with a long list of excludes (hostname, init.d/network,
X11/XF86Config, X11/Xserver, fstab, mtab, lilo*, exports, passwd*, group*,
shadow*, gshadow*, login.access, ssh/*, suid.conf, rc*.d/*, init.d/nis,
ypserv*, nsswitch.conf, inetd.conf, syslog.conf, smb.conf, exim.conf).
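with a list that long, rsync's --exclude-from option is easier to maintain
than a command line. a sketch (the file path is my invention, and the list
here is shortened; patterns are relative to the transfer root /etc):

```shell
# sketch: keep the exclude list in a file instead of on the command line
cat > /tmp/etc-excludes <<'EOF'
hostname
init.d/network
X11/XF86Config
fstab
mtab
lilo*
exports
passwd*
group*
shadow*
gshadow*
login.access
ssh/*
nsswitch.conf
EOF
# the copy then becomes:
# rsync -aI -e ssh --delete --exclude-from=/tmp/etc-excludes /etc/ <targetmachine>:/etc/
```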

then i rsync /root/pc/etc to the client (it contains config files shared by all
clients, but different from the server's config files), for example rc*.d/*,
exports, login.access, nsswitch.conf, passwd, group, g?shadow.

don't forget: as the last step, run "ssh <targetmachine> /sbin/lilo" so the
machine will still boot (files in /boot may have changed).
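all the update steps together, as a sketch that only prints the plan for one
client (the default host name pc1 is made up; pipe the output to sh to
actually run it). the order matters - lilo must come last:

```shell
#!/bin/sh
# sketch: print the full update plan for one client, in order.
host=${1:-pc1}
plan() {
        # system directories, minus the shared kernel modules
        for d in /bin /sbin /lib; do
                echo "rsync -aI -e ssh --delete --exclude /lib/modules $d $host:/"
        done
        # client-specific trees kept on the server under /root/pc
        echo "rsync -aI -e ssh --delete /root/pc/root $host:/"
        echo "rsync -aI -e ssh --delete /root/pc/boot $host:/"
        echo "rsync -aI -e ssh --delete /root/pc/lib/modules $host:/lib"
        # last step: keep the machine bootable in case /boot changed
        echo "ssh $host /sbin/lilo"
}
plan
```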

local software copies:
software not provided as a debian package is installed in /opt/<software>.
for every executable there is a wrapper script in /opt/bin, for example netscape:
#!/bin/bash
# prefer a local copy of netscape if one exists, otherwise use the nfs copy
if test -d /local/ns-4.5
then
        export MOZILLA_HOME=/local/ns-4.5
else
        export MOZILLA_HOME=/opt/ns-4.5
fi
exec "$MOZILLA_HOME/netscape"

this allows copying the software to a client's /local, so loading is faster,
and i don't need to set global environment variables for the software.

problem: i don't have a nice update strategy for /var. some directories like
/var/log, /var/tmp, /var/run and /var/lock are cleaned by cron scripts or at
bootup. but what to do with /var/lib and /var/spool ? that is application
dependent :-(((

security considerations:
because my machines dual-boot linux and win95, it's trivial to hack the local
linux installation (e.g. download a mini linux distribution and install it
under win95). once win95 is replaced with winNT, this big security hole will
be closed.

most pc bioses have a very weak password system, so it's easy to hack the bios.
then the system can boot from floppy disk, and again all security is lost.
for a secure setup, you might want to remove the floppy drives after
installation, but bootable cdrom drives cause the same problem.
not allowing machines to reboot might be one solution - once a machine goes
down, it is not accepted on the network any longer. but i don't know how to
realize this idea. (in my situation it can not be done anyway - users must be
allowed to reboot and choose win or linux.)

nfs read-write exporting is a _large_ security hole. i squash the user and
group ids 0-99, but that does not help much: you can fake any userid under
windows and access that user's home.

the server does not allow logins for users whose home dir is exported via
nfs. otherwise it would be trivial to create a .rhosts, .shosts or
.ssh/authorized_keys file in that user's home and bypass all password checks.
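one way to express that restriction is a login.access fragment like the
following ("admin" is a made-up account name; this is only a config sketch,
not my exact setup):

```
# /etc/login.access on the server -- deny interactive logins to everyone
# except root and a hypothetical admin account; the nfs users whose homes
# are exported never get a shell on the server itself
-:ALL EXCEPT root admin:ALL
```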

handling:
the worst situation is that you have to make manual changes on every machine.
prevent that at all costs.

in my situation i need some way to detect "machine xxx is up and running
linux", and then run some update script. any idea how i can detect this?
since my update process is push-based, i don't want the clients to pull config
changes ...
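one possible push loop for this, as a sketch: ping each client, then let ssh
confirm it is actually running linux (and not win95) before pushing. the host
names and the update-client script are placeholders, not my real setup:

```shell
#!/bin/sh
# sketch: only push to clients that answer ping AND report a linux kernel
# over ssh. "update-client" is a made-up name for the push script.
alive_and_linux() {
        ping -c 1 "$1" >/dev/null 2>&1 &&
        [ "$(ssh -o ConnectTimeout=5 "$1" uname -s 2>/dev/null)" = Linux ]
}

for host in pc1 pc2 pc3; do
        if alive_and_linux "$host"; then
                echo "updating $host"
                # ./update-client "$host"
        else
                echo "skipping $host (down or not running linux)"
        fi
done
```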

performance is not as good as i wished, but using 2.1 kernels doubles the
speed (i made my "how long does it take to start netscape" test: 17 seconds
with 2.0 kernels, 8 seconds with 2.1 kernels).

>can a linux nfs client cache stuff on the local disk?
i'm searching for that, too...

>I've been thinking of having /home/user on each users own box to increase
>typical performance and using the automounter (never used before) to make it
>show up on all boxes. Is that a good idea?

if your machines run linux only, stay up at night (so you can do backups), and
everyone has a fixed workplace, then it's ok.
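an autofs setup for that could look like the following config sketch (the
host names and export paths are examples): each user's home lives on that
user's own box, but shows up under /home on every machine.

```
# /etc/auto.master -- hand /home over to the automounter
/home   /etc/auto.home

# /etc/auto.home -- one entry per user; host names are made up
alice   -rw,hard,intr   alicebox:/export/home/alice
bob     -rw,hard,intr   bobbox:/export/home/bob
```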

look at the coda project (www.coda.cs.cmu.edu). it provides a distributed
filesystem with local caching of data. you can even support disconnected
operation (e.g. someone with a notebook can use the network home dir), and
you can implement things like server fault tolerance: all data can be kept
on two servers, so if one is down, the other one will still work.

my problem with coda is that its security system uses a ticket mechanism.
i'm using apache with cgi scripts, and the cgi scripts have to run as the
user who wrote them. here the security system doesn't work, since apache's
suexec program cannot acquire the required coda privileges.
and i don't want to make some files world readable.
(any idea how to solve this?)

>PS How do stuff like rpms cope in a networked environment?
i guess rpm works more or less like dpkg.
the current model is "run them only on the server, and fix the local
parts with your own script magic".

if they had some "--ignore /usr" parameter, you could do the same rpm/dpkg
calls on the clients, too, and let dpkg/rpm handle most of the work.

but debian has no automatic installation mode, and many packages ask silly
questions during installation, so i prefer the current rsync way.

windows nt has a way to do unattended installations. this is something linux
distributions should have, too!

andreas
