Hi Yuji,
Just a general note about disks that I find myself suggesting to my customers
all the time:
the next server you install, make sure you put *everything* outside the
Unix-server cabinet. If you
do this, your next CPU upgrade will be a lot easier; just:
---
- Shut down the old CPU.
- Detach its SCSI cables.
- Connect the new CPU to the (old) disk subsystem (check out the multiplatform
HDS 7700E disk ;-)
- Boot up the system, and you're done.
---
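To make the "boot up and you're done" step concrete, here is a minimal sketch of a sanity check you might run on the new CPU before starting the AFS server processes. The partition names (/vicepa, /vicepb) are only examples, not something from your setup:

```shell
#!/bin/sh
# Hypothetical post-recabling check: before starting the AFS fileserver
# on the new CPU, confirm the vice partitions came across intact.
# /vicepa and /vicepb are example names; substitute your own.
for p in /vicepa /vicepb; do
    if [ -d "$p" ]; then
        echo "ok: $p is visible"
    else
        echo "MISSING: $p -- check cabling and /etc/vfstab before starting bos"
    fi
done
```

If anything is reported missing, fix the cabling or mounts first; the fileserver will otherwise salvage or skip those partitions.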
Everything is already there, so you do not need to deal with *anything*
regarding operating-system details,
IP addressing, patching, configuration, et al.
I currently do a lot of work along these lines: multiplatform disk
consolidation, i.e.
connecting several *different* Unix/NT servers to one disk subsystem. There
is
a lot to gain from doing it this way. It does, however, bring up a couple of
disk-availability and management issues that the disk subsystem needs to
address; for instance, once installed, it can *never* be taken down...
Having a Unix background, this was a new way for me to deal with disk, and it
took me quite some
time to get used to the thought of building systems this way. Data was
fine, but I wanted
to keep the operating system, with all its patches and configuration, locally,
just in case... Why,
really? It's just a disk on a SCSI line in both cases: a real disk in one case
and a virtual disk
in the other. To the host system there is absolutely no difference. To
the admin it means that you can attach a disk from a pool of disks to whichever
host currently needs it (on Solaris/AIX boxes sometimes
even without a reboot, provided the hardware supports it).
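As a sketch of what "attach without a reboot" looks like in practice, the usual move is to rescan for new devices with whatever online-reconfiguration tool the platform ships. The command names here are assumptions (devfsadm on newer Solaris, cfgmgr on AIX); the snippet only invokes whichever one actually exists on the host:

```shell
#!/bin/sh
# Hypothetical sketch: make a newly attached external disk visible
# without a full reboot. devfsadm (Solaris) and cfgmgr (AIX) are the
# assumed platform tools; on a host with neither, fall back to advising
# a reconfiguration boot.
if command -v devfsadm >/dev/null 2>&1; then
    devfsadm          # Solaris: rebuild /dev entries for new devices
elif command -v cfgmgr >/dev/null 2>&1; then
    cfgmgr            # AIX: discover and configure new devices
else
    echo "no online device-scan tool found; a reconfiguration boot may be needed"
fi
```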
Basically, this is the way mainframes have been built for ages. The concept of
a local disk is not even on
the map of a traditional mainframe; all disks are "outside". (Mind you, I don't
think it's possible
to buy, e.g., a Sun system without at least one internal disk.)
By doing it this way, the Unix admin gets the mainframe standard of disk
structure and availability
without losing the challenges and benefits of distributed systems, which the
mainframe lacks.
Think about this for a while...
I'm sorry that this does not directly address your current problem. I'm hoping,
however, to convince you to
put "everything outside", so that the next time the issue comes up you are
in great shape and the CPU
upgrade will be a lot easier.
Thanks,
/peo
Yuji Shinozaki wrote:
> We are looking to "upgrade" the hardware on one of our AFS servers (a
> fileserver, volserver and ptserver). In other words we have a better CPU
> which we would like to substitute for one of our current servers.
>
> We see two approaches to this upgrade:
>
> Bring the new hardware up as a new server, update CellServDB everywhere,
> force elections, migrate volumes from the old server to the new server,
> bring down the old server, update CellServDB, force new elections and be
> off and running.
>
> OR
>
> Bring the old server down, copy all its information to the new hardware,
> assign the same IP and bring it back up as if it were the same old server.
> The server in question has some read-write and readonly volumes. The disk
> containing the vice partitions are on separate disks, so they are easily
> moved from the old to the new server intact.
>
> The second approach seems simpler, but are there pitfalls? Does the
> server only identify itself to the other servers and clients by its IP
> address? Is there some host authentication info that we have to be
> careful about migrating ? We are running kerberos 5, if that makes any
> difference (no kaserver to worry about), so we would probably be
> generating new K5 host principals ( or should we try to maintain the
> old ones on the new server?)...
>
> Could someone outline their method of doing this? Any gotchas in store
> for us?
>
> yuji
> ----
> Yuji Shinozaki Systems Administrator
> [EMAIL PROTECTED] Dept of Physics and Astronomy
> http://www.physics.unc.edu Univ. of North Carolina - CH
> (919)962-7214 (voice) CB 3255 Philips Hall
> (919)962-0480 (fax) Chapel Hill, NC 27599