Jan Harkes wrote:
> That's a little strange, I thought you had 2 volumes, the root volume
> and a triply replicated testvol.
No, the testvol was in the first installation. Because I wasn't sure what was going on, I decided to reinstall the whole thing without a testvol.
> Ok, let's check if the volume location database is out of sync:
>
>     # bldvldb.sh clusty1.mytest.de
>
> This will fetch an up-to-date list of volumes from clusty1 and rebuild
> the VLDB file. If clusty1 really doesn't have its replica, then
> createvol_rep should succeed.
[EMAIL PROTECTED] db]# bldvldb.sh clusty1.mytest.de
Fetching volume lists from servers:
V_BindToServer: binding to host clusty1.mytest.de
GetVolumeList finished successfully
clusty1.mytest.de - success
V_BindToServer: binding to host clusty1
VLDB completed.
[EMAIL PROTECTED] db]# createvol_rep / clusty1.mytest.de clusty2.mytest.de clusty3.mytest.de
/ already exists as a replica on clusty1.mytest.de

> If it still fails, then clusty1 still thinks it has that volume
> replica, and we can remove it manually. We first need to know the
> internal id of the volume replica, and then we can tell the server to
> drop the volume.
>
>     # volutil -h clusty1.mytest.de info /.0 | grep volume
>     Volume header for volume 01000001 (/.0)
>
>     # volutil -h clusty1.mytest.de purge 0x01000001 /.0
>     # bldvldb.sh clusty1.mytest.de
>
> At this point createvol_rep should work.
>
> Jan
[EMAIL PROTECTED] db]# volutil -h clusty1.mytest.de info /.0 | grep volume
V_BindToServer: binding to host clusty1.mytest.de
Volume header for volume 01000001 (/.0)
[EMAIL PROTECTED] db]# volutil -h clusty1.mytest.de purge 0x01000001 /.0
V_BindToServer: binding to host clusty1.mytest.de
Volume 01000001 (/.0) successfully purged
[EMAIL PROTECTED] db]# bldvldb.sh clusty1.mytest.de
Fetching volume lists from servers:
V_BindToServer: binding to host clusty1.mytest.de
GetVolumeList finished successfully
clusty1.mytest.de - success
V_BindToServer: binding to host clusty1
VLDB completed.
[EMAIL PROTECTED] db]# createvol_rep / clusty1.mytest.de clusty2.mytest.de clusty3.mytest.de
Replicated volumeid is 7f000001
creating volume /.0 on clusty1.mytest.de (partition /vicepa)
V_BindToServer: binding to host clusty1.mytest.de
creating volume /.1 on clusty2.mytest.de (partition /vicepa)
V_BindToServer: binding to host clusty2.mytest.de
creating volume /.2 on clusty3.mytest.de (partition /vicepa)
V_BindToServer: binding to host clusty3.mytest.de
Fetching volume lists from servers:
V_BindToServer: binding to host clusty3.mytest.de
GetVolumeList finished successfully
clusty3.mytest.de - success
V_BindToServer: binding to host clusty2.mytest.de
GetVolumeList finished successfully
clusty2.mytest.de - success
V_BindToServer: binding to host clusty1.mytest.de
GetVolumeList finished successfully
clusty1.mytest.de - success
V_BindToServer: binding to host clusty1
VLDB completed.
<echo / 7f000001 3 01000002 02000001 03000001 0 0 0 0 0 >> /vice/db/VRList.new>
V_BindToServer: binding to host clusty1
VRDB completed.
Do you wish this volume to be Backed Up (y/n)? [n] n

On the client clusty4:
[EMAIL PROTECTED] ~]# cfs whereis /coda/mytest.de/
 clusty1.mytest.de  clusty2.mytest.de  clusty3.mytest.de

So it looks good. Thanks a lot...

Now I have to figure out how to put files on the replicated volume; then I will shut down some servers and see how Coda reacts. A sketch of that is below.
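
A minimal sketch of those steps, assuming a Coda user has already been created in the realm's authentication database (the name "admin" below is a placeholder): acquire tokens on the client, copy a file into the mounted volume, and then check server connectivity after taking one server down.

    # clog admin@mytest.de
    # cp /etc/hosts /coda/mytest.de/
    # ls -l /coda/mytest.de/

After shutting down e.g. clusty2, cfs checkservers on the client should report that server as unreachable, while the file should still be readable from the remaining replicas:

    # cfs checkservers
    # cat /coda/mytest.de/hosts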


Achim