Hello,

I solved my problem by starting again from scratch.

The steps, which may be important for others, were:
- stop coda on all servers
- erase all partitions needed for replication
- vice-setup on all servers
- Edit file '/vice/db/servers'
  to include my coda servers with the corresponding ID:
  (example:
  tarzan1  1
  tarzan2  2
  )
- Edit file '/usr/local/etc/coda/server.conf' to define hostname and ipaddress as in /etc/hosts (see the example after this list)
- Make sure that the directory '/var/lock/subsys' exists (see the command after this list)
- Run 'vice-setup-srvdir' to set up the additional partitions
  (/vicepb and /vicepc in my case, besides /vicepa)

  On the first coda server, the file '/vice/db/vicetab' should then have
  entries like:
   tarzan1   /vicepa   ftree   width=32,depth=4
   tarzan1   /vicepb   ftree   width=32,depth=4
   tarzan1   /vicepc   ftree   width=256,depth=3

- Restart the coda server on all nodes (see the commands after this list).
  Please check /vice/srv/SrvLog!

- Set up the replicated volumes by running the following commands on the SCM:
   createvol_rep iersdc tarzan1/vicepb tarzan2/vicepb
   createvol_rep iers2  tarzan1/vicepc tarzan2/vicepc
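
The server.conf entries mentioned above look roughly like this; the
address here is only a placeholder, take the real one from your
/etc/hosts:

   hostname=tarzan1
   ipaddress=192.168.1.1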
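
If '/var/lock/subsys' does not exist, simply creating it is enough:

   mkdir -p /var/lock/subsys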
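
To restart and check a server I did roughly the following on each node
(depending on how Coda was installed there may also be an init script
that does the same):

   startserver &
   tail /vice/srv/SrvLog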

Now it did work.

Regards,
Reiner


Hello Jan,

thanks for your response.

Well, I made sure that the 'vicepb' partition is set up (using
/usr/local/sbin/vice-setup-srvdir).

I restarted the server and now I found in
/vice/srv/SrvLog that the codaserver is not running:

Date: Fri 01/26/2007

09:36:39 Coda Vice, version 6.9.0 log started at Fri Jan 26 09:36:39 2007

09:36:39 RvmType is Rvm
09:36:39 Main process doing a LWP_Init()
09:36:39 Main thread just did a RVM_SET_THREAD_DATA

09:36:39 Setting Rvm Truncate threshhold to 5.

log_recover failed.
do_rvm_options failed
09:36:39 rvm_init failed RVM_EIO

What does this mean?
Well, the codaserver was running after the first setup, i.e. when only
the partition 'vicepa' was defined.

Best regards, Reiner

Jan Harkes wrote:

On Thu, Jan 25, 2007 at 05:51:04PM +0100, Reiner Dassing wrote:

a new setup of coda-6.9.0 on Debian Linux allowed me to createvol_rep
the default /vicepa, but the command

"createvol_rep iersdc tarzan1/vicepb tarzan2/vicepb"
shows: "Failed to dump the current VRDB into /vice/db/VRList.new"



Did you set up /vice/db/vicetab to have a valid entry for /vicepb?
Also, you may have to restart the servers before the new partition is
picked up from that file.


+ volutil -h tarzan1 dumpvrdb /vice/db/VRList.new
+ '[' 255 -ne 0 ']'
+ echo 'Failed to dump the current VRDB into /vice/db/VRList.new'
Failed to dump the current VRDB into /vice/db/VRList.new
+ exit 1



If you run that volutil command from the prompt, does it give an
explanation for the error? In the script we redirected stderr to
/dev/null to avoid too much clutter.
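
For example, just run the command from the script by hand so that stderr
stays visible:

   volutil -h tarzan1 dumpvrdb /vice/db/VRList.new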


# more /vice/vol/BigVolumeList
P/vicepa Htarzan1 T8b3f8c F8b3f78
W/.0 I1000001 H1 P/vicepa m0 M0 U2 W1000001 C45b88f7a D45b88f7a B0 A0



This does look like we don't know about any vicepb 'partitions' on
either tarzan1 or tarzan2, as there would be a P/vicepb line in the
volume list. But maybe the script never got far enough to update this
information.

Jan


