Hi Petr,
Recreating the volumes with the same replicated and replica IDs using
volutil create_rep should be the correct method.
For instance, if I had a replicated volume:
volumename      replicated volume-id
------------------------------------
repvol          7f000492

replicated across the following volume replicas:

servername      replica-name    replica-id
------------------------------------------
server1         repvol.0        c900007a
server2         repvol.1        dd000077
server3         repvol.2        c7000077
(this information can be gathered from /vice/vol/VRList on the SCM)
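For the example above, the corresponding VRList entry would look roughly like
this (the trailing zeroes pad the unused replica slots and the last field is
the VSG address, which I've just made up here for illustration):

repvol 7f000492 3 c900007a dd000077 c7000077 0 0 0 0 0 E0000104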
Then, if server1 dies and is rebuilt, I would issue the following command
to restore the group:
volutil -h server1 create_rep /vicepa repvol.0 7f000492 c900007a
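(The general form, as far as I remember, is:

volutil -h <server> create_rep <partition> <replica-name> <replicated-id> <replica-id>

and you can then check that the replica came back with something like
'volutil -h server1 info repvol.0', but double-check the exact arguments
against your volutil version.)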
If you did this and assertions still trigger, I'm very interested in
which assertion it is. Even better would be a stack trace. To get that,
start the server with:
startserver -zombify &
Let it trigger the assertion, at which point the server will freeze, and
then attach gdb:
gdb codasrv `pidof codasrv`
gdb> bt
.....
.....
gdb> q
On Thu, Dec 28, 2000 at 08:25:01PM +0100, Petr Tuma wrote:
> Hello,
>
> what is the correct procedure to rebuild a replicated volume on one of
> three servers that happened to crash and lose data? I tried just
> creating the volume with the same ID anew, and at first it looked like
> the server would obtain the data from the other two. After a while,
> however, it always crashes with an assertion ...
>
> Petr Tuma
>