OK, I found the answer to this issue in the following post:
http://www.opensolaris.org/jive/click.jspa?searchID=1455340&messageID=287631
I had to modify the /usr/sbin/dscfgadm ksh script. On line 1020
typeset svc=$1
should be:
typeset svc='$1'
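For anyone applying the same workaround, a minimal sketch (the line 1020 offset
matches b101a and may differ in other builds, so make a backup first):

cp /usr/sbin/dscfgadm /usr/sbin/dscfgadm.orig
# edit /usr/sbin/dscfgadm and change line 1020 from:
#   typeset svc=$1
# to:
#   typeset svc='$1'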
But now I'm running into another issue.
We are trying to replicate a zvol from node box3 to node box4 via the AVS
Remote Mirror functionality.
We run OpenSolaris 2008.11 (b101a).
On both nodes the nws services, including nws_rdc, are up and running:
root@box3:~# dscfgadm -i
SERVICE        STATE    ENABLED
nws_scm        online   true
nws_sv         online   true
nws_ii         online   true
nws_rdc        online   true
nws_rdcsyncd   online   true

Availability Suite Configuration:
Local configuration database: valid
root@box4:~# dscfgadm -i
SERVICE        STATE    ENABLED
nws_scm        online   true
nws_sv         online   true
nws_ii         online   true
nws_rdc        online   true
nws_rdcsyncd   online   true

Availability Suite Configuration:
Local configuration database: valid
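The same services can also be checked directly through SMF; a sketch, assuming
the stock system/nws_* service names:

svcs nws_scm nws_sv nws_ii nws_rdc nws_rdcsyncd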
Both nodes can talk to each other:
root@box3:~# rpcinfo -p box4 | grep 121
    100143    5   tcp    121
    100143    6   tcp    121
    100143    7   tcp    121
root@box4:~# rpcinfo -p box4 | grep 121
    100143    5   tcp    121
    100143    6   tcp    121
    100143    7   tcp    121
root@box3:~# rpcinfo -T tcp box4 100143 5
program 100143 version 5 ready and waiting
root@box4:~# rpcinfo -T tcp box4 100143 5
program 100143 version 5 ready and waiting
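For completeness, the reverse direction (box4 reaching the RPC service on box3)
can be checked the same way; a sketch, run from box4:

rpcinfo -p box3 | grep 121
rpcinfo -T tcp box3 100143 5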
This is how we set up remote mirroring:
- step 1: zfs receive on box3; this is the source (primary) volume:
root@box3:~# zfs list storagepoola/mytestvol
NAME                     USED  AVAIL  REFER  MOUNTPOINT
storagepoola/mytestvol  2.03G   911G  2.03G  -
root@box3:~# zfs get volsize storagepoola/mytestvol
NAME                    PROPERTY  VALUE  SOURCE
storagepoola/mytestvol  volsize   10G
- step 2: create an empty zvol on box4:
zfs create -s -V 10G storagepoola/mytestvol
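Since SNDR needs the secondary volume to be at least as large as the primary,
it is worth confirming that the volsize values match on both nodes; a quick
sketch:

# run on each node; both should report 10G
zfs get -H -o value volsize storagepoola/mytestvol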
- step 3: calculated the bitmap size:
root@box3:~# dsbitmap -r /dev/zvol/rdsk/storagepoola/mytestvol
Remote Mirror bitmap sizing
Data volume (/dev/zvol/rdsk/storagepoola/mytestvol) size: 20971520 blocks
Required bitmap volume size:
  Sync replication: 81 blocks
  Async replication with memory queue: 81 blocks
  Async replication with disk queue: 721 blocks
  Async replication with disk queue and 32 bit refcount: 2641 blocks
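These numbers appear consistent with a one-bit-per-32-KB-segment bitmap for the
sync and memory-queue cases: 20971520 blocks = 10 GB, and 10 GB / 32 KB =
327680 bits = 40960 bytes = 80 blocks, plus one header block = 81 blocks.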
- step 4: created a 100-block soft partition on both nodes (we will use async
replication with memory queue; the creation command is sketched after the
metastat output below):
root@box3:~# metastat d20
d20: Soft Partition
    Device: d1
    State: Okay
    Size: 100 blocks (50 KB)
        Extent      Start Block     Block count
             0             1024             100
root@box4:~# metastat d20
d20: Soft Partition
    Device: d1
    State: Okay
    Size: 100 blocks (50 KB)
        Extent      Start Block     Block count
             0             1024             100
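For reference, the soft partition on each node was created along these lines (a
sketch; with metainit, a -p size can be given in blocks using the b suffix):

metainit d20 -p d1 100b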
- step 5: enabled the Remote Mirror set on both nodes:
sndradm -e box3 /dev/zvol/rdsk/storagepoola/mytestvol /dev/md/rdsk/d20 \
    box4 /dev/zvol/rdsk/storagepoola/mytestvol /dev/md/rdsk/d20 ip async
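After enabling the set, the configuration can be confirmed on each node with
the print option; a sketch:

sndradm -P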
- step 6: started a full resynchronization:
sndradm -m box4:/dev/zvol/rdsk/storagepoola/mytestvol
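While the resynchronization runs, its progress can be watched by giving dsstat
a polling interval; a sketch (5-second refresh):

dsstat -m sndr 5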
Then a few seconds later we get:
Nov 18 19:48:57 librdc: SNDR: Could not open file
box4:/dev/zvol/rdsk/storagepoola/mytestvol on remote node
Nov 18 19:48:58 sndr: SNDR: Could not open file
box4:/dev/zvol/rdsk/storagepoola/mytestvol on remote node
root@box3:~# dsstat -m sndr
name              t  s     pct  role   kps  tps  svt
poola/mytestvol   P  L  100.00   net     0    0    0
dev/md/rdsk/d20                  bmp     0    0    0
In /var/adm/messages we see:
Nov 18 19:55:03 box3 rdc: [ID 153032 kern.notice] NOTICE: SNDR client: err 26
RPC: Couldn't make connection
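Given the "RPC: Couldn't make connection" error, the remaining basics we know
to double-check on both nodes are the rdc entry in /etc/services (it should map
to TCP port 121) and consistent name resolution; a sketch:

grep rdc /etc/services
getent hosts box3 box4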
Does anyone know what we are doing wrong, or what might be causing this issue?
Thanks in advance for your reply.
K