> msl wrote:
>
> The information you posted below contains no information regarding
> the system crash.
>
Ok, I only posted that information because it was all I had. Looking at
the standard system log files, all I could see was the "gap" between the
log entries.
> Do you have system dumps enabled?
>
> dumpadm
# dumpadm
      Dump content: kernel pages
       Dump device: /dev/dsk/c0t5006048449AF62A7d19s1 (swap)
Savecore directory: /var/crash/node2
  Savecore enabled: yes
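
For what it's worth, savecore is enabled and pointed at /var/crash/node2,
so a panic should leave a dump there. Two things I would check (my own
assumptions, not something raised in this thread): that the boot-time
savecore service actually completed, and, as I understand it, that a live
dump (savecore -L) is not an option here, since that needs a dedicated
dump device and this dump device is swap.

# check that the boot-time crash-dump service ran cleanly (Solaris 10 SMF)
svcs -l svc:/system/dumpadm:default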
>
> Is there a recent core file?
No.
>
> ls -lrt /var/crash/`uname -n`
total 0
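
Since /var/crash/node2 is empty, either the node never truly panicked (a
hard hang or a power reset leaves no dump), or savecore never ran. If the
crash reproduces, one way to guarantee a dump to analyze (just a
suggestion on my side, not something asked for above) is to force one:

# force a kernel crash dump, then reboot (disruptive!)
reboot -d

# after the node is back up, pull the dump off the dump device by hand
savecore -vd /var/crash/node2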
>
> If yes, note the number. For example, if the last
> files are:
>
> vmcore.19
> unix.19
>
> then the number is 19
>
> Then dump out the message buffer, stack and registers
>
> mdb -k 19
> ::msgbuf
> $c
> $r
> ::quit
>
> The ds.log does not contain the reverse copy
> "/usr/sbin/sndradm -C local -g MYPOOL -n -r -m"
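
For the record, the sequence I understand the Availability Suite docs to
call for around that reverse copy is roughly the following (the logging
step and the status check are from the admin guide as I read it, so
corrections are welcome; the group name MYPOOL is from my setup):

# put the MYPOOL group into logging mode before reversing
/usr/sbin/sndradm -C local -g MYPOOL -n -l

# full reverse sync, copying the secondary volume back onto the primary
/usr/sbin/sndradm -C local -g MYPOOL -n -m -r

# watch progress every 5 seconds, then print the final set state
dsstat -m sndr 5
/usr/sbin/sndradm -C local -g MYPOOL -P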
>
> There may be multiple things wrong with your systems. You should
> acquire the Explorer package from SunSolve, run it on both nodes of
> your SNDR replica, then follow the instructions to make the Explorer
> data available to Sun.
Done.
>
> Jim
>
>
Thanks very much for your time, Mr. Dunham.
Leal.
>
> > Hello all,
> > Maybe I cannot use the open source AVS with Solaris 10 anyway...
> > but having to think about compatibility problems between OpenSolaris
> > and Solaris is a big problem.
> > I still have the problem where, after a server restart, all the AVS
> > configuration is gone. So after I run some tests (reboot, unplugging
> > cables...), I need to do a full resync. This morning (for the second
> > time), I issued the command:
> > /usr/sbin/sndradm -C local -g MYPOOL -n -r -m
> > And after a few minutes the second node crashes:
> > dsstat
> > name              t  s    pct   role  ckps  dkps  tps  svt
> > dev/rdsk/c2d0s0   P  RN  99.66  net      -     0    0    0
> > dev/rdsk/c2d0s1          bmp             0     0    0    0
> > dev/rdsk/c3d0s0   P  RN  99.63  net      -     0    0    0
> > dev/rdsk/c3d0s1          bmp             0     0    0    0
> > Here is the ds.log from node2:
> > Nov 14 09:37:30 cfglockd: new lock daemon, pid 324
> > Nov 14 09:37:30 cfglockd: pid 324 unexpected signal 18, ignoring
> > Nov 14 09:37:34 scm: scmadm cache enable succeeded
> > Nov 14 09:37:35 sv: svboot: resume /dev/rdsk/c2d0s0
> > Nov 14 09:37:35 sv: svboot: resume /dev/rdsk/c2d0s1
> > Nov 14 09:37:35 sv: svboot: resume /dev/rdsk/c3d0s0
> > Nov 14 09:37:35 sv: svboot: resume /dev/rdsk/c3d0s1
> > Nov 14 09:37:35 ii: iiboot resume cluster tag <none>
> > Nov 14 09:52:07 cfglockd: daemon restarted on 10.244.0.81
> >
> > Nov 19 10:36:05 sv: svboot: suspend /dev/rdsk/c2d0s0
> > Nov 19 10:36:05 sv: svboot: suspend /dev/rdsk/c2d0s1
> > Nov 19 10:36:05 sv: svboot: suspend /dev/rdsk/c3d0s0
> > Nov 19 10:36:05 sv: svboot: suspend /dev/rdsk/c3d0s1
> > Nov 19 10:36:06 scm: scmadm cache disable succeeded
> > Nov 19 10:36:06 cfglockd: pid 324 terminate on signal 15
> > Nov 19 10:37:44 dscfg: dscfg -i
> > Nov 19 10:37:45 dscfg: dscfg -i -p /etc/dscfg_format
> > Nov 19 10:37:47 cfglockd: new lock daemon, pid 25729
> > Nov 19 10:37:47 cfglockd: pid 25729 unexpected signal 18, ignoring
> > Nov 19 10:37:53 scm: scmadm cache enable succeeded
> > Nov 19 10:37:55 ii: iiboot resume cluster tag <none>
> > Nov 19 10:38:39 sndr: sndradm -E node1 /dev/rdsk/c2d0s0 /dev/rdsk/c2d0s1 node2 /dev/rdsk/c2d0s0 /dev/rdsk/c2d0s1
> > Successful
> > Nov 19 10:38:39 sv: enabled /dev/rdsk/c2d0s0
> > Nov 19 10:38:39 sv: enabled /dev/rdsk/c2d0s1
> > Nov 19 10:38:52 sndr: sndradm -E node1 /dev/rdsk/c3d0s0 /dev/rdsk/c3d0s1 node2 /dev/rdsk/c3d0s0 /dev/rdsk/c3d0s1
> > Successful
> > Nov 19 10:38:52 sv: enabled /dev/rdsk/c3d0s0
> > Nov 19 10:38:52 sv: enabled /dev/rdsk/c3d0s1
> > Nov 19 10:44:21 cfglockd: new lock daemon, pid 323
> > Nov 19 10:44:21 cfglockd: pid 323 unexpected signal 18, ignoring
> > Nov 19 10:44:25 scm: scmadm cache enable succeeded
> > Nov 19 10:44:27 sv: svboot: resume /dev/rdsk/c2d0s0
> > Nov 19 10:44:27 sv: svboot: resume /dev/rdsk/c2d0s1
> > Nov 19 10:44:27 sv: svboot: resume /dev/rdsk/c3d0s0
> > Nov 19 10:44:27 sv: svboot: resume /dev/rdsk/c3d0s1
> > Nov 19 10:44:27 ii: iiboot resume cluster tag <none>
> >
> > And the /var/adm/messages:
> > Nov 19 10:36:04 node2 pseudo: [ID 129642 kern.info] pseudo-device: ii0
> > Nov 19 10:36:04 node2 genunix: [ID 936769 kern.info] ii0 is /pseudo/ii@0
> > Nov 19 10:37:48 node2 pseudo: [ID 129642 kern.info] pseudo-device: sdbc0
> > Nov 19 10:37:48 node2 genunix: [ID 936769 kern.info] sdbc0 is /pseudo/sdbc@0
> > Nov 19 10:37:55 node2 pseudo: [ID 129642 kern.info] pseudo-device: ii0
> > Nov 19 10:37:55 node2 genunix: [ID 936769 kern.info] ii0 is /pseudo/ii@0
> > Nov 19 10:37:55 node2 rdc: [ID 517869 kern.info] @(#) rdc: built 11:04:01 Aug 3 2007
> > Nov 19 10:37:55 node2 pseudo: [ID 129642 kern.info] pseudo-device: rdc0
> > Nov 19 10:37:55 node2 genunix: [ID 936769 kern.info] rdc0 is /pseudo/rdc@0
> > Nov 19 10:38:39 node2 sv: [ID 173014 kern.info] sv: rdev 0x6600000000, nblocks 312478483
> > Nov 19 10:38:39 node2 sv: [ID 173014 kern.info] sv: rdev 0x6600000001, nblocks 0
> > Nov 19 10:38:52 node2 sv: [ID 173014 kern.info] sv: rdev 0x6600000040, nblocks 312478483
> > Nov 19 10:38:52 node2 sv: [ID 173014 kern.info] sv: rdev 0x6600000041, nblocks 0
> > Nov 19 10:39:20 node2 rdc: [ID 455011 kern.notice] NOTICE: SNDR: Interface 10.xxx.x.xx <==> 10.xxx.x.xx : Up
> > Nov 19 10:42:44 node2 genunix: [ID 540533 kern.notice] SunOS Release 5.10 Version Generic_127112-02 64-bit
> >
> >
> > Thanks a lot!
> >
> > Leal.
> >
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss