I seem to have got the same core dump, in a different way.
I had a zpool set up on an iSCSI 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/001162.html
But after a reboot the iSCSI target was no longer available, so the iSCSI
initiator could not provide the path.
My sata drive is using the 'ahci' driver, connecting to the
ICH7 chipset on the motherboard.
And I have a SCSI drive on an Adaptec card, plugged into a PCI slot.
Thanks
Nigel Smith
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 DEFAULT cyl 2229 alt 2 hd 255
You can see the status of the bug here:
http://bugs.opensolaris.org/view_bug.do?bug_id=6566207
Unfortunately, it's showing no progress since 20th June.
This fix really needs to be in place for S10u4 and snv_70.
Thanks
Nigel Smith
Yes, I'm not surprised. I thought it would be a RAM problem.
I always recommend a 'memtest' on any new hardware.
Murphy's law predicts that you only have RAM problems
on PCs that you don't test!
Regards
Nigel Smith
Richard, thanks for the pointer to the tests in '/usr/sunvts', as this
is the first I have heard of them. They look quite comprehensive.
I will give them a trial when I have some free time.
Thanks
Nigel Smith
pmemtest - Physical Memory Test
ramtest - Memory DIMMs (RAM) Test
back to this forum, and on the 'storage-discuss' forum, where these sorts
of questions are more usually discussed.
Thanks
Nigel Smith
Please can you provide the source code for your test app.
I would like to see if I can reproduce this 'crash'.
Thanks
Nigel
upgrade to snv_70 or later.
Regards,
Nigel Smith
http://mail.opensolaris.org/pipermail/onnv-notify/2007-October/012782.html
Regards
Nigel Smith
for a different hard disk controller card:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-September/003399.html
Regards
Nigel Smith
And are you seeing any error messages in '/var/adm/messages'
indicating any failure on the disk controller card?
If so, please post a sample back here to the forum.
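For example, something like this pulls out recent disk-related lines
(a sketch; widen the grep pattern to match your controller's driver name):
# grep -i scsi /var/adm/messages | tail -20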
(PCI-E)
and SiI-3124 (PCI-X) devices.
2. The AHCI driver, which supports the Intel ICH6 and later devices, often
found on motherboards.
4. The NV_SATA driver, which supports Nvidia ck804/mcp55 devices.
Regards
Nigel Smith
Presumably the labels are somehow confused,
especially for your USB drives :-(
Regards
Nigel Smith
to show the iSCSI session has dropped out,
and the initiator is auto-retrying to connect to the target,
but failing. It may help to get a packet capture at this stage
to try to see why the logon is failing.
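A minimal way to grab such a capture with snoop would be (a sketch on my
part: I'm assuming the interface is e1000g0 and the target is on the
default iSCSI port 3260, so adjust both to suit):
# snoop -d e1000g0 -o /tmp/iscsi-logon.cap port 3260
You can then open the capture file in Wireshark.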
Regards
Nigel Smith
Tano, based on the above, I would say you need
unique GUIDs for the two separate Targets/LUNs.
Best Regards
Nigel Smith
http://nwsmith.blogspot.com/
hard drives would not help,
as the BIOS update may cause an identical problem
with each drive.)
Good Luck
Nigel Smith
could check that the Solaris iSCSI target works OK under stress
from something other than ESX, like, say, the Windows iSCSI initiator.
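If you need a quick test target to point the Windows initiator at,
something like this should do (a sketch using placeholder names, assuming
a pool called 'tank' and the old iscsitadm-based target):
# zfs create -V 10g tank/testvol
# iscsitadm create target -b /dev/zvol/rdsk/tank/testvol testtarget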
Regards
Nigel Smith
Following Eugene's report, I'm beginning to fear that some sort of regression
has been introduced into the iSCSI target code...
Regards
Nigel Smith
the snv_93 and snv_97 iscsi target to work
well with the VMware ESX and Microsoft initiators.
So it is a surprise to see these problems occurring.
Maybe some of the more recent builds, snv_98 or 99, have
'fixes' that have caused the problem...
Regards
Nigel Smith
Hi Tano
I will have a look at your snoop file.
(Tomorrow now, as it's late in the UK!)
I will send you my email address.
Thanks
Nigel Smith
the capture.
You can then use Ethereal or Wireshark to analyze the capture file.
On the 'Analyze' menu, select 'Expert Info'.
This will look through all the packets and report
any warnings or errors it sees.
Regards
Nigel Smith
'prstat' to see if it gives any clues.
Presumably you are using ZFS as the backing store for iSCSI, in
which case, maybe try with a UFS-formatted disk to see if that is a factor.
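For the prstat check, per-thread microstate accounting is the most
revealing view (the 5 is just a sampling interval in seconds):
# prstat -mL 5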
Regards
Nigel Smith
If you're using Solaris, maybe try 'prtvtoc'.
http://docs.sun.com/app/docs/doc/819-2240/prtvtoc-1m?a=view
(Unless someone knows a better way?)
Thanks
Nigel Smith
# prtvtoc /dev/rdsk/c1t1d0
* /dev/rdsk/c1t1d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 1465149168 sectors
* 1465149101 accessible
/2007/660/onepager/
http://bugs.opensolaris.org/view_bug.do?bug_id=5044205
Regards
Nigel Smith
a 'zpool status')
Thanks
Nigel Smith
'status' of your zpool on Server2?
(You have not provided a 'zpool status')
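For example, the verbose form also lists errors against individual devices:
# zpool status -v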
Thanks
Nigel Smith
'smartctl' (fully) working with PATA and
SATA drives on x86 Solaris.
I've done a quick search on PSARC 2007/660: it was
closed approved, fast-track, on 11/28/2007.
But I could not find any code that had been
committed to 'onnv-gate' referencing this case.
Regards
Nigel Smith
Hi Tano
Please check out my post on the storage-forum for another idea
to try which may give further clues:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006458.html
Best Regards
Nigel Smith
method used for this file is 98.
Please can you check it out, and if necessary use a more standard
compression algorithm.
The download file size was 8,782,584 bytes.
Thanks
Nigel Smith
So, that's my conclusion for now.
Maybe you could get some more snoop captures with other clients, and
with a different switch, and do a similar analysis.
Regards
Nigel Smith
is closed source :-(
Regards
Nigel Smith
be interesting to do two separate captures - one on the client
and one on the server, at the same time, as this would show if the
switch was causing disruption. Try to have the clocks on the client and
server synchronised as closely as possible.
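A one-off sync just before capturing is usually enough (a sketch;
substitute an NTP server you can actually reach for the placeholder below):
# ntpdate -u 0.pool.ntp.org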
Thanks
Nigel Smith
' for the network card, just in case it turns out to be a driver bug.
Regards
Nigel Smith
any good while that is happening.
I think you need to try a different network card in the server.
Regards
Nigel Smith
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-May/047270.html
Regards
Nigel Smith
David Magda wrote:
> This is also (theoretically) why a drive purchased from Sun is more
> expensive than a drive purchased from your neighbourhood computer
> shop: Sun (and presumably other manufacturers) takes the time and
> effort to test things to make sure that when a drive says I've
made,
or to actively help with code reviews or testing.
Best Regards
Nigel Smith
held back on
announcing the work on deduplication, as it just seems to
have ramped up frustration, now that it seems no
more news is forthcoming. It's easy to be wise after the event
and time will tell.
Thanks
Nigel Smith
that anyone
using raidz, raidz2, or raidz3 should not upgrade to that release?
For the people who have already upgraded, presumably the
recommendation is that they should revert to a pre-121 BE.
Thanks
Nigel Smith
the dev
repository will be updated to snv_128.
Then we'll see if any bugs emerge as we all rush to test it out...
Regards
Nigel Smith
Hi Robert
I think you mean snv_128 not 126 :-)
6667683 need a way to rollback to an uberblock from a previous txg
http://bugs.opensolaris.org/view_bug.do?bug_id=6667683
http://hg.genunix.org/onnv-gate.hg/rev/8aac17999e4d
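For anyone wanting to try that recovery once on snv_128, it surfaces as
the new -F option to 'zpool import' (a sketch, assuming a damaged pool
named 'tank'; -F discards the last few transactions to get back to a
good txg):
# zpool import -F tank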
Regards
Nigel Smith
Hi Gary
I will let 'website-discuss' know about this problem.
They normally fix issues like that.
Those pages always seemed to just update automatically.
I guess it's related to the website transition.
Thanks
Nigel Smith
/src/uts/common/io/sata/adapters/
Regards
Nigel Smith
More ZFS goodness putback before close of play for snv_128.
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010768.html
http://hg.genunix.org/onnv-gate.hg/rev/216d8396182e
Regards
Nigel Smith
to raise the priority on
his todo list.
Thanks
Nigel Smith
high %b.
And it's strange that you have c7, c8, c9, c10 and c11,
which looks like FIVE controllers!
Regards
Nigel Smith
If Native IDE is selected, the ICH10 SATA interface should
appear as two controllers, the first for ports 0-3,
and the second for ports 4-5.
Regards
Nigel Smith
-for-iscsi-and-nfs-over-1gb-ethernet
BTW, what sort of network card are you using,
as this can make a difference.
Regards
Nigel Smith
http://www.cuddletech.com/blog/pivot/entry.php?id=820
Regards
Nigel Smith
Another thing you could check, which has been reported to
cause a problem, is if network or disk drivers share an interrupt
with a slow device, like, say, a USB device. So try:
# echo ::interrupts -d | mdb -k
... and look for multiple driver names on an INT#.
Regards
Nigel Smith
Hi Robert
Have a look at these links:
http://delicious.com/nwsmith/opensolaris-nas
Regards
Nigel Smith
The iSCSI COMSTAR Port Provider is not installed by default.
What release of OpenSolaris are you running?
If pre snv_133 then:
$ pfexec pkg install SUNWiscsit
For snv_133, I think it will be:
$ pfexec pkg install network/iscsi/target
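Once the package is installed, the target service also needs enabling
(an assumption on my part that you want the COMSTAR target running
straight away; -r enables its dependencies too):
# svcadm enable -r svc:/network/iscsi/target:default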
Regards
Nigel Smith
Hello Carsten
Have you examined the core dump file with mdb's ::stack
to see if it gives a clue to what happened?
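For example (assuming the core file is in the current directory):
# mdb core
> ::stack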
Regards
Nigel
, there is little indication of any progress being made.
Maybe some other 'zfs-discuss' readers could try zdb on their pools,
if using a recent dev build, and see if they get a similar problem...
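The basic invocation is a safe thing to try (a sketch, assuming a pool
named 'tank'; zdb only reads from the pool):
# zdb tank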
Thanks
Nigel Smith
# mdb core
Loading modules: [ libumem.so.1 libc.so.1 libzpool.so.1 libtopo.so.1
libavl.so.1
And what device driver is the controller using?
Thanks
Nigel Smith