Well, when you leave out a bunch of relevant information you also leave
people guessing! :-)
Regardless, is it possible that all of your testing was done with ZFS and not
just the raw disk? If so, it is possible that ZFS isn't noticing the hot
unplugging of the disk until it tries to access the disk.
1) I don't believe that any bug report has been generated, despite various
e-mails about this topic.
2) The marvell88sx driver has not been changed recently, so if this problem
actually exists, it is probably related to the sata framework.
3) Is this problem simply that when a device
In the most recent code base (both OpenSolaris/Nevada and S10Ux with patches),
all the known marvell88sx problems were dealt with long ago.
However, I've said this before: Solaris on 32-bit platforms has problems and
is not to be trusted. There are far, far too many places in the source where
a 64-bit value is assumed to be read or written atomically.
As far as I can tell from the patch web pages:
For Solaris 10 x86, 138053-01 should have the fixes (it does
depend on other earlier patches though). I find it very difficult
to tell what the story is with patches, as the patch numbers
seem to have very little in them to correlate them to the code.
Yes, there have been bugs with heavy I/O and ZFS running the system
out of memory. However, there was some contention in the thread
about it possibly being due to marvell88sx driver bugs (most likely not).
Further, my point that 32-bit Solaris is unsafe at any speed is still
true. Without
It isn't as simple as getting an old, stale value. You can get a totally
incorrect value. Example:
Let us assume a monotonically increasing 64-bit value which at the start
of this discussion is 0x00000000FFFFFFFF (upper 32 bits all 0, lower 32
bits all 1).
The 32-bit kernel goes to read the 64-bit value and does so as two separate
32-bit loads. If the value is incremented between the two loads, the kernel
can observe a value that is neither the old value nor the new one.
If you look at the contents of the CR, it does say that. However, there
are something like 200 instances, and of those perhaps one or two
dozen are NOT statistics. A few examples from around the kernel
were pointed out (interrupt handling, NIC driver, ZFS, ...).
eric kustarz wrote:
On Mar 6, 2008, at 7:58 AM, Brian D. Horn wrote:
Take a look at CR 6634371. It's worse than you probably thought.
The only place I see ZFS mentioned in that bug report is regarding
z_mapcnt. It's being atomically inc/dec'd in zfs_addmap()/zfs_delmap(),
so those are ok.
ZFS is not 32-bit safe. There are a number of places in the ZFS code where
it is assumed that a 64-bit data object is being read atomically (or set
atomically). That simply isn't true on 32-bit platforms and can lead to
weird and subtle bugs.
This message posted from opensolaris.org
Take a look at CR 6634371. It's worse than you probably thought.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
eric kustarz wrote:
On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:
You sir, are a gentleman and a scholar! Seriously, this is exactly
the information I was looking for, thank you very much!
Would you happen to know if this has improved since build 63 or if
the chipset has any