> ok, failed again...
> but this time it wasn't xenstored complaining.
>
> Mar 21 23:25:25 bti1 nv_sata: [ID 517869
> kern.warning] WARNING: nv_sata inst 0 port 1:
> excessive interrupt processing. Disabling port
> int_status=1 clear=0
>
> which kindly stopped all io to the zpool. so for the
> next test I'll try to either split up the zfs mirror
> or use a file on a UFS slice...
This seems to be a generic nv_sata driver problem and is
probably unrelated to xvm/xen.
I've hit the same problem on bare metal a few times and
filed bug 6642154 "nv_sata: excessive interrupt processing":
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6642154
The problem is that nv_sata's interrupt handler submits
the next command to the controller when it detects that
the previous command has completed, and before returning
from the interrupt handler it checks whether another
interrupt is pending in the controller. If the disk is
fast and interrupt processing in the kernel is slow, the
next command has already completed by the time nv_sata
checks for another pending interrupt, so it loops around
in the interrupt handler and repeats the whole process.
After ten passes through the interrupt handler the
hardware is declared defective with the message
"excessive interrupt processing. Disabling port..."
As a workaround I'm using a slightly modified nv_sata
module that allows 32 loops instead of 10; so far there
have been no more OS hangs with the message "nv_sata: ...
excessive interrupt processing". The change is a one-liner:
diff --git a/usr/src/uts/common/sys/sata/adapters/nv_sata/nv_sata.h b/usr/src/uts/common/sys/sata/adapters/nv_sata/nv_sata.h
--- a/usr/src/uts/common/sys/sata/adapters/nv_sata/nv_sata.h
+++ b/usr/src/uts/common/sys/sata/adapters/nv_sata/nv_sata.h
@@ -625,7 +625,7 @@ typedef struct prde {
  * processing loop, declare the interrupt wedged and
  * disable.
  */
-#define NV_MAX_INTR_LOOP 10
+#define NV_MAX_INTR_LOOP 32
 
 /*
  * flag values for nv_copy_regs_out
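(Raising the limit just trades a longer worst-case stay in
the interrupt handler against fewer false "wedged" verdicts
on healthy but fast hardware; it doesn't remove the
underlying livelock pattern.)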