Can anyone suggest a timeout I might be hitting or a setting I'm missing?

The rundown:

- EqualLogic target
- CentOS 5.2 client
- xfs > lvm > iscsi

During a period of high load the EqualLogic decides to load balance:

 INFO  4/13/09  12:08:29 AM  eql3    iSCSI session to target
',' from
initiator ',' was
closed.   Load balancing request was received on the array.  

 INFO  4/13/09  12:08:31 AM  eql3    iSCSI login to target
',' from
initiator ','
successful, using standard frame length.  

On the client I see:

Apr 13 00:08:29 moo kernel: [4576850.161324] sd 5:0:0:0: SCSI error:
return code = 0x00020000

Apr 13 00:08:29 moo kernel: [4576850.161330] end_request: I/O error, dev
sdc, sector 113287552

Apr 13 00:08:32 moo kernel: [4576852.470879] I/O error in filesystem
("dm-10") meta-data dev dm-10 block 0x6c0a000
("xfs_trans_read_buf") error 5 buf count 4096

Apr 13 00:08:32 moo kernel: [4576852.471845]
xfs_force_shutdown(dm-10,0x1) called from line 415 of
file /builddir/build/BUILD/xfs-kmod-0.5/_kmod_build_/xfs_trans_buf.c.
Return address = 0xffffffff884420b5

Apr 13 00:08:32 moo kernel: [4576852.475055] Filesystem "dm-10": I/O
Error Detected.  Shutting down filesystem: dm-10

Apr 13 00:08:32 moo kernel: [4576852.475688] Please umount the
filesystem, and rectify the problem(s)

Check out the timestamps; they sync up quite nicely.

The funny thing is that, judging from the logs, this load balancing happens
every couple of days without a peep. Then twice in the last couple of
nights, during periods of high load, it triggered an instant I/O error that
makes xfs want to bail out.
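For reference, these are the open-iscsi timeout knobs in
/etc/iscsi/iscsid.conf that look relevant to me (the values shown are just
examples, not what I'm running or recommending):

```ini
# How long (seconds) the SCSI layer queues I/O waiting for a dropped
# session to re-establish before failing outstanding requests with
# errors -- if this fires, the filesystem sees I/O errors like the above.
node.session.timeo.replacement_timeout = 120

# NOP-Out ping interval/timeout used to detect a dead connection.
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5

# Login/logout timeouts for (re)establishing the session.
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
```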

Matthew Kent \ SA \

You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.