Running udevadm trigger (udevadm version 147) does not repopulate the
/dev/disk entries. During the failover, IO is being sent to the disk
and there are IO errors relating to this disk during this time. I will
report back with more specifics when this development environment is
accessible again.
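For anyone trying to reproduce this: the usual way to ask udev to replay block-device events and re-create the /dev/disk/by-* symlinks is sketched below. This is a generic sketch, not something confirmed in this thread to work around the problem (and udevadm option behavior varies across versions, including 147). It is guarded so it is a no-op on systems without udevadm.

```shell
# Replay "add" uevents for block devices so udev re-runs its rules
# (which normally recreate the /dev/disk/by-id, by-path, ... symlinks),
# then wait for the udev event queue to drain.
if command -v udevadm >/dev/null 2>&1; then
    udevadm trigger --subsystem-match=block --action=add || true
    udevadm settle || true
fi
replayed=yes   # marker that the sequence completed
```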
Hello, Mike
Thank you for helping me a lot.
The timeout is 30.
# cat /sys/block/sdb/device/timeout
30
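The 30 shown above is the per-device SCSI command timeout in seconds (the kernel default). It is not from this thread, but if that value ever needs to be raised persistently across reboots and device rediscovery, a udev rule is the usual route. A hedged sketch, where the filename and the 60-second value are made-up examples:

```
# /etc/udev/rules.d/99-scsi-timeout.rules (hypothetical filename)
# Set the command timeout to 60s for disk-type SCSI devices (type 0).
# The 60 here is an illustrative value, not a recommendation from this thread.
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="60"
```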
I'm afraid the problem is the storage itself.
So I called HP customer service to ask about the normal performance
of the storage.
HP customer service showed me that
Hi all,
With the latest release of open-iscsi, is there a way to use the
offload capabilities of Broadcom cards like NetXtreme II BCM5709
(bnx2i driver) ?
For me, the only result I get on a 2.6.32 kernel, for example, is the
following:
kernel: bnx2i [01:00.01]: ISCSI_INIT passed
kernel: bnx2i
On 05/20/2010 04:13 AM, MrJacK wrote:
Hi all,
With the latest release of open-iscsi, is there a way to use the
offload capabilities of Broadcom cards like NetXtreme II BCM5709
(bnx2i driver) ?
It is not supported in the upstream releases of open-iscsi, because
Broadcom is slacking :) Cc'd
On 05/13/2010 03:46 AM, 立凡 王 wrote:
# hdparm -tT /dev/sdb
/dev/sdb:
Timing cached reads: 6894 MB in 2.00 seconds = 3450.44 MB/sec
Timing buffered disk reads: 30 MB in 3.14 seconds = 9.55 MB/sec
(sdb is an iSCSI disk; the speed (9.55 MB/sec) is limited by the
network speed (100 Mb/sec).)
#
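As a sanity check on that claim (the arithmetic is mine, not from the thread): a 100 Mb/s link carries at most 100/8 = 12.5 MB/s before any TCP or iSCSI protocol overhead, so 9.55 MB/s of buffered reads is in the expected range for this network.

```shell
# Theoretical ceiling of a 100 Mb/s link in MB/s (integer arithmetic,
# truncates 12.5 to 12; real throughput is lower still due to
# TCP/IP and iSCSI header overhead).
link_mbit=100
ceiling_mb=$(( link_mbit / 8 ))
echo "${ceiling_mb} MB/s"   # prints "12 MB/s"
```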
We have a SLES 11 server connected to an Equallogic 10 Gig disk array.
Initially everything seemed to work just fine. When doing some IO
testing against the mounted volumes, in which we cause very high IO
loads, i.e. 95 to 100% as reported by iostat, we started seeing the
following messages in
Hi All,
/me hangs head in shame =) This is correct: in the upstream release
of open-iscsi there is no Broadcom offload support for your BCM5709
device.
Last year there was some effort to merge the uIP daemon into
iscsid in order to provide an upstream solution, but this has been
On 05/21/2010 03:43 PM, Taylor wrote:
We have a SLES 11 server connected to an Equallogic 10 Gig disk array.
Initially everything seemed to work just fine. When doing some IO
testing against the mounted volumes, in which we cause very high IO
loads, i.e. 95 to 100% as reported by iostat, we
Mike,
So is there any update to this?
Thank you,
Tarun
On Wed, May 5, 2010 at 9:42 AM, Mike Christie micha...@cs.wisc.edu wrote:
On 04/29/2010 04:44 AM, Oliver Hookins wrote:
I believe I'm hitting the exact same error... pity.
Any word on the fix?
I believe they have a fix. It is being
Mike, thanks for the reply. The first line in the log I posted shows
the ping timeout. Or did you want to see something else? Let me
know what specifically you are looking for.
Yes we are using multipath, I can post multipath.conf if that helps.
I tried tweaking various settings in
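The snippet cuts off before naming which settings were tweaked. For context only (none of this is from the thread), the multipath.conf knobs most often adjusted for iSCSI failover behavior are `no_path_retry` and `polling_interval`; the values below are illustrative assumptions, not recommendations from this discussion:

```
# /etc/multipath.conf (fragment; values are illustrative)
defaults {
    # Number of checker retries before failing IO when all paths
    # are down ("queue" would instead queue IO indefinitely).
    no_path_retry    5
    # Seconds between path health checks.
    polling_interval 5
}
```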