Thanks Mike. On our Prod boxes we use labels, so I will implement the
same on the RHEL 5.3 host.
Incidentally, we have been looking at the SM2 server logs and we think
that we have identified a driver / NIC issue that could well be
impacting the RHEL 5.3 server and the RHEL 5.2 (prod box). We
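For reference, mounting by label sidesteps the /dev/sdX renumbering that
can follow an iSCSI relogin. A minimal sketch, assuming an ext3 volume
(the device, label, and mount point here are illustrative):

  # tag the filesystem once
  e2label /dev/sdb1 oradata

  # /etc/fstab entry: mount by label, with _netdev so the mount waits
  # for networking/iSCSI services at boot
  LABEL=oradata  /u01/oradata  ext3  _netdev,defaults  0 0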
Thanks Mike...
On Mar 18, 5:45 pm, Mike Christie micha...@cs.wisc.edu wrote:
bigcatxjs wrote:
Hi,
We have encountered the error below. This is the first time I have
seen it;
This is with the noop settings set to 0, right? Was this the RHEL 5.3 or
5.2 setup?
It is our RHEL
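The "noop settings" being asked about are open-iscsi's NOP-Out ping
parameters in /etc/iscsid.conf; setting both to 0 disables the
ping-based connection checking, so a dead path is only noticed when
SCSI command timeouts fire:

  # interval between NOP-Out pings, and how long to wait for the
  # NOP-In reply; 0/0 turns the mechanism off entirely
  node.conn[0].timeo.noop_out_interval = 0
  node.conn[0].timeo.noop_out_timeout = 0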
bigcatxjs wrote:
Mar 17 18:27:59 MYHOST53 kernel: scsi 2:0:0:0: rejecting I/O to dead
device
It looks like one of the following is happening:
1. We were using RHEL 5.2 and the target logged us out or dropped the
session, and when we tried to log back in we got what we thought was a
fatal error (but
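When a session is stuck in this "dead device" state, one manual way to
recover is a logout/login cycle against the target; a sketch, using the
target name and portal that appear later in this thread:

  iscsiadm -m node -T iqn.2000-08.com.datacore:sm2-3 \
    -p 172.16.200.9:3260 --logout
  iscsiadm -m node -T iqn.2000-08.com.datacore:sm2-3 \
    -p 172.16.200.9:3260 --login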
On Mar 17, 5:06 pm, Mike Christie micha...@cs.wisc.edu wrote:
bigcatxjs wrote:
Thanks Mike...
On Mar 13, 8:45 pm, Mike Christie micha...@cs.wisc.edu wrote:
bigcatxjs wrote:
At these times is there lots of disk IO? Is there anything in the target
logs?
It is fair to say that all
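One way to answer the disk-I/O question is to watch per-device
utilisation around the times the errors fire, e.g. with iostat from the
sysstat package:

  # extended stats every 5 seconds; watch %util and await on the
  # iSCSI-backed disks
  iostat -x 5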
bigcatxjs wrote:
Hi,
We have encountered the error below. This is the first time I have
seen it;
This is with the noop settings set to 0, right? Was this the RHEL 5.3 or
5.2 setup?
Could you do
rpm -q iscsi-initiator-utils
Mar 17 12:40:47 MYHOST53 kernel: Vendor:
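For reference, the rpm query above prints the packaged initiator
version; on a stock RHEL 5.3 box the answer looks something like the
line below (the exact release string will vary):

  # rpm -q iscsi-initiator-utils
  iscsi-initiator-utils-6.2.0.868-0.18.el5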
Thanks Mike...
On Mar 13, 8:45 pm, Mike Christie micha...@cs.wisc.edu wrote:
bigcatxjs wrote:
At these times is there lots of disk IO? Is there anything in the target
logs?
It is fair to say that all these volumes take a heavy hit, in terms of
I/O. Each host (excluding the RHEL 5.3
Thanks Mike,
For this RHEL 5.2 setup, does it make a difference if you do not use
ifaces and set up the box like in 5.3 below?
I have used bonded ifaces so that the I/O requests can be split across
multiple NICs (both server-side and on the DataCore SANmelody SM node
NICs). This split is
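For reference, iface binding in open-iscsi pins each session to a chosen
network interface, with dm-multipath spreading the I/O across the
resulting sessions. A sketch of a typical two-NIC setup (the eth0/eth1
names are illustrative):

  # create two iface records and bind each to a physical NIC
  iscsiadm -m iface -I iface0 --op=new
  iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
  iscsiadm -m iface -I iface1 --op=new
  iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1

  # discovery then creates one node record per iface, so login brings
  # up one session per NIC
  iscsiadm -m discovery -t st -p 172.16.200.9:3260 -I iface0 -I iface1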
UPDATE: RHEL 5.3 host with NO disk I/O to the SAN volume has encountered
errors;
Mar 13 10:38:49 PETDBLINUX01 kernel: connection1:0: iscsi: detected
conn error (1011)
Mar 13 10:38:49 PETDBLINUX01 iscsid: Kernel reported iSCSI connection
1:0 error (1011) state (3)
Mar 13 10:38:52 PETDBLINUX01
bigcatxjs wrote:
UPDATE: RHEL 5.3 Host is showing errors. No Disk I/O to SAN volume
(last I/O Thursday 12th March);
Is there anything in the log before this? Something about a ping or nop
timing out?
Mar 13 10:38:49 MYHOST53 kernel: connection1:0: iscsi: detected conn
error (1011)
Mar
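A quick way to check for the ping/nop messages Mike is asking about is
to search the syslog around the conn error timestamps (adjust the path
if your syslog is configured differently):

  grep -iE 'nop|ping timeout|conn error' /var/log/messages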
bigcatxjs wrote:
At these times is there lots of disk IO? Is there anything in the target
logs?
It is fair to say that all these volumes take a heavy hit, in terms of
I/O. Each host (excluding the RHEL 5.3 test host) runs two Oracle
databases, of which some have intra-database replication
Thanks Ulrich,
Unfortunately, budgetary restrictions prevent us from moving to Fibre
Channel :(
Rich.
On Mar 12, 2:56 pm, Ulrich Windl ulrich.wi...@rz.uni-regensburg.de
wrote:
Hi,
I haven't investigated, but I see similar short offline periods for iSCSI
here.
For your situation I'd
bigcatxjs wrote:
For this RHEL 5.2 setup, does it make a difference if you do not use
ifaces and set up the box like in 5.3 below?
iscsiadm:
iSCSI Transport Class version 2.0-724
iscsiadm version 2.0-868
Target: iqn.2000-08.com.datacore:sm2-3
Current Portal: 172.16.200.9:3260,1
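The output quoted above comes from iscsiadm's session mode; raising the
print level shows connection state, negotiated timeouts, and the
attached SCSI devices, which is useful when chasing these drops:

  # session overview (what is quoted above)
  iscsiadm -m session -P 1

  # full detail, including connection state and attached disks
  iscsiadm -m session -P 3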