Hi,
we are serving iSCSI volumes from our OmniOS box ... in the log on
the client I keep seeing this pattern every few hours.
Any idea what could be causing this?
Server and client are connected directly via a crossover cable on a dedicated interface.
Jan 21 01:21:34 iscsi-client kernel: :
It looks like your NFS is dropping also (but then recovering), so I wouldn't
be pinning the problem solely on iSCSI.
The problem could be anywhere from the network driver all the way back to the
switch/cables etc. You'll need to go through each item methodically to find the
root cause.
We've seen problems like this when we have a SATA drive in a SAS expander
that is going out to lunch. Are there any drives showing errors in
iostat -En? Or any drive timeout messages in the ring buffer?
-nld
On Tue, Jan 21, 2014 at 3:04 AM, Tobias Oetiker t...@oetiker.ch wrote:
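Narayan's iostat -En / ring-buffer check can be scripted. A minimal sketch, shown here against a canned sample of illumos iostat -En output (the device names and error counts are invented for illustration); on a live OmniOS box you would pipe the real command output in instead:

```shell
# Sketch only: flag devices whose "iostat -En" error counters are
# nonzero.  The heredoc stands in for live output; on OmniOS run:
#   iostat -En | awk '/Errors:/ && !/Errors: 0 Hard Errors: 0 Transport Errors: 0/'
# and also scan the ring buffer:  dmesg | grep -i timeout
awk '/Errors:/ && !/Errors: 0 Hard Errors: 0 Transport Errors: 0/' <<'EOF'
c1t0d0  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
c1t1d0  Soft Errors: 3 Hard Errors: 12 Transport Errors: 7
EOF
```

With this sample it prints only the c1t1d0 line, the one with nonzero counters.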
On Jan 21, 2014, at 7:21 AM, Narayan Desai narayan.de...@gmail.com wrote:
Generally
Today Dan McDonald wrote:
On Tue, 21 Jan 2014, Tobias Oetiker wrote:
Hi Tim,
Today Tim Rice wrote:
Sorry, I should have given the requisite "yes, I know that this is a recipe
for sadness", for I too have experienced said sadness.
That said, we've seen this kind of problem when there was a device in a
vdev that was dying a slow death. There wouldn't necessarily be any sign,
aside from insanely
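A device dying a slow death often shows up only as an outlier in per-device service times. A hedged sketch of how one might spot it (the column positions assume the illumos iostat -xn layout, the 100 ms threshold and the sample numbers are invented for illustration):

```shell
# Assumed illumos "iostat -xn" columns:
#   r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
# Field 8 is asvc_t (average service time, ms); field 11 is the device.
# The heredoc stands in for live output; on a real box you would run:
#   iostat -xn 5 | awk '$8+0 > 100 {print $11, $8 "ms"}'
awk '$8+0 > 100 {print $11, $8 "ms"}' <<'EOF'
  12.0  34.0  512.0 1024.0 0.0 0.5  0.1   4.2  0  12 c1t0d0
  11.0  33.0  498.0  990.0 0.9 8.0 22.0 812.7  5  99 c1t1d0
EOF
```

With this sample it prints "c1t1d0 812.7ms", singling out the laggard that would otherwise look healthy in error counters.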
Hi Nld,
Today Narayan Desai wrote:
On 1/21/14, 10:09 PM, Saso Kiselkov wrote:
On 1/21/14, 10:16 PM, Saso Kiselkov wrote:
Today Saso Kiselkov wrote:
I guess you can check for this string at runtime:
$ strings /kernel/drv/amd64/igb | grep _eee_support
If it is missing, then it could be the buggy EEE support that's throwing
your link out of whack here.
Nevermind, missed your description of the KVM guests
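Saso's one-liner can be wrapped into a small reusable check. A sketch, not part of the original message: the function name has_eee_support is invented here; the driver path is the one Saso gave:

```shell
# Hypothetical wrapper around Saso's check: does a driver binary
# contain the _eee_support symbol string?
has_eee_support() {
    strings "$1" 2>/dev/null | grep -q _eee_support
}

# Path from the original message (OmniOS igb driver).
if has_eee_support /kernel/drv/amd64/igb; then
    echo "igb has EEE support compiled in; buggy EEE could explain link drops"
else
    echo "_eee_support not found (or driver path missing on this system)"
fi
```

Per Saso's reasoning, a hit here means the driver carries the EEE code path suspected of throwing the link out of whack; a miss rules that theory out.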