We see the same issue on an X4540 (Thor) system with 500 GB disks:
lots of:
...
Nov 3 16:41:46 uva.nl scsi: [ID 107833 kern.warning] WARNING:
/p...@3c,0/pci10de,3...@f/pci1000,1...@0 (mpt5):
Nov 3 16:41:46 encore.science.uva.nl Disconnected command timeout for Target 7
...
This system is running nv126 XvM right now; I haven't tried it without XvM.
Without XvM we do not see these issues. We're running the VMs through NFS now
(using ESXi)...
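For anyone reproducing this setup: the NFS side is roughly a sketch along these lines (pool, filesystem, and datastore names are made up for illustration):

```shell
# Create a dedicated ZFS filesystem for the VM images (names are examples)
zfs create tank/vmstore
# Export it read/write over NFS
zfs set sharenfs=rw tank/vmstore
# On the ESXi host, add the export as an NFS datastore, e.g.:
#   esxcfg-nas -a -o nfsserver.example.org -s /tank/vmstore vmstore
```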
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
How did your migration to ESXi go? Are you using it on the same hardware or
did you just switch that server to an NFS server and run the VMs on another
box?
The latter; we run these VMs over NFS anyway and had ESXi boxes under test
already. We were already separating data exports from VM storage in advance.
Thanks for your insights,
With kind regards,
Jeroen
- --
Jeroen Roodhart
IT Consultant
University of Amsterdam
j.r.roodh...@uva.nl Informatiseringscentrum
Tel. 020 525 7203
- --
See http://www.science.uva.nl/~jeroen for openPGP public key
If you have looked at the data and spotted glaring mistakes, we would definitely appreciate your
comments.
Thanks for your help,
With kind regards,
Jeroen
Jeroen Roodhart wrote:
Questions: 1. Client wsize?
We usually set this to 32768, but this was tested with CentOS
defaults: 8192 (we're doing this over NFSv3).
I stand corrected here: looking at /proc/mounts I see we are in fact
using
With kind regards,
Jeroen
If you are going to trick the system into thinking a volatile cache is
nonvolatile, you
might as well disable the ZIL -- the data corruption potential is the same.
I'm sorry? I believe the F20 has a supercap or the like? The advice on:
Oh, one more comment. If you don't mirror your ZIL, and your unmirrored SSD
goes bad, you lose your whole pool. Or at least suffer data corruption.
Hmmm, I thought that in that case ZFS reverts to the regular on-disk ZIL?
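For what it's worth, mirroring the slog is straightforward; a sketch (pool and device names here are examples, not the actual configuration discussed above):

```shell
# Add a new mirrored log vdev to the pool (device names are examples)
zpool add tank log mirror c1t2d0 c1t3d0

# Or, if a single log device already exists, mirror it in place:
zpool attach tank c1t2d0 c1t3d0

# Verify: the log section of the status output should now show "mirror"
zpool status tank
```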
With kind regards,
Jeroen
The write cache is _not_ being disabled. The write cache is being marked
as non-volatile.
Of course you're right :) Please filter my postings with a sed 's/write
cache/write cache flush/g' ;)
BTW, why is a Sun/Oracle branded product not properly respecting the NV
bit in the cache flush command?
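For completeness: the workaround usually mentioned on this list is to stop ZFS issuing cache-flush commands at all, via the zfs_nocacheflush tunable, rather than touching the write cache itself. A sketch, only safe if every cache in the path really is non-volatile:

```shell
# Persistent: takes effect after the next reboot
echo "set zfs:zfs_nocacheflush = 1" >> /etc/system

# Live on a running kernel (lost at reboot)
echo zfs_nocacheflush/W0t1 | mdb -kw
```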
Hi Karsten,
But is this mode of operation *really* safe?
As far as I can tell it is.
- The F20 uses some form of power backup that should provide power to the
interface card long enough to get the cache onto solid state in case of power
failure.
- Recollecting from earlier threads here; in
Hi Richard,
For this case, what is the average latency to the F20?
I'm not giving the average since I only performed a single run here (still need
to get autopilot set up :) ). However, here is a graph of iostat IOPS/svc_t
sampled in 10-second intervals during a run of untarring an eclipse tarball
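For reference, a graph like that can be produced from plain iostat output along these lines (the interval is the only parameter that matters here; the first sample is the since-boot average and should be discarded):

```shell
# Extended per-device statistics every 10 seconds:
# r/s + w/s give IOPS, asvc_t is the average service time in ms
iostat -xn 10
```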
Hi Casper,
:-)
Nice to see that your stream still reaches just as far :-)
I'm happy to see that it is now the default and I hope this will cause the
Linux NFS client implementation to be faster for conforming NFS servers.
Interesting thing is that apparently the defaults on Solaris and Linux are
It doesn't have to be F20. You could use the Intel
X25 for example.
The MLC-based disks are bound to be too slow (we tested with an OCZ Vertex
Turbo). So you're stuck with the X25-E (which Sun stopped supporting for some
reason). I believe most normal SSDs do have some sort of cache and
Hi Al,
Have you tried the DDRdrive from Christopher George
cgeo...@ddrdrive.com?
Looks to me like a much better fit for your application than the F20?
It would not hurt to check it out. Looks to me like
you need a product with low *latency*, and a RAM-based cache
would be a much better
Hi Roch,
Can you try 4 concurrent tar to four different ZFS
filesystems (same pool).
Hmmm, you're on to something here:
http://www.science.uva.nl/~jeroen/zil_compared_e1000_iostat_iops_svc_t_10sec_interval.pdf
In short: when using two exported file systems total time goes down to around
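The parallel-untar experiment can be sketched with a plain shell loop (a tiny demo archive stands in for the eclipse tarball here; a real run would point each target directory at a separate ZFS filesystem in the same pool):

```shell
# Run 4 extractions of the same archive concurrently into 4 directories
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Build a small demo archive (stand-in for the eclipse tarball)
mkdir src && echo demo > src/file.txt
tar cf demo.tar -C src file.txt

for i in 1 2 3 4; do
    mkdir "target$i"
    tar xf demo.tar -C "target$i" &   # one concurrent extraction per target
done
wait                                   # block until all 4 finish
echo "all 4 extractions done"
```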
Hi list,
If you're running Solaris proper, you better mirror your ZIL log device.
...
I plan to get to test this as well, won't be until
late next week though.
Running OSOL nv130: powered off the machine, removed the F20, and powered back on.
The machine boots OK and comes up normally with the
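For anyone wanting to check the pool after booting without its log device, a sketch (pool and device names are examples; removing a slog outright needs a recent enough pool version):

```shell
# Show only unhealthy pools; a missing slog appears as a faulted log vdev
zpool status -x tank

# Once the situation is understood, clear the errors and,
# on pools that support it, remove the absent log device for good
zpool clear tank
zpool remove tank c2t0d0
```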