I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got
nearly identical results to having the disks on iSCSI:
iSCSI
IOPS: 1003.8
MB/s: 7.8
Avg Latency (ms): 27.9
NFS
IOPS: 1005.9
MB/s: 7.9
Avg Latency (ms): 29.7
Interesting!
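The two result sets are also internally consistent: the RealLife profile uses small (8 KB) transfers, so throughput should be roughly IOPS x 8 KiB. A quick sanity check (the 8 KB block size is an assumption about the profile used):

```python
# Sanity-check the reported figures: MB/s should be about IOPS * 8 KiB
# for an 8 KB-block workload (block size assumed from the RealLife profile).
BLOCK = 8 * 1024  # bytes per I/O

for name, iops, reported_mbps in [("iSCSI", 1003.8, 7.8), ("NFS", 1005.9, 7.9)]:
    estimated = iops * BLOCK / 2**20  # MiB/s
    print(f"{name}: reported {reported_mbps}, estimated {estimated:.2f}")
```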
Here is how the pool was behaving during the
On Fri, 26 Jun 2009, Scott Meilicke wrote:
I ran the RealLife iometer profile on NFS based storage (vs. SW
iSCSI), and got nearly identical results to having the disks on
iSCSI:
Both of them are using TCP to access the server.
So it appears NFS is doing syncs, while iSCSI is not (See my
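The sync-versus-async difference is easy to feel from userland: forcing each write to stable storage (as NFS commit semantics require) costs far more than letting the page cache absorb it. A minimal illustration, not tied to ZFS or to the tests above:

```python
import os
import tempfile
import time

def time_writes(flags, n=200, size=4096):
    """Time n small writes with the given extra open() flags."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    fd = os.open(path, os.O_WRONLY | flags)
    buf = b"\0" * size
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, buf)
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return elapsed

buffered = time_writes(0)        # async: absorbed by the page cache
synced = time_writes(os.O_SYNC)  # sync: each write pushed toward stable storage
print(f"buffered: {buffered:.4f}s, O_SYNC: {synced:.4f}s")
```

On real disks the O_SYNC run is typically much slower; on a battery/NVRAM-backed array the gap shrinks, which is exactly the point being debated here.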
On Fri, Jun 26, 2009 at 6:04 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
On Fri, 26 Jun 2009, Scott Meilicke wrote:
I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI),
and got nearly identical results to having the disks on iSCSI:
Both of them are using TCP to
if those servers are on physical boxes right now i'd do some perfmon
caps and add up the iops.
Using perfmon to get a sense of what is required is a good idea. Use the 95th
percentile to be conservative. The counters I have used are in the Physical
Disk object. Don't ignore the latency counters.
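For the 95th-percentile sizing suggested above, a small helper (sample data is hypothetical; linear interpolation between sorted ranks):

```python
def percentile(samples, p):
    """p-th percentile with linear interpolation between sorted samples."""
    s = sorted(samples)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical "Disk Transfers/sec" samples exported from perfmon:
iops_samples = [120, 95, 300, 180, 220, 140, 90, 260, 200, 170]
print(percentile(iops_samples, 95))  # → 282.0
```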
sm == Scott Meilicke no-re...@opensolaris.org writes:
sm Some storage will flush their caches despite the fact that the
sm NVRAM protection makes those caches as good as stable
sm storage. [...] ZFS also issues a flush every time an
sm application requests a synchronous write
Isn't that section of the evil tuning guide you're quoting actually about
checking if the NVRAM/driver connection is working right or not?
Miles, yes, you are correct. I just thought it was interesting reading about
how syncs and such work within ZFS.
Regarding my NFS test, you remind me that
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
In my tests ESX4 seems to work fine with this, but I haven't stressed it yet ;-)
Therefore, I don't know if 1 Gb full-duplex per port will be enough, and I
also don't know whether I have to set up some sort of redundant access from ESX
to the SAN, etc.
The first 2 disks are a 146 GB hardware mirror with a Sol10 UFS filesystem on it.
The next 6 will be used as a raidz2 ZFS volume of 535 GB, with
compression and shareiscsi=on.
I'm going to CHAP protect it soon...
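The layout described above could be set up roughly like this (a sketch only: device names and the zvol size are placeholders, and shareiscsi is the legacy Solaris 10/OpenSolaris property mentioned in the post):

```shell
# Sketch only -- device names are placeholders for the 6 data disks.
zpool create tank raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0

# Enable compression, carve out a zvol, and export it over iSCSI.
zfs set compression=on tank
zfs create -V 500g tank/esxlun
zfs set shareiscsi=on tank/esxlun
```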
you're not going to get the random read/write performance you need
for a VM backend out
See this thread for information on load testing for vmware:
http://communities.vmware.com/thread/73745?tstart=0&start=0
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare what
you get with what
Bottom line with virtual machines is that your IO will be random by
definition, since it all goes into the same pipe. If you want to be
able to scale, go with RAID 1 vdevs. And don't skimp on the memory.
Our current experience hasn't shown a need for an SSD for the ZIL but
it might be
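The "go with RAID 1 vdevs" advice comes down to how random reads scale: every disk in a pool of mirrors can serve reads independently, while each raidz vdev behaves roughly like a single disk for small random I/O. A back-of-the-envelope comparison (the per-disk IOPS figure is an assumed value for a 7200 rpm drive):

```python
DISK_IOPS = 100  # assumed random-read IOPS for one 7200 rpm disk

# Same 6 disks arranged two ways:
mirror_iops = 3 * 2 * DISK_IOPS  # 3x 2-way mirrors: all 6 spindles serve reads
raidz2_iops = 1 * DISK_IOPS      # one 6-disk raidz2 vdev ~ one disk of IOPS

print(mirror_iops, raidz2_iops)  # → 600 100
```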
milosz wrote:
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare
what you get with what you need. Just because striping 3 mirrors *will* give
David Magda wrote:
On Wed, June 24, 2009 08:42, Philippe Schwarz wrote:
In my tests ESX4 seems to work fine with this, but I haven't stressed it yet ;-)
Therefore, I don't know if 1 Gb full-duplex per port will be enough, I
don't know
- the VMs will be mostly low-IO systems:
-- WS2003 with Trend OfficeScan, WSUS (for 300 XP) and RDP
-- Solaris 10 with SRSS 4.2 (Sun Ray server)
(File and DB servers won't move to VM+SAN in the near future)
I thought -but could be wrong- that those systems could afford a high
latency
On Jun 24, 2009, at 16:54, Philippe Schwarz wrote:
Out of curiosity, any reason why you went with iSCSI and not NFS? There
seems to be some debate on which is better under which circumstances.
iSCSI instead of NFS? Because of the overwhelming difference in transfer
rate between them. In