As I said, please by all means try it and post your benchmarks for the first
hour, first day, first week, and then first month. The data will be of interest
to you. On a subjective basis, if you feel that an SSD is working just fine as
your ZIL, run with it. Good luck!
en == Eff Norwood sm...@jsvp.com writes:
en http://www.anandtech.com/show/2738/8
but a few pages later:
http://www.anandtech.com/show/2738/25
so, as you say, ``with all major SSDs in the role of a ZIL you will
eventually not be happy'' is true, but you seem to have accidentally left
I can't think of an easy way to measure pages that have not been consumed,
since it's really an SSD controller function that is hidden from the OS, with
the added variable of over-provisioning on top of that. If anyone would like to
really get into what's going on inside of an SSD that makes
Hi,
I think the local ZFS filesystem with raidz on the 7210 is not the problem
(assuming the disks are fast), but you can test that with e.g. bonnie++
(downloadable at sunfreeware.com; a sketch follows below). NFS should not be
the problem either, because iSCSI is also very slow (isn't it?).
Some other ideas are:
Network
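
A minimal bonnie++ invocation for checking raw pool throughput, assuming you
can get a shell with the pool mounted (the path below is a placeholder; pick
-s at roughly twice your RAM so the ARC can't hide the disks):

    # run as root; -u drops privileges to nobody, -s is the file size in MB
    bonnie++ -d /tank/bench -s 8192 -u nobody

On a closed appliance like the 7210 you may not get a shell at all; in that
case, benchmarking from an NFS client is the fallback.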
If I remember correctly, ESX always uses synchronous writes over NFS. If
so, adding a dedicated log device (such as a DDRdrive) might help you
out here. You should be able to test it by disabling the ZIL for a short
while and seeing if performance improves.
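
A sketch of that test on a box where you control the OS; the dataset name is
a placeholder, and the sync property needs a recent build (snv_140 or so).
This is strictly a diagnostic - never leave it disabled in production:

    # trade sync-write safety for speed, temporarily
    zfs set sync=disabled tank/vmstore
    # ...rerun the VM workload and compare latency...
    zfs set sync=standard tank/vmstore

If NFS latency collapses with sync disabled, a fast dedicated log device
should buy back most of that win without giving up data integrity. (On older
builds the equivalent test used the zil_disable tunable via mdb -kw, which
needs a remount to take effect.)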
Hi,
In a setup similar to yours I changed from a single 15-disk raidz2 to 7 mirrors
of 2 disks each. The change in performance was stellar. The key point in
serving things to VMware is that it always issues synchronous writes, whether on
iSCSI or NFS. When you have tens of VMs the resulting
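
For reference, a layout like the one described above would be created roughly
like this (device names are placeholders, using 14 of the 15 disks):

    # 7 two-way mirrors: 7 vdevs' worth of random-write IOPS, where a
    # single raidz2 vdev handles small sync writes at roughly the speed
    # of one disk
    zpool create tank \
        mirror c0t0d0 c0t1d0 \
        mirror c0t2d0 c0t3d0 \
        mirror c0t4d0 c0t5d0 \
        mirror c0t6d0 c0t7d0 \
        mirror c0t8d0 c0t9d0 \
        mirror c0t10d0 c0t11d0 \
        mirror c0t12d0 c0t13d0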
Saso is correct - ESX/ESXi always uses F_SYNC for all writes, and that is for
sure your performance killer. Do a snoop | grep sync and you'll see the sync
write calls from VMware. We use DDRdrives in our production VMware storage and
they are excellent for solving this problem. Our cluster supports
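
Spelled out, that snoop check looks something like this (the interface name
is a placeholder; 2049 is the standard NFS port):

    # watch NFS traffic and pick out the synchronous write calls
    snoop -d nge0 port 2049 | grep -i sync

With an ESX client, the WRITE3 calls should show up flagged as synchronous.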
David asked me what I meant by "filled up". If you make the unwise decision to
use an SSD as your ZIL, at some point days to weeks after you install it, all
of the pages will be allocated and you will suddenly find the device to be
slower than a conventional disk drive. This is due to the way flash handles
writes: once every page has been written, the controller has to erase before
it can write again.
Hey, thanks for the replies everyone.
Sadly most of those options will not work, since we are using a Sun Unified
Storage 7210; the only option is to buy the Sun SSDs for it, which is about
$15k USD for a pair. We also don't have the ability to shut off the ZIL or any
of the other options that
markwo...@yahoo.com said:
So the question is: with a proper ZIL SSD from Sun, and a RAID10... would I be
able to support all the VMs, or would it still be pushing the limits of a
44-disk pool?

If it weren't a closed 7000-series appliance, I'd suggest running the
zilstat script. It should make it clear how much synchronous write traffic
you actually have.
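
For anyone on an open build, zilstat is Richard Elling's DTrace-based script;
a typical run looks like this (interval and count are arbitrary):

    # six ten-second samples of ZIL bytes and operations
    ./zilstat.ksh 10 6

Sustained non-zero numbers mean the workload is sync-write-heavy, and a
dedicated log device is likely to help.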
It does, it's on a pair of large APCs.
Right now we're using NFS for our ESX servers. The only iSCSI LUNs I have are
mounted inside a couple of Windows VMs. I'd have to migrate all our VMs to
iSCSI, which I'm willing to do if it would help and not cause other issues.
So far the 7210
Wouldn't it be possible to saturate the SSD ZIL with enough backlogged sync
writes?
What I mean is, doesn't the ZIL eventually need to make it to the pool? And if
the pool as a whole (spinning disks) can't keep up with 30+ VMs' worth of write
requests, couldn't you fill up the ZIL that way?
On Fri, Aug 27, 2010 at 01:22:15PM -0700, John wrote:
No. From what I've seen, ZFS periodically commits the outstanding writes to
the pool, so the ZIL only has to hold what accumulates between commits. You
may run into a read starvation situation where ZFS is so busy flushing to disk
that you won't get reads in. If you have VMs where developers expect
low-latency interactivity, they get unhappy. Trust me. :)
By all means please try it to validate it yourself, and post your results from
hour one, day one, and week one. In a ZIL use case the data set is small, but
from the SSD's perspective it is ever-changing and constantly rewritten. The
SSD does not know to release previously written pages, so eventually every
page is marked in use and new writes stall behind erase cycles.
We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I
installed I selected the best bang for the buck on the speed-vs-capacity chart.
We run about 30 VMs on it, across 3 ESX 4 servers. Right now it's all running
NFS, and it sucks... sooo slow.
iSCSI was no better.
I