On Wed, 9 Jun 2010, Edward Ned Harvey wrote:
disks. That is, specifically:
o If you do a large sequential read, with 3 mirrors (6 disks) then you get
6x performance of a single disk.
Should say "up to 6x". Which disk in the pair will be read from is
random so you are unlikely to get the ful
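A quick way to observe how reads actually spread across the mirror halves
is to watch per-device activity while the read runs (pool and device names
below are hypothetical):

  # zpool iostat -v tank 5

The -v flag breaks the numbers out per vdev and per disk, so you can see
whether both sides of each mirror are contributing.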
On Wed, 9 Jun 2010, Travis Tabbal wrote:
NFS writes on ZFS blow chunks, performance-wise. The only way to
increase the write speed is by using a slog
The above statement is not quite true. RAID-style adaptor cards which
contain battery-backed RAM, or RAID arrays which include battery-backed
write caches, can also commit synchronous writes quickly.
Garrett D'Amore wrote:
You can hardly have too much. At least 8 GB, maybe 16 would be good.
The benefit will depend on your workload, but zfs and buffer cache will use it all if you have a big enough read working set.
Could lack of RAM be contributing to some of my problems, do you think?
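One way to check what the ARC is actually using on Solaris/OpenSolaris is
the kstat counters (the counter path is real; the size shown is made up):

  # kstat -p zfs:0:arcstats:size
  zfs:0:arcstats:size     3921876480

The value is the current ARC size in bytes; compare it with
zfs:0:arcstats:c_max to see how high it is allowed to grow.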
On 6/9/2010 5:04 PM, Edward Ned Harvey wrote:
>
> Everything is faster with more RAM. There is no limit, unless the total
> used disk space in your system is smaller than the available RAM in your
> system ... which seems very improbable.
>
Off topic, but
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Joe Auty
>
> I'm also noticing that I'm a little short on RAM. I have 6 320 gig
> drives and 4 gig of RAM. If the formula is POOL_SIZE/250, this would
> mean that I need at least 6.4 gig of RAM
On Wed, Jun 9, 2010 at 9:20 AM, Geoff Nordli wrote:
> Have you played with the flush interval?
>
> I am using iscsi-based zvols, and I am thinking about not using the caching
> in vbox and instead relying on the comstar/zfs side.
>
> What do you think?
If you care about your data, IgnoreFlush should be set to 0 so that guest
flushes actually reach the disks.
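Per the VirtualBox manual chapter linked later in this thread, flushing is
re-enabled per virtual disk with setextradata; "VM name" and the LUN index
are placeholders, and the device node is piix3ide for IDE disks (ahci for
SATA):

  VBoxManage setextradata "VM name" \
    "VBoxInternal/Devices/piix3ide/0/LUN#[0]/Config/IgnoreFlush" 0

Setting the value to 0 means flushes are no longer ignored, so a guest
fsync() makes it through to the host storage.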
You can hardly have too much. At least 8 GB, maybe 16 would be good.
The benefit will depend on your workload, but zfs and buffer cache will use it
all if you have a big enough read working set.
-- Garrett
Joe Auty wrote:
>I'm also noticing that I'm a little short on RAM. I have 6 320 gig
>drives and 4 gig of RAM.
>
>
>Brandon High wrote:
>On Tue, Jun 8, 2010 at 10:33 AM, besson3c wrote:
>
>
>What VM software are you using? There are a few knobs you can turn in VBox
>which will help with slow storage. See
>http://www.virtualbox.org/manual/ch12.html#id2662300 for instructions on
>reducing the flush interval.
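For reference, the flush-interval knob mentioned there is also set via
setextradata; the byte count below is only an example value:

  VBoxManage setextradata "VM name" \
    "VBoxInternal/Devices/piix3ide/0/LUN#[0]/Config/FlushInterval" 1000000

This forces a flush after roughly every 1000000 bytes written by the
guest, which bounds how much buffered data can be lost to a crash.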
> On Behalf Of Joe Auty
>Sent: Tuesday, June 08, 2010 11:27 AM
>
>
>I'd love to use Virtualbox, but right now it (3.2.2 commercial which I'm
>evaluating, I haven't been able to compile OSE on the CentOS 5.5 host yet) is
>giving me kernel panics on the host while starting up VMs, which are obviously
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of besson3c
>
> I'm wondering if somebody can kindly direct me to a sort of newbie way
> of assessing whether my ZFS pool performance is a bottleneck that can
> be improved upon, and/or whether I ought to invest in an SSD ZIL
> mirrored pair?
NFS writes on ZFS blow chunks, performance-wise. The only way to increase the
write speed is by using a slog; the problem is that a "proper" slog device
(one that doesn't lose transactions) does not exist for a reasonable price. The
least expensive SSD that will work is the Intel X25-E, and even that is not cheap.
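For reference, attaching a mirrored slog to an existing pool is a
one-liner; the pool and device names here are hypothetical:

  # zpool add tank log mirror c4t0d0 c4t1d0

Mirroring it matters: on the pool versions current in this thread, losing
a dedicated log device could leave the pool unimportable.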
I'm also noticing that I'm a little short on RAM. I have 6 320 gig
drives and 4 gig of RAM. If the formula is POOL_SIZE/250, this would
mean that I need at least 6.4 gig of RAM.
What role does RAM play with queuing and caching and other things which
might impact overall disk performance? How much RAM do I actually need?
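Working the /250 rule of thumb through for this box (treating the 6-disk
RAID-Z as roughly 5 data disks of usable capacity):

  $ echo "scale=1; (6-1)*320/250" | bc
  6.4

which is where the 6.4 gig figure above comes from.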
On Jun 8, 2010, at 1:33 PM, besson3c wrote:
Sure! The pool consists of 6 SATA drives configured as RAID-Z. There
are no special read or write cache drives. This pool is shared to
several VMs via NFS; these VMs manage email, web, and a Quickbooks
server, running on FreeBSD, Linux, and Windows.
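For concreteness, a layout like the one described would be created along
these lines (device and dataset names hypothetical):

  # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
  # zfs create tank/vmstore
  # zfs set sharenfs=on tank/vmstore

Note that a single 6-disk RAID-Z vdev delivers roughly the random-read
IOPS of one disk, which is relevant to the VM-over-NFS load problems
described in this thread.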
Brandon High wrote:
On Tue, Jun 8, 2010 at 11:27 AM, Joe Auty
wrote:
things. I've also read this on a VMWare forum, although I don't know if
this is correct? This is in context to me questioning why I don't seem to
have these same load average problems running Virtualbox:
Brandon High wrote:
On Tue, Jun 8, 2010 at 10:33 AM, besson3c wrote:
On heavy reads or writes (writes seem to be more problematic) my load averages on my VM host shoot up and overall performance is bogged down. I suspect that I do need a mirrored SLOG, but I'm wondering what the best way is to verify this.
On Tue, Jun 8, 2010 at 12:04 PM, Joe Auty wrote:
>
> Cool, so maybe this guy was going off of earlier information? Was there
> a time when there was no way to enable cache flushing in Virtualbox?
>
The default is to ignore cache flushes, so he was correct for the default
setting. The IgnoreFlush setting described above is what controls this.
On Tue, Jun 8, 2010 at 11:27 AM, Joe Auty wrote:
> things. I've also read this on a VMWare forum, although I don't know if
> this is correct? This is in context to me questioning why I don't seem to have
> these same load average problems running Virtualbox:
>
> The problem with the comparison Virtu
On Tue, Jun 8, 2010 at 10:33 AM, besson3c wrote:
> On heavy reads or writes (writes seem to be more problematic) my load
> averages on my VM host shoot up and overall performance is bogged down. I
> suspect that I do need a mirrored SLOG, but I'm wondering what the best way is
> to verify this.
The load that you're seeing is probably coming from synchronous NFS writes.
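One way to confirm that synchronous NFS writes are the culprit is the
server-side NFS statistics (Solaris syntax):

  # nfsstat -s

In the NFSv3 section, a high commit count relative to write means clients
are forcing data to stable storage, which is exactly the workload a slog
absorbs.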
It would be helpful if you posted more information about your
configuration.
Numbers *are* useful too, but minimally, describing your setup, use case,
the hardware, and other such facts would provide people a place to start.
There are much brighter stars on this list than myself, but if you are
sharing those basics, someone here can point you in the right direction.
Hello,
I'm wondering if somebody can kindly direct me to a sort of newbie way of
assessing whether my ZFS pool performance is a bottleneck that can be improved
upon, and/or whether I ought to invest in an SSD ZIL mirrored pair? I'm a little
confused by what the output of iostat, fsstat, and the zilstat script is
telling me.
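For a first pass, these are the usual observation commands (pool name
hypothetical; zilstat is Richard Elling's DTrace-based script and is a
separate download, so the invocation may differ slightly):

  # zpool iostat -v tank 5    # per-vdev/per-disk bandwidth and IOPS
  # fsstat zfs 5              # VFS-level operation counts for ZFS
  # ./zilstat.ksh 5           # synchronous (ZIL) write traffic

Watching these while the VMs are busy shows whether the disks, the NFS
sync-write path, or something else is the bottleneck.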