Garrett D'Amore wrote:
You can hardly have too much. At least 8 GB, maybe 16 GB, would be good.
The benefit will depend on your workload, but the ZFS ARC and buffer cache will use it all if you have a large enough read working set.
Could lack of RAM be contributing to some of my problems, do you think?
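For what it's worth, a quick way to see how much of that RAM the ARC is actually using is the arcstats kstat. A minimal sketch, assuming an OpenSolaris/Solaris host (mdb -k needs root):

  # Current ARC size and target size, in bytes
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c
  # Overall memory breakdown (kernel vs. anon vs. free)
  echo ::memstat | mdb -k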
On Wed, 9 Jun 2010, Travis Tabbal wrote:
NFS writes on ZFS blow chunks performance-wise. The only way to
increase the write speed is by using a slog ...
The above statement is not quite true. RAID-style adaptor cards which
contain battery-backed RAM, or RAID arrays which include battery-backed ...
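If you want to see whether synchronous (ZIL) traffic is actually what is hurting you before buying NVRAM or a slog, Richard Elling's zilstat script is the usual suggestion on this list. A sketch, assuming you have downloaded the script (it is not in the base install):

  # Print ZIL activity once per second; sustained non-zero numbers mean
  # the workload is sync-write heavy and a slog or NVRAM cache would help
  ./zilstat 1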
On Wed, 9 Jun 2010, Edward Ned Harvey wrote:
... disks. That is, specifically:
o If you do a large sequential read with 3 mirrors (6 disks), then you get
6x the performance of a single disk.
That should say up to 6x. Which disk in each pair gets read from is
chosen at random, so you are unlikely to get the full ...
On Jun 8, 2010, at 1:33 PM, besson3c j...@netmusician.org wrote:
Sure! The pool consists of 6 SATA drives configured as RAID-Z. There
are no special read or write cache drives. This pool is shared to
several VMs via NFS; these VMs manage email, web, and a QuickBooks
server running on ...
I'm also noticing that I'm a little short on RAM. I have six 320 GB
drives and 4 GB of RAM. If the formula is POOL_SIZE/250, this would
mean that I need at least 6.4 GB of RAM.
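For reference, the 6.4 GB figure follows from the pool's usable capacity rather than its raw capacity, assuming the rule-of-thumb formula quoted above:

  usable capacity ~= (6 drives - 1 for parity) x 320 GB = 1600 GB
  suggested RAM   ~= 1600 GB / 250 = 6.4 GB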
What role does RAM play in queuing, caching, and other things that
might impact overall disk performance? How much ...
NFS writes on ZFS blow chunks performance-wise. The only way to increase the
write speed is by using a slog; the problem is that a proper slog device
(one that doesn't lose transactions) does not exist for a reasonable price. The
least expensive SSD that will work is the Intel X25-E, and even ...
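For anyone following along at home, attaching a dedicated log device is a one-liner once you have a suitable SSD. A minimal sketch; the pool name "tank" and the device names c2t0d0/c2t1d0 are placeholders:

  # Attach a mirrored slog to pool "tank" using two hypothetical SSDs
  zpool add tank log mirror c2t0d0 c2t1d0
  # Confirm the log vdev appears
  zpool status tank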
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of besson3c
I'm wondering if somebody can kindly direct me to a sort of newbie way
of assessing whether my ZFS pool performance is a bottleneck that can
be improved upon, and/or whether I ought ...
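A reasonable first step, before buying anything, is to watch the pool while the slow workload runs. These are stock commands on an OpenSolaris host; a sketch, with interpretation left to the thread:

  # Per-vdev bandwidth and IOPS, sampled every 5 seconds
  zpool iostat -v 5
  # Per-device service times; a device pegged near 100 %b is a bottleneck
  iostat -xn 5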
On Behalf Of Joe Auty
Sent: Tuesday, June 08, 2010 11:27 AM
I'd love to use VirtualBox, but right now it (3.2.2 commercial, which I'm
evaluating; I haven't been able to compile OSE on the CentOS 5.5 host yet)
is giving me kernel panics on the host while starting up VMs, which are
obviously ...
Brandon High wrote:
On Tue, Jun 8, 2010 at 10:33 AM, besson3c j...@netmusician.org wrote:
What VM software are you using? There are a few knobs you can turn in VBox
which will help with slow storage. See
http://www.virtualbox.org/manual/ch12.html#id2662300 for instructions on
reducing the ...
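The knobs referenced there are VBoxManage extradata keys. A sketch for a disk on the emulated IDE controller; "VM name" is a placeholder, and the exact key path (piix3ide vs. ahci, LUN number) depends on how the disk is attached, per the manual chapter above:

  # Make VirtualBox honor the guest's flush requests instead of ignoring them
  VBoxManage setextradata "VM name" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0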
On Wed, Jun 9, 2010 at 9:20 AM, Geoff Nordli geo...@grokworx.com wrote:
Have you played with the flush interval?
I am using iSCSI-based zvols, and I am thinking about not using the caching
in VBox and instead relying on the COMSTAR/ZFS side.
What do you think?
If you care about your data, ...
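On the COMSTAR side, write-cache behavior is a per-LU property, so you can make the target commit writes stably instead of trusting the initiator. A sketch, assuming an existing LU; the GUID is a placeholder:

  # Disable the LU's write cache (wcd = write-cache disabled)
  stmfadm modify-lu -p wcd=true 600144F0XXXXXXXXXXXXXXXXXXXXXXXX
  # Verify the property
  stmfadm list-lu -v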
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Joe Auty
I'm also noticing that I'm a little short on RAM. I have six 320 GB
drives and 4 GB of RAM. If the formula is POOL_SIZE/250, this would
mean that I need at least 6.4 GB of RAM.
On 6/9/2010 5:04 PM, Edward Ned Harvey wrote:
Everything is faster with more RAM. There is no limit, unless the total
used disk in your system is smaller than the available RAM in your system
... which seems very improbable.
Off topic, but...
It would be helpful if you posted more information about your
configuration.
Numbers *are* useful too, but minimally, describing your setup, use case,
the hardware, and other such facts would provide people a place to start.
There are much brighter stars on this list than myself, but if you are ...
On Tue, Jun 8, 2010 at 10:33 AM, besson3c j...@netmusician.org wrote:
On heavy reads or writes (writes seem to be more problematic), my load
averages on my VM host shoot up and overall performance is bogged down. I
suspect that I do need a mirrored SLOG, but I'm wondering what the best way is ...
On Tue, Jun 8, 2010 at 11:27 AM, Joe Auty j...@netmusician.org wrote:
... things. I've also read this on a VMware forum, although I don't know if
this is correct. This is in the context of me questioning why I don't seem to
have these same load average problems running VirtualBox:
The problem with the ...