On 06/11/2016 13:28, David Gossage wrote:
I see, maybe you don't really mean raidz1 here. Raidz1 usually
refers to "raid5"-type vdevs with at least 3 disks; otherwise, why pay a
penalty for tracking parity when you can have a mirrored pair? So in
your case you are changing it from one
On Sun, Nov 6, 2016 at 3:24 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 06/11/2016 03:37, David Gossage wrote:
>
> The only thing you gain with raidz1 I think is maybe more usable space.
> Performance in general will not be as good, and whether the vdev is
>
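A minimal sketch of the two vdev layouts being contrasted above, with hypothetical /dev/sd* device names (the two commands are alternatives, not a sequence):

# "raid5"-type vdev: raidz1 over at least 3 disks, single parity to track
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# "raid10"-type pool: striped mirror pairs, no parity penalty
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde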
On 06/11/2016 03:37, David Gossage wrote:
The only thing you gain with raidz1, I think, is maybe more usable
space. Performance in general will not be as good, and whether the
vdev is mirrored or z1, neither can survive 2 drives failing. In most
cases the z10 will rebuild faster with less
On Sat, Nov 5, 2016 at 6:20 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-11-05 12:06 GMT+01:00 Lindsay Mathieson:
> > Yah, I get that. For me, willing to risk losing the entire gluster node
> > and having to resync it, I see the
>
> On Nov 5, 2016, at 3:52 AM, Lindsay Mathieson
> wrote:
> Cache is hardly used, I think you'll find with VM workload you're only
> getting around 4% hit rates. You're better off using the SSD for slog, it
> improves sync writes considerably.
>
> I tried the
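A quick way to check the ~4% cache hit rate claim on a ZFS-on-Linux brick is to read the ARC kstats; the counter names below come from /proc/spl/kstat/zfs/arcstats and the pool name is only an example:

# ARC and L2ARC hit/miss counters
grep -E '^(hits|misses|l2_hits|l2_misses) ' /proc/spl/kstat/zfs/arcstats

# Per-device I/O, including the cache device, sampled every 5 seconds
zpool iostat -v tank 5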
On 5/11/2016 9:20 PM, Gandalf Corvotempesta wrote:
I don't see any advantage in doing a single RAIDZ10, only drawbacks.
With multiple RAIDZ1 you get the same space, the same features and the same
performance as a single RAIDZ10, but much more availability and safety
for your data.
Better local IOPS,
On 4/11/2016 9:15 PM, Xavier Hernandez wrote:
I haven't tested it, but if you are currently saturating the network,
enabling the network.compression option might help, though it
will use more CPU.
There are also some compression related options that can be tweaked.
Looks like it still
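For reference, a sketch of enabling the option Xavier mentions; the volume name is hypothetical, and the extra compression tunables he alludes to are not shown here:

# Compress Gluster network traffic for this volume (costs CPU on both ends)
gluster volume set myvol network.compression on

# Verify the option was applied
gluster volume info myvol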
2016-11-05 12:06 GMT+01:00 Lindsay Mathieson:
> Yah, I get that. For me, willing to risk losing the entire gluster node and
> having to resync it, I see the odds as pretty low vs just losing one disk in
> the RAID10 set and resilvering it locally.
I don't see any
On 5/11/2016 9:02 PM, Gandalf Corvotempesta wrote:
Ok, I wasn't clear enough.
Do you have a single RAIDZ10 or multiple RAIDZ1?
Single RAIDZ10, one brick per node
In a single RAIDZ10, if you totally lose a mirror (thus, both disks
from the same RAIDZ1 set), you lose the whole RAID10.
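To make the two layouts being compared concrete, a rough per-node sketch with made-up pool and device names (illustration only, not a recommendation):

# Layout A: one striped-mirror pool ("RAIDZ10") per node, exported as a single brick
zpool create tank mirror sdb sdc mirror sdd sde mirror sdf sdg

# Layout B: several small raidz1 pools per node, each holding its own brick
zpool create tank1 raidz1 sdb sdc sdd
zpool create tank2 raidz1 sde sdf sdg

In layout B, losing an entire pool costs one brick, which Gluster can heal from the other replicas, rather than the whole node's data.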
On 5/11/2016 8:17 PM, mabi wrote:
Just noticed that you have your ZFS logs on a single disk, you like
living dangerously ;-) you should have a mirror for the slog to be
on the safe side.
Because I like living on the edge :)
I do have the gluster bricks for the ultimate recovery, but also
On 5/11/2016 7:02 PM, Gandalf Corvotempesta wrote:
With gluster, healing should be faster in case of failure. If you
lose a mirror, you have to resilver the whole RAID-10 from the network
when using ZFS and RAID-10.
With gluster, if you lose a mirror, you have to heal only that one.
Six of one,
Hi Lindsay
Just noticed that you have your ZFS logs on a single disk, you like living
dangerously ;-) you should have a mirror for the slog to be on the safe side.
Cheers,
M.
Original Message
Subject: Re: [Gluster-users] Improving IOPS
Local Time: November 5, 2016 9
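For anyone following mabi's advice, a minimal sketch of a mirrored slog; the pool name and device paths are examples only:

# Add two SSD partitions as a mirrored log device
zpool add tank log mirror /dev/disk/by-id/ssd1-part1 /dev/disk/by-id/ssd2-part1

# Or, if a single log device already exists, attach a second one to mirror it
zpool attach tank <existing-log-device> /dev/disk/by-id/ssd2-part1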
2016-11-05 9:52 GMT+01:00 Lindsay Mathieson:
> pool: tank
> config:
>         NAME                                            STATE     READ WRITE CKSUM
>         tank                                            ONLINE       0     0     0
>           mirror-0                                      ONLINE       0     0     0
>             ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU901  ONLINE
On 5/11/2016 1:30 AM, Darrell Budic wrote:
What’s your CPU and disk layout for those? You’re close to what I’m running,
curious how it compares.
All my nodes are running RAIDZ10. I have an SSD 5GB slog partition, 100GB cache
Cache is hardly used, I think you'll find with VM workload you're only
Lindsay-
What’s your CPU and disk layout for those? You’re close to what I’m running,
curious how it compares.
My prod cluster:
3x E5-2609 @ 1.9G, 6 core, 32G RAM, 2x10G network, parts of 2x samsung 850 pro
used for zfs cache, no zil
2x 9 x 1G drives in straight zfs stripe
1x 8 x 2G drives in
2016-11-04 5:43 GMT+01:00 Lindsay Mathieson:
> Thanks Krutika, will have to get my test cluster back up :)
If you try, please share the results.
Something like the kernel uncompress would be nice. Currently it takes
ages to finish even over 3 bonded NICs.
Hi Lindsay,
On 04/11/16 05:43, Lindsay Mathieson wrote:
On 4 November 2016 at 14:35, Krutika Dhananjay wrote:
It will be available in 3.9 (and latest
upstream master too) if you're interested to try it out but
DO NOT use it in production yet. It may have some stability
On 4 November 2016 at 14:35, Krutika Dhananjay wrote:
> It will be available in 3.9 (and latest
> upstream master too) if you're interested to try it out but
> DO NOT use it in production yet. It may have some stability
> issues as it hasn't been thoroughly tested.
>
> You
There is compound fops feature coming up which reduces the
number of calls over the network in AFR transactions, thereby
improving performance. It will be available in 3.9 (and latest
upstream master too) if you're interested to try it out but
DO NOT use it in production yet. It may have some
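If memory serves, the feature is toggled per volume with something along these lines (option name from memory and may differ; volume name hypothetical), and per Krutika's warning only on a test volume:

# Enable compound fops for AFR on a test volume (GlusterFS >= 3.9)
gluster volume set testvol cluster.use-compound-fops on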
On 4 November 2016 at 03:38, Gambit15 wrote:
> There are lots of factors involved. Can you describe your setup & use case a
> little more?
Replica 3 Cluster. Individual Bricks are RAIDZ10 (zfs) that can manage
450 MB/s write, 1.2GB/s Read.
- 2 * 1GB Bond, Balance-alb
-
There are lots of factors involved. Can you describe your setup & use case
a little more?
Doug
On 2 November 2016 at 00:09, Lindsay Mathieson
wrote:
> And after having posted about the dangers of premature optimisation ...
> any suggestion for improving IOPS? as
And after having posted about the dangers of premature optimisation ...
any suggestions for improving IOPS? As per earlier suggestions I tried
setting server.event-threads and client.event-threads to 4, but it made
no real difference.
nb: the limiting factor on my cluster is the network (2 *
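For completeness, the tunables Lindsay refers to are per-volume options; a sketch with a hypothetical volume name (as the note above says, if the network is the bottleneck, thread tuning alone won't move the numbers):

# Raise the number of event threads on server and client sides
gluster volume set myvol server.event-threads 4
gluster volume set myvol client.event-threads 4

# Confirm the values took effect
gluster volume get myvol server.event-threads
gluster volume get myvol client.event-threads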