On Sat, Nov 5, 2016 at 6:20 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2016-11-05 12:06 GMT+01:00 Lindsay Mathieson:
> > Yah, I get that. For me, willing to risk losing the entire gluster node
> > and having to resync it, I see the odds as pretty low vs just losing one
> > disk in the RAID10 set and resilvering it locally.
On 11/05/2016 05:47 AM, Fariborz Mafakheri wrote:
Hi all,
I have a gluster volume with 4 bricks (srv1, srv2, srv3 and srv4). srv2
is a replica of srv1 and srv4 is a replica of srv3. Each of these
bricks holds 1.7TB of data.
I am going to replace srv2 and srv4
with two new servers (srvP2 and srvP4).
>
> On Nov 5, 2016, at 3:52 AM, Lindsay Mathieson
> wrote:
> Cache is hardly used, I think you'll find with VM workload you're only
> getting around 4% hit rates. You're better off using the SSD for slog, it
> improves sync writes considerably.
>
> I tried the
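For reference, adding an SSD slog is a one-liner; a minimal sketch, assuming
the pool is named tank as elsewhere in this thread (the device path is a
placeholder):

  # add a dedicated log (slog) device; only sync writes benefit from it
  zpool add tank log /dev/disk/by-id/ata-EXAMPLE_SSD-part1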
Hi all,
I have a gluster volume with 4 bricks (srv1, srv2, srv3 and srv4). srv2 is a
replica of srv1 and srv4 is a replica of srv3. Each of these bricks holds
1.7TB of data.
I am going to replace srv2 and srv4
with two new servers (srvP2 and srvP4). srvP2 and srvP4 are in another
datacenter and as I said
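For what it's worth, a sketch of the brick replacement, assuming a
hypothetical volume name myvol and brick path /data/brick (untested, adjust
to your layout):

  # swap the old brick for the new one; self-heal then copies the data
  gluster volume replace-brick myvol srv2:/data/brick srvP2:/data/brick commit force
  # watch the 1.7TB resync onto the new brick
  gluster volume heal myvol info

Since srvP2 and srvP4 are in another datacenter, the heal traffic crosses
the WAN link, so replacing one brick at a time is safer.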
On 5/11/2016 9:20 PM, Gandalf Corvotempesta wrote:
I don't see any advantage doing a single RAIDZ10, only drawbacks.
With multiple RAIDZ1 you get the same space, the same features and the
same performance as a single RAIDZ10, but much more availability and
safety for your data.
Better local IOPS,
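For concreteness, the two layouts being compared would be built roughly like
this (disk names are placeholders; "multiple RAIDZ1" is read here as
independent pools, one brick each):

  # single pool of striped mirrors ("RAIDZ10" in this thread)
  zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
  # versus several independent RAIDZ1 pools, each exported as its own brick
  zpool create brick1 raidz1 sda sdb sdc
  zpool create brick2 raidz1 sdd sde sdf

With independent pools, losing one pool costs one brick, which gluster can
heal; losing a whole mirror pair in the single pool costs the entire node.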
On 4/11/2016 9:15 PM, Xavier Hernandez wrote:
I haven't tested it, but if you are currently saturating the network,
maybe enabling the network.compression option might help, though it
will use more CPU.
There are also some compression related options that can be tweaked.
Looks like it still
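For anyone who wants to try it, a sketch against a hypothetical volume
myvol; the option names below are from the compression (cdc) translator,
so verify them with "gluster volume set help" on your version:

  gluster volume set myvol network.compression on
  # trade CPU for bandwidth: zlib level 1 (fastest) .. 9 (smallest)
  gluster volume set myvol network.compression.compression-level 1
  # don't bother compressing tiny payloads
  gluster volume set myvol network.compression.min-size 1024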
2016-11-05 12:06 GMT+01:00 Lindsay Mathieson:
> Yah, I get that. For me, willing to risk losing the entire gluster node and
> having to resync it, I see the odds as pretty low vs just losing one disk in
> the RAID10 set and resilvering it locally.
I don't see any advantage doing a single RAIDZ10, only drawbacks.
On 5/11/2016 9:02 PM, Gandalf Corvotempesta wrote:
Ok, I wasn't clear enough.
Do you have a single RAIDZ10 or multiple RAIDZ1?
Single RAIDZ10, one brick per node
In a single RAIDZ10, if you totally lose a mirror (that is, both disks
of the same mirror pair), you lose the whole RAID10.
On 5/11/2016 8:17 PM, mabi wrote:
Just noticed that you have your ZFS logs on a single disk, you like
living dangerously ;-) You should have a mirror for the slog to be
on the safe side.
Because I like living on the edge :)
I do have the gluster bricks for the ultimate recovery, but also
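Turning the existing single slog into a mirror can be done live; a minimal
sketch with placeholder device names:

  # attach a second SSD to the current log device to form a mirrored log
  zpool attach tank ata-CURRENT_LOG_SSD-part1 ata-NEW_LOG_SSD-part1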
On 5/11/2016 7:02 PM, Gandalf Corvotempesta wrote:
With gluster, healing should be faster in case of failure. By using
ZFS and RAID-10, if you lose a mirror, you have to resilver the whole
RAID-10 from the network.
With gluster, if you lose a mirror, you have to heal only that one.
Six of one,
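Both recovery paths are easy to watch from the command line; a sketch, with
a hypothetical volume name:

  # gluster: heal backlog for just the affected brick
  gluster volume heal myvol info
  # zfs: resilver progress for the whole pool
  zpool status tank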
Hi Lindsay
Just noticed that you have your ZFS logs on a single disk, you like living
dangerously ;-) You should have a mirror for the slog to be on the safe side.
Cheers,
M.
Original Message
Subject: Re: [Gluster-users] Improving IOPS
Local Time: November 5, 2016
2016-11-05 9:52 GMT+01:00 Lindsay Mathieson:
> pool: tank
> config:
>     NAME                                            STATE  READ WRITE CKSUM
>     tank                                            ONLINE    0     0     0
>       mirror-0                                      ONLINE    0     0     0
>         ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU901  ONLINE
On 5/11/2016 1:30 AM, Darrell Budic wrote:
What’s your CPU and disk layout for those? You’re close to what I’m running,
curious how it compares.
All my nodes are running RAIDZ10. I have a 5GB SSD slog partition and a
100GB cache.
Cache is hardly used, I think you'll find with VM workload you're only
getting around 4% hit rates.
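On ZFS on Linux that claim is easy to check against the kernel's ARC
counters; a rough sketch:

  # ARC and L2ARC hit/miss counters; hit rate = hits / (hits + misses)
  awk '$1 ~ /^(hits|misses|l2_hits|l2_misses)$/ {print $1, $3}' \
      /proc/spl/kstat/zfs/arcstats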
Hi everybody,
do you know if it is possible to use filesystem snapshots (e.g.
btrfs or zfs) with GlusterFS?
For us the LVM snapshot mechanism is not feasible, because we will need too
many snapshots and we need a filesystem with compression.
How do you realize snapshot
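ZFS covers both requirements; a sketch with placeholder dataset names. Note
these are plain filesystem snapshots of the brick, not Gluster-aware
snapshots: GlusterFS's own volume snapshots require thinly provisioned LVM
bricks.

  # compression on the brick dataset, then cheap point-in-time snapshots
  zfs set compression=lz4 tank/brick1
  zfs snapshot tank/brick1@2016-11-05
  zfs list -t snapshot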