Gluster doesn't "require" swap any more than any other service, and with
the price of RAM today, most admins could even consider removing swap
altogether.
D
On 7 February 2017 at 10:56, Mark Connor wrote:
> I am planning on deploying about 18 bricks of about 50 TB
Hey guys,
Any ideas?
[root@v0 ~]# gluster volume start data2
volume start: data2: failed: Volume id mismatch for brick
s0:/run/gluster/snaps/data2/brick1/data/brick. Expected volume id
d8b0a411-70d9-454d-b5fb-7d7ca424adf2, volume id
a7eae608-f1c4-44fd-a6aa-5b9c19e13565 found
[root@v0 ~]#
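For reference, my understanding is that the volume id glusterd expects is
stored in the trusted.glusterfs.volume-id xattr on the brick root, so I've
been comparing them like this (paths taken from the error above):

getfattr -n trusted.glusterfs.volume-id -e hex --absolute-names \
  /run/gluster/snaps/data2/brick1/data/brick

Would it be safe to rewrite the xattr to the expected id (the uuid in hex,
dashes removed), as a last resort? E.g.:

setfattr -n trusted.glusterfs.volume-id \
  -v 0xd8b0a41170d9454db5fb7d7ca424adf2 \
  /run/gluster/snaps/data2/brick1/data/brick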
On 3 February 2017 at 11:09, Momonth wrote:
> Hi,
>
> I ran some benchmarking on SSD enabled servers, 10Gb connected, see
> the file attached.
>
> I'm still looking at GlusterFS as a persistent storage for containers,
> and it's clear it's not going to compete with local file
Have you verified that Gluster has marked the files as split-brain?
gluster volume heal <VOLNAME> info split-brain
If you're fairly confident about which files are correct, you can automate
the split-brain healing procedure.
From the manual...
> volume heal <VOLNAME> split-brain bigger-file <FILE>
>
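A sketch of the full syntax, with <VOLNAME> and <FILE> as placeholders; the
source-selection policies are bigger-file, latest-mtime, or an explicit
source-brick:

gluster volume heal <VOLNAME> split-brain bigger-file <FILE>
gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>
gluster volume heal <VOLNAME> split-brain source-brick <HOST>:<BRICKPATH> <FILE>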
On 27 January 2017 at 19:05, Kevin Lemonnier wrote:
> > Basically, every now & then I notice random VHD images popping up in the
> > heal queue, and they're almost always in pairs, "healing" the same file
> on
> > 2 of the 3 replicate bricks.
> > That already strikes me as
It's a given, but test it well before going into production. People have
occasionally had problems with corruption when converting to shards.
In my initial tests, enabling sharding took our I/O down to 15 Kbps from
300 Mbps without it.
data-self-heal-algorithm full
>
That could be painful. Any
>
> Type: Distributed-Replicate
> Number of Bricks: 2 x 2 = 4
>
With that setup, you lose quorum if you lose any one node.
Brick 1 replicates to brick 2, and brick 3 replicates to brick 4. If any
one of those goes down, quorum for its replica pair falls to 50%, below
the 51% required, which locks the brick under the default settings.
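If you can add a third node, even a small one, my understanding is you can
convert each pair into an arbiter set and keep quorum with a node down; a
sketch, with host and brick names as placeholders:

gluster volume add-brick <VOLNAME> replica 3 arbiter 1 \
  arb1:/bricks/arb-b1 arb1:/bricks/arb-b2

The arbiter bricks store only metadata, so they need very little space.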
If
On 20 January 2017 at 19:26, Lindsay Mathieson
wrote:
> This, I think, highlights one of Gluster's few weaknesses - the
> inflexibility of brick layout. It would be really nice if you could just
> arbitrarily add bricks to distributed-replicate volumes and have files
Deleting the directory didn't work; it was restored as soon as glusterd
was restarted. I haven't yet tried stopping glusterd on *all* nodes before
doing this, although I'll need to plan for that, as it'll take the entire
cluster off the air.
Thanks for the reply,
Doug
> Regards,
> Avra
>
> RAID is not an option, JBOD with EC will be used.
>
Any particular reason for this, other than maximising space by avoiding two
layers of RAID/redundancy?
Local RAID would be far simpler & quicker for replacing failed drives, and
it would greatly reduce the number of bricks & load on Gluster.
this wasn't simply an oversight on my part.
Anyway, many thanks for the help, and I'd be happy to provide any logs if
desired, however whilst knowing what happened & why might be useful, all
now seems to have resolved itself.
Cheers,
Doug
>
> Regards,
> Avra
>
>
> O
Hi Bap,
On 6 February 2017 at 07:27, pasawwa wrote:
> Hello,
>
> we just created 3 node gluster ( replica 3 arbiter 1 ) and get "systemctl
> status glusterd" message:
>
> n1.test.net etc-glusterfs-glusterd.vol[1458]: [2017-02-03
> 17:56:24.691334] C [MSGID: 106003]
Hi Riccardo,
On 3 February 2017 at 07:06, Riccardo Filippone wrote:
> Good morning guys,
>
> we are going to deploy a new production infrastructure.
>
> In order to share some folders through our app servers (Tomcat 8), I want
> to create a GlusterFS
Hey guys,
I tried to create a new volume from a cloned snapshot yesterday, however
something went wrong during the process & I'm now stuck with the new volume
being created on the server I ran the commands on (s0), but not on the rest
of the peers. I'm unable to delete this new volume from the
Hey guys,
I keep seeing different recommendations for the best shard sizes for VM
images, from 64MB to 512MB.
What's the benefit of smaller vs larger shards?
I'm guessing smaller shards are quicker to heal, but larger shards will
provide better sequential I/O for single clients? Anything else?
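For reference, the option in question, with the volume name as a
placeholder:

gluster volume set <VOLNAME> features.shard-block-size 64MB

As I understand it, the size only applies to files written after it's set.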
I
Why are you using NFS for Gluster with oVirt? oVirt is natively able
to mount Gluster volumes via FUSE, which is *far* more efficient!
Doug
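For comparison, a plain FUSE mount is a one-liner (server and volume names
are placeholders), and oVirt does the equivalent itself when you add the
volume as a GlusterFS storage domain:

mount -t glusterfs server1:/datavol /mnt/datavol

In fstab, adding backup-volfile-servers=server2 to the options gives you
mount-time failover as well.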
On 12 January 2017 at 18:36, Giuseppe Ragusa
wrote:
> Hi all,
>
> In light of the future removal of native Gluster-NFS
>
> > data-self-heal-algorithm full
>
> There was a bug in the default algo, at least for VM hosting,
> not that long ago. Not sure if it was fixed but I know we were
> told here to use full instead, I'm guessing that's why he's using it too.
>
Huh, not heard of that. Do you have any useful links?
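For anyone following along, the option is set per volume (volume name is a
placeholder):

gluster volume set <VOLNAME> cluster.data-self-heal-algorithm full

"full" copies the whole file from the healthy copy instead of diffing
changed blocks, trading network for CPU.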
> If your images easily fit within the bricks, why do you need sharding in
>> the first place? It adds an extra layer of complexity & removes the cool
>> feature of having entire files on each brick, making DR & things a lot
>> easier.
>
>
> Because healing with large VM images completes orders of
No problems with web hosting here, including loads of busy WordPress sites
& the like. However, you need to tune your filesystems correctly.
In our case, we've got webserver VMs running on top of a Gluster layer with
the following configurations...
- Swap either disabled or strictly minimised
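On the swap point, a sketch of what we do (values are illustrative, not
gospel):

sysctl -w vm.swappiness=1    # or swapoff -a to drop swap entirely

persisted via /etc/sysctl.d/ so it survives reboots.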
Switch-wise, have a look at the HP FlexFabric 5700-32XGT-8XG-2QSFP+ & Cisco
SG550XG-24T.
For what it's worth, you can minimise your bandwidth whilst maintaining
quorum if you use arbiters.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
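A sketch of a from-scratch arbiter volume, with host and brick names as
placeholders (every third brick becomes the arbiter):

gluster volume create <VOLNAME> replica 3 arbiter 1 \
  n1:/bricks/b1 n2:/bricks/b1 n3:/bricks/arb1

The arbiter holds only file names and metadata, hence the bandwidth saving.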
On 26 August
I've got a couple of geo-diverse high-capacity ZFS storage boxes for this
exact purpose. Geo-rep rsyncs to the boxes & regular snapshots are taken of
the ZFS volumes. Works flawlessly & allows us to traverse & restore
specific versions of individual files in seconds/minutes.
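In outline, with volume, host and pool names changed to placeholders:

gluster volume geo-replication <MASTERVOL> zfsbox::<SLAVEVOL> create push-pem
gluster volume geo-replication <MASTERVOL> zfsbox::<SLAVEVOL> start
zfs snapshot tank/<SLAVEVOL>@$(date +%Y%m%d)    # rolling, from cron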
On 21 November 2016
There are lots of factors involved. Can you describe your setup & use case
a little more?
Doug
On 2 November 2016 at 00:09, Lindsay Mathieson
wrote:
> And after having posted about the dangers of premature optimisation ...
> any suggestion for improving IOPS? as
restore everything or hundreds of TB.
>
> 2017-03-23 23:07 GMT+01:00 Gambit15 <dougti+glus...@gmail.com>:
> > Don't snapshot the entire gluster volume, keep a rolling routine for
> > snapshotting the individual VMs & rsync those.
> > As already mentioned, you ne
On 19 March 2017 at 07:25, Mahdi Adnan wrote:
> Thank you for your email mate.
>
> Yes, I'm aware of this, but to save costs I chose replica 2; this cluster
> is all flash.
>
For what it's worth, arbiters FTW!
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
As I understand it, only new files will be sharded, but simply renaming or
moving them may be enough in that case.
I'm interested in the arbiter/sharding bug you've mentioned. Could you
provide any more details or a link?
Cheers,
D
On 30 March 2017 at 20:25, Laura Bailey
As long as the VM isn't hosted on one of the two Gluster nodes, that's
perfectly fine. One of my smaller clusters uses the same setup.
As for your other questions, as long as it supports Unix file permissions,
Gluster doesn't care what filesystem you use. Mix & match as you wish. Just
try to keep
Hey guys,
I use a replica 3 arbiter 1 setup for hosting VMs, and have just had an
issue where taking one of the non-arbiter peers offline caused gluster to
complain of lost quorum & pause the volumes.
The two "full" peers host the VMs and data, and the arbiter is a VM on a
neighbouring cluster.
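The settings I'm checking first, in case something's off the defaults
(volume name is a placeholder):

gluster volume get <VOLNAME> cluster.quorum-type
gluster volume get <VOLNAME> cluster.quorum-count
gluster volume get <VOLNAME> cluster.server-quorum-type

With replica 3 arbiter 1 and client quorum on auto, one data peer down
should still leave 2 of 3 bricks up, so I don't see why quorum was lost.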
Hi Guys,
I had to restart our datacenter yesterday, but since doing so a number of
the files on my gluster share have been stuck, marked as healing. After no
signs of progress, I manually set off a full heal last night, but after
24hrs, nothing's happened.
The gluster logs all look normal, and
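For reference, the commands I've been using to kick off and monitor the
heals (volume name is a placeholder):

gluster volume heal <VOLNAME> full
gluster volume heal <VOLNAME> info
gluster volume heal <VOLNAME> statistics heal-count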
info output
> 4 - getxattr of one of the file, which needs healing, from all the bricks.
> 5 - What lead to the healing of file?
> 6 - gluster v status
> 7 - glustershd.log output just after you run full heal or index heal
>
>
> Ashish
>
> --
they were
created after it went offline.
How do I fix this? Is it possible to locate the correct gfids somewhere &
redefine them on the files manually?
Cheers,
Doug
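For reference, this is how I've been reading the gfids on each brick
(paths are placeholders); as I understand it, each copy also has a hard
link under the brick's .glusterfs/<aa>/<bb>/<gfid> tree:

getfattr -n trusted.gfid -e hex --absolute-names /bricks/b1/path/to/file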
--
> *From: *"Gambit15"
> *To: *"Ashish Pandey"
> *Cc: *"gluste
mentioned here
> https://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
>
Is my problem with .glusterfs though? I'd be super cautious removing the
entire directory unless I'm sure that's the solution...
Cheers,
> On Tue, Jul 3, 2018 at 4:27 PM, Gambit15 wrote:
>
>>
m kvm 132 Jun 30 14:55 hosted-engine.metadata ->
/var/run/vdsm/storage/98495dbc-a29c-4893-b6a0-0aa70860d0c9/99510501-6bdc-485a-98e8-c2f82ff8d519/71fa7e6c-cdfb-4da8-9164-2404b518d0ee
So if I delete those two symlinks & the files they point to, on one of the
two bricks, will that resolve the split-brain?
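Or would the safer route be to let gluster pick a source rather than
deleting by hand? Something like this, with host, brick and path as
placeholders:

gluster volume heal <VOLNAME> split-brain source-brick <HOST>:<BRICKPATH> \
  /path/to/hosted-engine.metadata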
Hi Guys,
I've got a distributed replicated 2+1 (arbiter) volume with sharding
enabled, running 3.8.8, for VM hosting, and I need to expand it before I
leave over the holiday break.
Each server's brick is mounted on its own LV, so my plan is the following
with each server, one-by-one:
1. Take
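For the disk side of each step, I'm planning the usual LVM grow, sketched
here with placeholder names and assuming XFS bricks:

lvextend -L +10T /dev/vg_bricks/lv_brick1
xfs_growfs /bricks/b1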
Hi Guys,
I've got a distributed replica 2+1 (rep 3 arbiter 1) cluster, and it
appears a shard has been assigned different GFIDs on each replica set.
===
[2018-11-29 10:05:12.035422] W [MSGID: 109009]
[dht-common.c:2148:dht_lookup_linkfile_cbk] 0-data-novo-dht:
Hey,
The op-version for each release doesn't seem to be documented anywhere,
not even in the release notes. Does anyone know where this information can
be found?
In this case, I've just upgraded from 3.8 to 3.12 and need to update my
pool's compatibility version, however I'm sure it'd be useful
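For reference, the commands involved; as I understand it, the current and
maximum supported values can be queried directly (the figure for 3.12
below is my best guess, so please verify before setting):

gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version
gluster volume set all cluster.op-version 31200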