ain qemu (this looks pretty strange to me)
I doubt that; it works well, but I do see a difference when I rsync stuff in the
VMs.
Not a huge deal though, the bandwidth isn't of great interest to me, the latency
is.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
been solved!
Should be out hopefully in the next few days, last I heard.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
's a lot better
and the client was happy.
I don't know how much the update contributed, but I assume the ping is
playing a big part in this.
I could send you the bonnie++ results from those two tests tomorrow if you want,
I kept them at work.
I'll probably test that again with 3.7.12 just to compare.
; Fax.: +49-331-9773122
> Email : klemens.kit...@cs.uni-potsdam.de
> XMPP: kit...@jabber.ccc.de
>
> gpg --recv-keys --keyserver pgp.mit.edu 6EA09333
>Just a thought - do you have bitrot detection enabled? (I don't)
Yes, I did configure it to do a daily scrub when I reinstalled last time,
when I was wondering if maybe it was hardware. It doesn't seem to have detected
anything.
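(For reference, a minimal sketch of what that setup looks like with the bitrot CLI; the volume name "myvol" is hypothetical and the scrub status sub-command may not exist on older 3.7 releases:)
  gluster volume bitrot myvol enable                   # start the bitrot daemon on the volume
  gluster volume bitrot myvol scrub-frequency daily    # scrub once a day
  gluster volume bitrot myvol scrub status             # check whether the scrubber flagged anything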
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> cluster.shd-max-threads:4
> cluster.locking-scheme:granular
So you had no problems before setting those? I'm currently reinstalling my test
servers, really hoping, as you can imagine, that 3.7.12 fixes the corruption
problem.
I hope there isn't a new horrible bug ..
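(For reference, those two options are plain volume settings; a minimal sketch with a hypothetical volume name, and note they only exist from 3.7.12 on:)
  gluster volume set myvol cluster.shd-max-threads 4        # let the self-heal daemon heal several entries in parallel
  gluster volume set myvol cluster.locking-scheme granular  # take granular locks during heals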
--
Kevin Lemonnier
replace-brick also brought everything
down, but I guess that was the
joy of 3.7.6 :).
Thanks !
On Mon, Apr 18, 2016 at 08:17:05PM +0530, Krutika Dhananjay wrote:
> On Mon, Apr 18, 2016 at 8:02 PM, Kevin Lemonnier <lemonni...@ulrar.net>
> wrote:
>
> > I will try migrating to 3.7
starvation, something that is a well known issue.
>There is work happening to improve this in version 3.8:
>https://bugzilla.redhat.com/show_bug.cgi?id=1269461
> On 19 May 2016 at 09:58, Kevin Lemonnier <lemonni...@ulrar.net> wrote:
>
> That's a different probl
Looks like it crashed; if I do a /etc/init.d/gluster-server stop on the third
node and
ps aux | grep gluster after that, there are still a lot of processes listed.
Should I kill everything ?
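(For reference, a hedged sketch of one way to make sure nothing gluster-related is left running; the init script name depends on the package, and the brick / self-heal processes are separate from the management daemon, so stopping the service alone doesn't kill them:)
  /etc/init.d/glusterfs-server stop   # management daemon
  pkill glusterfsd                    # brick processes
  pkill glusterfs                     # self-heal daemon, gluster NFS, fuse clients
  ps aux | grep gluster               # verify nothing is left before starting it again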
On Mon, May 23, 2016 at 04:31:28PM +0200, Kevin Lemonnier wrote:
> Hi,
>
> We have in productio
status lists the 3 bricks and I restarted the daemon
on the third node to be sure, still the same result.
The brick is stored on xfs on /mnt/vg1-storage and all the files seem to be
there,
it's not read-only or anything.
Where can I check ?
Thanks
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
a small
amount of time. Without sharding, the VM is frozen as long as the whole disk
hasn't
been healed, which will take hours on big clusters.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
e problem. I'll
try to kill it at night; for now the other two nodes are running fine and I just
can't afford to freeze all the VMs for the hour it'll take to heal right now.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
, seems to be resolved. I hope it won't do that again,
though.
Thanks
On Mon, May 23, 2016 at 07:05:56PM +0200, Kevin Lemonnier wrote:
> On Mon, May 23, 2016 at 04:06:06PM +0100, Anant wrote:
> >Have you tried to stop all services of gluster ?? Like - glusterd ,
> >
Even hour-long heals never caused any problem to the VM, except of course the
fact that
the VM froze and all services on it stopped responding during the heal.
We didn't have sharding at first, and the behaviour is pretty much the same,
but the heals take
forever.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
issue tomorrow on my machines with the steps
>that Lindsay provided in this thread. I will let you know the result soon
>after that.
>
>-Krutika
>
>On Wednesday, May 18, 2016, Kevin Lemonnier <lemonni...@ulrar.net> wrote:
>> Hi,
>>
>> Some
storage
would corrupt too, right ?
Could I be missing some critical configuration for VM storage on my gluster
volume ?
On Mon, May 23, 2016 at 01:54:30PM +0200, Kevin Lemonnier wrote:
> Hi,
>
> I didn't specify it but I use "localhost" to add the storage in pro
this mailing list, did I miss some doc saying you should be using
directsync with glusterfs ?
On Tue, May 24, 2016 at 11:33:28AM +0200, Kevin Lemonnier wrote:
> Hi,
>
> Some news on this.
> I actually don't need to trigger a heal to get corruption, so the problem
> is not the healing.
y side, you really
should test it yourself to see how it works for you.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
, but nothing
"stuck" there,
they always go away in a few seconds getting replaced by other shards.
Thanks
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
As requested on IRC, here are the logs on the 3 nodes.
On Thu, May 12, 2016 at 04:03:02PM +0200, Kevin Lemonnier wrote:
> Hi,
>
> I had a problem some time ago with 3.7.6 and freezing during heals,
> and multiple persons advised to use 3.7.11 instead. Indeed, with that
> ve
As discussed, the missing ipvr50 log file.
On Thu, May 12, 2016 at 04:24:14PM +0200, Kevin Lemonnier wrote:
> As requested on IRC, here are the logs on the 3 nodes.
>
> On Thu, May 12, 2016 at 04:03:02PM +0200, Kevin Lemonnier wrote:
> > Hi,
> >
> > I had a probl
disk.
Am I really the only one with that problem ?
Maybe one of the drives is dying too, who knows, but SMART isn't saying
anything ..
On Thu, May 12, 2016 at 04:03:02PM +0200, Kevin Lemonnier wrote:
> Hi,
>
> I had a problem some time ago with 3.7.6 and freezing during heals,
>
instead of the application
would avoid that.
You should test both solutions though, see what fits best !
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
that Lindsay provided in this thread. I will let you know the result soon
> after that.
>
> -Krutika
>
> On Wednesday, May 18, 2016, Kevin Lemonnier <lemonni...@ulrar.net> wrote:
> > Hi,
> >
> > Some news on this.
> > Over the week end the RAID
On Wed, May 18, 2016 at 06:54:57PM +0200, Gandalf Corvotempesta wrote:
> Il 18/05/2016 13:55, Kevin Lemonnier ha scritto:
> > Yes, that's why you need to use sharding. With sharding, the heal is
> > much quicker and the whole VM isn't frozen during the heal, only the
> > sh
r someone who actually knows to answer !
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
s are usually a huge number
of small-ish files, so locking them during a heal is pretty much invisible
(healing a
2 KB file is almost instant).
If you had huge files on this, without sharding, it would have been different :)
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
have to actually re-open the file to
trigger a heal ?
Any idea on how to prevent that ? It's a lot better than 3.7.6 'cause it can be
fixed in a minute,
but that's still not great to explain to the clients.
Thanks
On Mon, Apr 25, 2016 at 02:01:09PM +0200, Kevin Lemonnier wrote:
> Hi,
>
to read only
but did complain about errors in the console, so I just rebooted them to be
sure.
They were fine during the heal, they started complaining after the heal
finished,
but I'm guessing that's just because they weren't accessing their disks a lot.
On Mon, May 02, 2016 at 11:57:19AM +0200, K
wrote:
>Could you share the client logs and information about the approx
>time/day
>when you saw this issue?
>
>-Krutika
>
>On Sat, Apr 16, 2016 at 12:57 AM, Kevin Lemonnier
><lemonni...@ulrar.net>
>wrote:
>
>> Hi,
>>
>> We have a small gluste
statistics heal-count. I assumed it was just syncing the
writes of the different
VMs, and that it was expected.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
ackground, which is all the more helpful for preventing
> starvation of vms during client heal.
>
> Considering these factors, I think it would be better if you upgraded your
> machines to 3.7.10.
>
> Do let me know if migrating to 3.7.10 solves your issues.
>
> -Krutik
>
> Actually I think Kevin Lemonnier is using sharding with 3.7.6
>
Yes, it's been running in production with sharding since February. Worked
mostly fine I have to say,
apart from the heal problems we had Friday. Previous heals, when the cluster
was a lot less
heavily used, went fine.
For now we are using
only ~200 mb/s
max on it.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
the number of nodes total?
I would hate to get into a split-brain because we upgraded to an even number of
nodes.
Thanks,
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> /var/log/glusterfs/.log
>
> -Krutika
>
> On Sun, Apr 17, 2016 at 9:37 PM, Kevin Lemonnier <lemonni...@ulrar.net>
> wrote:
>
> > I believe Proxmox is just an interface to KVM that uses the lib, so if I'm
> > not mistaken there isn't client logs ?
> >
ing on?
>
>-Krutika
>On Wed, May 25, 2016 at 1:28 PM, Kevin Lemonnier <lemonni...@ulrar.net>
>wrote:
>
> > What's the underlying filesystem under the bricks?
>
> I use XFS, I read that was recommended. What are you using ?
>
directsync is better than nothing, but still doesn't solve the problem.
I really can't use this in production; the VM goes read-only after a few
days because it saw too many I/O errors. I must be missing something.
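(For context, directsync here is just the qemu drive cache mode; a minimal sketch of passing it when starting qemu by hand over libgfapi, with hypothetical node, volume and image names:)
  qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=gluster://node1/datastore/vm1.qcow2,format=qcow2,if=virtio,cache=directsync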
On Tue, May 24, 2016 at 12:24:44PM +0200, Kevin Lemonnier wrote:
> So the VM w
>What's the underlying filesystem under the bricks?
I use XFS, I read that was recommended. What are you using ?
Since yours seems to work, I'm not opposed to changing !
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
sue, then it could
> possibly be because of a bug in AFR
> that Pranith recently fixed.
> To confirm if that is indeed the case, could you tell me if you saw
> the pause after a brick (single brick) was
> down while IO was going on?
>
> -Krutika
>
the error
as usual, just attached a screenshot of the VM's console, might help.
I can see that every time the VM powers down, GlusterFS complains about an inode
still
active, might it be the problem ?
Thanks for the help !
On Wed, May 25, 2016 at 04:10:02PM +0200, Kevin Lemonnier wrote:
> Just
"64M" in option "shard-block-size"
Was there a change to the format of the shard sizes I missed ?
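(For reference, a sketch of how the shard options are set on a volume, with a hypothetical volume name; the value takes a unit suffix such as 64MB, which may be what the error about "64M" is complaining about:)
  gluster volume set myvol features.shard on
  gluster volume set myvol features.shard-block-size 64MB   # only affects newly created files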
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
cluster.quorum-type: auto
performance.readdir-ahead: on
Thanks
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
e okay.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
size is an option to set, not a parameter during the volume creation.
It is, but existing shards won't be touched: you'll need to move the file out
and back
in to apply the new shard size.
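(A minimal sketch of that, done through the fuse mount so the copy is written with the new shard size; paths and names are hypothetical, and the VM should be stopped or migrated off the image first:)
  cd /mnt/datastore                              # fuse mount of the volume
  cp --sparse=always vm1.qcow2 vm1.qcow2.new     # rewritten with the new shard size
  mv vm1.qcow2 vm1.qcow2.old
  mv vm1.qcow2.new vm1.qcow2
  # delete vm1.qcow2.old once the VM boots fine again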
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
same bug.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
26:52.485443] C
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-5: server
172.16.0.50:49153 has not responded in the last 42 seconds, disconnecting.
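(The 42 seconds in that message is the default network.ping-timeout; it can be lowered per volume if clients should give up on a dead brick faster, though very low values cause spurious disconnects. A sketch with a hypothetical volume name:)
  gluster volume set myvol network.ping-timeout 10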
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
ster.org/pub/gluster/glusterfs/3.7/3.7.12/Debian/jessie/apt
jessie main
Should include the patch, right ?
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> Which NFS server are you using? the std one built into proxmox/debian?
> how do you handle redundancy?
I mean the one in gluster; I added the gluster volume as NFS in Proxmox
instead of as gluster.
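(For reference, a hedged sketch of declaring that as an NFS storage from the Proxmox CLI; storage id, IP and volume name are hypothetical, and the gluster built-in NFS server only speaks NFSv3:)
  pvesm add nfs gluster-nfs --server 10.0.0.1 --export /datastore \
      --content images --options vers=3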
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
at least not in the latest proxmox.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
ving the disk and before erasing the snapshots.
> Otherwise they’d crash when removing the snapshot, something in qemu not
> quite right I imagine.
>
All of this sounds weird, I've never had a problem like this. But I'm not using
oVirt, maybe it's a problem with that.
--
Kevin Lemonnier
M, which can be annoying to get depending on what
you are using.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
the qemu process somewhere.
That's where you'll find the libgfapi logs.
Never used oVirt so I can't really help on that :/
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
re if you do find it, I think
you're
neither the first nor the last person to ask that question here.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
depending on the size. That's just not acceptable. Unless that's
related to the bug in the heal algorithm and it's been fixed ? Not sure.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
uses some qemu command behind the scenes so you can do it even
if you aren't using Proxmox though, you just need to figure out the syntax.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> So, is the setup wrong or does gluster not provide high availability?
How exactly is it set up ?
libgfapi ? fuse ? NFS mount ?
It should work, we're using Proxmox at work (which uses KVM) with gluster
and it does work well. What version of gluster are you using ?
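(For a plain fuse mount, HA mostly comes from the client fetching the volfile from any node and then talking to all bricks itself; a minimal sketch of such a mount with hypothetical hostnames and volume name:)
  mount -t glusterfs -o backupvolfile-server=node2 node1:/datastore /mnt/datastore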
--
Kevin Lemonnier
you have access to.
I'm betting you'll see your error there.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
add the two arbiters at once.
If you can afford downtime, creating a new volume is clearly a lot simpler :)
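(For the record, a hedged sketch of adding both arbiters in one go on a 2x2 volume; hostnames and brick paths are hypothetical and the arbiter syntax needs a recent enough release:)
  gluster volume add-brick myvol replica 3 arbiter 1 \
      arb1:/bricks/arb-sub1 arb1:/bricks/arb-sub2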
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> One more actually, I don't know how you are handling HA with glusterfs,
> but I believe that if you are not using NFS Ganesha you have a single point of
> failure every time, isn't it ?
Not if they are using the gluster fuse client, which is probably the case.
--
Kevin Lemonnier
will work a lot better during the live migration
using NFS than SSHFS :)
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> They have different ovirt mgmt.
I don't use oVirt, but when I need to migrate
a VM between two versions I just mount the other
cluster using NFS instead of fuse / libgfapi,
then I just use the live disk migration of KVM.
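(A minimal sketch of that mount, assuming the built-in gluster NFS server on the other cluster; hostname, volume and mount point are hypothetical and it only does NFSv3:)
  mount -t nfs -o vers=3,nolock other-node1:/datastore /mnt/othercluster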
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
dangerous, and
should probably be written in big fat red letters in the doc.
Maybe it is and I just missed it though, but I'm pretty sure
I'm not the only one.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
So maybe it's better in newer releases.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> - or change the shard setting inplace? (I don't think that would work)
It will only affect new files, so you'll need to copy the current images
to new names. That's why I was talking about live migrating the disk
to the same volume, that's how I did it last year.
--
Kevin Lemonnier
ble
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> server.allow-insecure: on
> perfor
and stuff like that all the time, it's a whole lot of small-file load).
Thanks
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
> We fixed this (thanks to Satheesaran for recreating the issue and to
> Raghavendra G and Pranith for the RCA) as recently as last week.
> The bug was in DHT-shard interaction.
Ah, that's great news !
I'll give the next releases a try for our next cluster then, thanks for
the info.
big deal, at least yet.
Since VMs are
easy enough to live migrate from one volume to another, it seemed like the
easiest solution.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> Sure sounds like what corrupted everything for me a few months ago :).
Not quite though, the corruption occurred before the rebalance in my case.
Maybe you just didn't realise it before starting the rebalance ?
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
shards do exist
(and are bigger) on the "live" volume. I don't understand why, now that I have
removed the new bricks,
everything isn't working like before ..
On Mon, Sep 05, 2016 at 11:06:16PM +0200, Kevin Lemonnier wrote:
> Hi,
>
> I just added 3 bricks to a volume and all th
Hi,
I just added 3 bricks to a volume and all the VMs are getting I/O errors now.
I rebooted a VM to see and it can't start again, am I missing something ? Is
the rebalance required
to make everything run ?
That's urgent, thanks.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
no choice ..
>On 6/09/2016 8:00 AM, Kevin Lemonnier wrote:
>
> I tried a fix-layout, and since that didn't work I removed the brick (start
> then commit when it showed
> completed). Not better, the volume is now running on the 3 original bricks
> (replica 3) but the VMs
> a
>Provide output of `gluster volume info`.
>-Krutika
>On Tue, Sep 6, 2016 at 4:29 AM, Kevin Lemonnier <lemonni...@ulrar.net>
>wrote:
>
> > - What was the original (and current) geometry? (status and info)
>
> It was a 1x3 that I was trying
on this volume,
but for one of the VMs we'd need to recover that disk, to see if we might be able
to extract some data from it with some time.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
2, removing a brick from
the first replica,
then add 2 of the new servers as disperse (at that point the volume would be
2x2),
then go up to replica 3 by adding the third one plus the one I removed earlier.
That should work, right ? Is there no other "better" way of doing it ?
Thanks,
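(A hedged sketch of the commands that plan maps to, with hypothetical host and brick names; remove-brick with a lower replica count drops one brick per replica set, add-brick with a higher one adds one per set:)
  gluster volume remove-brick myvol replica 2 host3:/bricks/b1 force           # 1x3 -> 1x2
  gluster volume add-brick myvol host4:/bricks/b1 host5:/bricks/b1             # 1x2 -> 2x2
  gluster volume add-brick myvol replica 3 host6:/bricks/b1 host3:/bricks/b1   # 2x2 -> 2x3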
ng works pretty well now though !
Now I can't try tiering, unfortunately I don't have the option of having
hardware for
that, but maybe that would indeed solve it if it makes looking up lots of tiny
files
quicker.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
is to either run a VM on top of
GlusterFS, which isn't always possible, or just use something else. I really
hope the improvements Pranith mentioned will help in that regard, I'd love
to use GlusterFS for more than just hosting VMs.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
. We are using 3.7.12 and 3.7.15 though, didn't try 3.8 yet.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
, but that doesn't seem like an ideal sales pitch :
"You can do whatever you want on the VM, but try to avoid MySQL, it crashes".
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
like reject * rejects everything without checking the allow ..
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
: enable
cluster.server-quorum-type: server
cluster.quorum-type: auto
performance.readdir-ahead: on
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
authorized by default, but it looks
like it wasn't !
I don't need to authorize the domains, right, just the IPs ?
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
addresses, is that
doable ?
A quick Google search shows people doing it by editing the volfile, but I
suspect
that's an old method, right ? There must be a way to tell gluster to just not
listen
on the public IP.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
g I ended up blocking everything with iptables, which
works for this cluster but doesn't for others,
so that's not a good fix for me. I wish I could just tell gluster to bind to a
specific IP.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
want to test and give us feedback, that
>would be very helpful and we can move very quickly to get it in GA state.
>On Tue, Sep 27, 2016 at 5:13 PM, André Bauer <aba...@magix.net> wrote:
>
> Ditto...
> On 24.09.2016 at 17:29, Kevin Lemonnier wrote:
> > O
e used to be running VMs off that.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
ce, but I didn't even get to that point :(
That's the only bug I've experienced so far in 3.7.12, everything else
(including increasing the replica count) seems to be working perfectly fine.
That's why I'm still installing that version, even though it's outdated.
Thanks !
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
gluster volume add-brick VMs host1:/brick host2:/brick host3:/brick force
(I have the same without force just before that, so I assume force is needed)
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
Ha, glad you could reproduce this ! (Well, all things considered)
Looks very much like what I had indeed. So it's still a problem
in recent versions, glad I didn't try again then.
Thanks for taking the time, let's hope that'll help them :)
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
Unfortunately, you need to start qemu by hand
to get those as far as I know.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
around.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
.12. It took a while to finally get a version that worked for us,
so we stayed on it once we got it. Maybe that problem has already been fixed in
later versions.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
balance when everything came crashing down, hoping that
might fix it, but it didn't.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
tofs
In /etc/auto.map.d/master.autofs :
applicatif -fstype=glusterfs,defaults,_netdev,backupvolfile-server=other_node localhost:/applicatif
With this, /mnt/autofs/applicatif will automatically mount the /applicatif
volume.
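(For completeness, the auto.master side that goes with it; a sketch assuming the mount point and map path shown above:)
  # /etc/auto.master
  /mnt/autofs /etc/auto.map.d/master.autofs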
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
Now if sleep 10 works for you great, but I guess you wouldn't be posting
here if it did :D
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
Two processes accessing the same file won't end up well, I think.
Dovecot has its own replication mechanism, you should probably
take a look at dsync.
--
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111