Re: [Gluster-users] Slow write times to gluster disk

2018-07-13 Thread Raghavendra Gowdappa
On Fri, Jul 13, 2018 at 5:00 AM, Pat Haley wrote: > > Hi Raghavendra, > > We were wondering if you have had a chance to look at this again, and if > so, did you have any further suggestions? > Sorry Pat. Too much work :). I'll be working on a patch today to make sure read-ahead doesn't flush
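For anyone hitting the same slow-write symptom while the patch is pending: read-ahead behaviour is governed by a volume option and can be inspected or disabled. A minimal sketch, assuming a hypothetical volume named data-vol:

    # check the current read-ahead setting (volume name is hypothetical)
    gluster volume get data-vol performance.read-ahead

    # temporarily disable the read-ahead translator while diagnosing slow writes
    gluster volume set data-vol performance.read-ahead off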

Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work

2018-07-13 Thread Marcus Pedersén
Hi Kotresh, Yes, all nodes have the same version, 4.1.1, on both master and slave. All glusterd processes are crashing on the master side. Will send logs tonight. Thanks, Marcus Marcus Pedersén, System Administrator, Interbull Centre Sent from my phone Den 13
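When collecting those logs, a crashing glusterd usually leaves traces in its own log file; a minimal sketch of what to gather first, assuming the default log locations:

    # main glusterd log on each master node (default path)
    tail -n 200 /var/log/glusterfs/glusterd.log

    # geo-replication session logs, if the crash correlates with gsyncd activity
    ls /var/log/glusterfs/geo-replication/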

[Gluster-users] Insanely long times to run qemu libgfapi operations

2018-07-13 Thread Ian Geiser
Greetings, I am having problems diagnosing an issue with qemu connecting to gluster via gfapi. In my current setup, I am using a 6-node "Distributed-Disperse" configuration on glusterfs 4.0.2 on Ubuntu 18.04. Below is my current configuration: root@hio-5:~# gluster volume info shared
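For context, QEMU reaches a Gluster volume over libgfapi through gluster:// image URIs, bypassing the FUSE mount. A minimal sketch of such an invocation, reusing the host and volume names from the post (the image name is hypothetical):

    # create a qcow2 image directly on the "shared" volume over libgfapi
    qemu-img create -f qcow2 gluster://hio-5/shared/test.qcow2 10G

    # boot a guest from the same image via libgfapi
    qemu-system-x86_64 -m 2048 \
      -drive file=gluster://hio-5/shared/test.qcow2,format=qcow2,if=virtio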

Re: [Gluster-users] issue with self-heal

2018-07-13 Thread Brian Andrus
Your message means something (usually glusterfsd) is not running quite right, or at all, on one of the servers. If you can tell which it is, you need to stop/restart glusterd and glusterfsd. Note: sometimes just stopping them doesn't really stop them. You need to do a killall -9  for glusterd,
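A minimal sketch of the stop/kill/verify/restart sequence described above, assuming a systemd-based distribution:

    # stop the management daemon; brick processes (glusterfsd) often survive this
    systemctl stop glusterd

    # force-kill anything that did not actually stop
    killall -9 glusterfsd glusterd

    # verify nothing gluster-related is still running
    pgrep -af gluster

    # starting glusterd respawns brick processes for started volumes
    systemctl start glusterd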

[Gluster-users] File and directories: best solution by design

2018-07-13 Thread Marcello Orizi
Hi all, we have two servers with a shared filesystem managed by GlusterFS. Some days ago, in order to do some maintenance tasks, we needed to shut down one of the two servers. After some hours, we restarted the server and all was OK. GlusterFS restarted correctly and it was syncing files. The

[Gluster-users] issue with self-heal

2018-07-13 Thread hsafe
Hello Gluster community, After several hundred GB of data writes (small images, 100k to 1M) into a 2x replicated glusterfs server setup, I am facing an issue with the healing process. Earlier, heal info returned the bricks and nodes and the fact that there were no failed heals; but now it gets to the
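For reference, the heal state described here is queried through the self-heal CLI; a minimal sketch, assuming a hypothetical volume named gv0:

    # list entries pending heal on each brick (volume name is hypothetical)
    gluster volume heal gv0 info

    # per-brick count of pending heals, handy when the full listing is huge or hangs
    gluster volume heal gv0 statistics heal-count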

Re: [Gluster-users] Geo-Replication memory leak on slave node

2018-07-13 Thread Sunny Kumar
Hi Mark, Currently I am looking at this issue (Kotresh is busy with some other work), so can you please share the latest log with me? Thanks, Sunny On Fri, Jul 13, 2018 at 12:41 PM Mark Betham wrote: > > Hi Kotresh, > > I was wondering if you had found any time to take a look at the issue I am
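For anyone gathering the requested logs: geo-replication writes to dedicated directories on both ends of the session. A minimal sketch of where to look, assuming default log paths:

    # on the master: per-session gsyncd logs
    ls /var/log/glusterfs/geo-replication/

    # on the slave, where the leak is reported: slave-side gsyncd logs
    ls /var/log/glusterfs/geo-replication-slaves/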

Re: [Gluster-users] Upgrade to 4.1.1 geo-replication does not work

2018-07-13 Thread Kotresh Hiremath Ravishankar
Hi Marcus, Is the gluster geo-rep version the same on both master and slave? Thanks, Kotresh HR On Fri, Jul 13, 2018 at 1:26 AM, Marcus Pedersén wrote: > Hi Kotresh, > > I have replaced both files (gsyncdconfig.py >
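A quick way to answer the version question on each node; a minimal sketch, assuming an RPM-based distribution for the package query:

    # print the installed gluster version on the local node
    gluster --version | head -n 1

    # on RPM-based systems, geo-replication ships as a separate package
    rpm -q glusterfs-geo-replication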