Re: [Gluster-users] IPv4 / IPv6 doesn't work

2017-02-09 Thread Atin Mukherjee
We had seen a similar issue and Rajesh has provided a detailed explanation of why at [1]. I'd suggest you not change glusterd.vol but instead execute "gluster volume set transport.address-family inet" to allow Gluster to listen on IPv4 by default. [1] https://bugzilla.redhat.com/show_bug.cgi?id=14
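For reference, a minimal sketch of that command with a volume name placeholder added (behaviour may vary by release):
    # make an existing volume listen on IPv4
    gluster volume set <volname> transport.address-family inet
    # confirm the setting
    gluster volume get <volname> transport.address-family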

Re: [Gluster-users] Machine becomes its own peer

2017-02-17 Thread Atin Mukherjee
On Fri, Feb 17, 2017 at 11:19 AM, Scott Hazelhurst < scott.hazelhu...@wits.ac.za> wrote: > > Dear all > > Last week I posted a query about a problem I had with a machine that had > failed but the underlying hard disk with the gluster brick was good. I’ve > made some progress in restoring. I now ha

Re: [Gluster-users] Machine becomes its own peer

2017-02-17 Thread Atin Mukherjee
Joe - Scott had sent me a private email and I provided the workaround; for some (unknown) reason all the nodes ended up having two UUIDs for a particular peer, which caused it. I've asked for the log files to debug further. On Fri, 17 Feb 2017 at 21:58, Joe Julian wrote: > Does your repaired ser

Re: [Gluster-users] volume start: data0: failed: Commit failed on localhost.

2017-02-25 Thread Atin Mukherjee
On Fri, Feb 24, 2017 at 11:06 PM, Deepak Naidu wrote: > Thanks Rafi for the workaround. > > *>> To find the root cause we need to get logs for the first failure of > volume start or volume stop.* > > Below, exact steps to re-produce the issue and attached log file contents > from /varlog/gluster fol

Re: [Gluster-users] Old bricks from deleted volumes

2017-03-03 Thread Atin Mukherjee
On Fri, Mar 3, 2017 at 12:43 PM, J.R. W wrote: > Hello, > > I have bricks where their volume doesn't exist anymore. Is there a way I > can add these bricks to a new volume? > > Essentially - > > gluster volume create new-volume host1:/export/brick1 > host2:/export/brick2 > > /export/brick

Re: [Gluster-users] Unable to Mount on Non-Server Machines

2017-03-13 Thread Atin Mukherjee
Can you please mention the client version of the mount which you were unable to mount? Did you upgrade from 3.9 to 3.10? Once I have this information, I'll try to reproduce this issue and get back. On Fri, 10 Mar 2017 at 22:42, Andrew Kester wrote: > I have a Gluster volume that I'm unable to

Re: [Gluster-users] Unable to Mount on Non-Server Machines

2017-03-13 Thread Atin Mukherjee
mentioned as one of the IP/Host & then the mount request fails with authentication failure. On Mon, Mar 13, 2017 at 9:10 PM, Atin Mukherjee wrote: > Can you please mention the client version of the mount which you were > unable to mount? Did you upgrade from 3.9 to 3.10? Once I have the

Re: [Gluster-users] Proposal to deprecate replace-brick for "distribute only" volumes

2017-03-16 Thread Atin Mukherjee
Makes sense. On Thu, 16 Mar 2017 at 06:51, Raghavendra Talur wrote: > Hi, > > In the last few releases, we have changed replace-brick command such > that it can be called only with "commit force" option. When invoked, > this is what happens to the volume: > > a. distribute only volume: the given

Re: [Gluster-users] auth failure after upgrade to GlusterFS 3.10

2017-03-20 Thread Atin Mukherjee
We are working on a patch and target is to get it in 3.10.1. Apologies for the inconvenience. On Mon, 20 Mar 2017 at 04:02, Yong Zhang wrote: > Someone already logged a bug here: > > https://bugzilla.redhat.com/show_bug.cgi?id=1429117 > > > > I have the same issue, any solutions? > _

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.10.1: Scheduled for the 30th of March

2017-03-29 Thread Atin Mukherjee
WIP: https://review.gluster.org/16957

Re: [Gluster-users] Unable to Mount on Non-Server Machines

2017-03-30 Thread Atin Mukherjee
This issue is now fixed in 3.10.1. On Tue, 21 Mar 2017 at 19:07, David Chin wrote: > I'm facing the same issue as well. I'm running the version 3.10.0-2 for > both server and client. > > Works fine when the client and server are on the same machine. > > > I did a telnet to the opened port relate

Re: [Gluster-users] fuse mount fails after upgrading to 3.10.0

2017-04-02 Thread Atin Mukherjee
Please upgrade to 3.10.1, we had a regression in 3.10.0. On Sun, 2 Apr 2017 at 17:43, Amudhan P wrote: > I have upgraded my cluster from gluster 3.8.3 to 3.10.0. After upgrade I am > not able to mount volume using 3.10.0 client throwing below error message > in log. > But same volume able to moun

Re: [Gluster-users] fuse mount fails after upgrading to 3.10.0

2017-04-02 Thread Atin Mukherjee
any issue in upgrading from 3.8.3 > to 3.10.0. > > regards > Amudhan P > > > > > > > > On Sun, Apr 2, 2017 at 5:45 PM, Atin Mukherjee > wrote: > > Please upgrade to 3.10.1, we had a regression on 3.10.0. > > On Sun, 2 Apr 2017 at 17:43, Amudhan P

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-13 Thread Atin Mukherjee
o/s/BkgH8sdtg# >> >> [7] Glusto tests: https://github.com/gluster/glusto-tests

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-16 Thread Atin Mukherjee
On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL wrote: > Hi All, > > Here we have below steps to reproduce the issue > > Reproduction steps: > > > > root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force > - create the gluster volume > > volume create: brick: success: please s

Re: [Gluster-users] OOM Kills glustershd process in 3.10.1

2017-04-25 Thread Atin Mukherjee
On Tue, Apr 25, 2017 at 9:22 PM, Amudhan P wrote: > Hi Pranith, > > if I restart glusterd service in the node alone will it work. bcoz I feel > that doing volume force start will trigger bitrot process to crawl disks in > all nodes. > Have you enabled bitrot? If not then the process will not be

Re: [Gluster-users] [Gluster-devel] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-03 Thread Atin Mukherjee
On Wed, May 3, 2017 at 3:41 PM, Raghavendra Talur wrote: > On Tue, May 2, 2017 at 8:46 PM, Nithya Balachandran > wrote: > > > > > > On 2 May 2017 at 16:59, Shyam wrote: > >> > >> Talur, > >> > >> Please wait for this fix before releasing 3.10.2. > >> > >> We will take in the change to either pr

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-03 Thread Atin Mukherjee
I think there is still some pending work in a few of the gluster perf xlators to make this work completely. CCed the relevant folks for more information. Can you please turn off all the perf xlator options as a workaround to move forward? On Wed, May 3, 2017 at 8:04 PM, Abhijit Paul wrote: > Dea

Re: [Gluster-users] Can not delete a volume

2017-05-07 Thread Atin Mukherjee
Can you please paste the output of 'gluster peer status' from the node where volume delete CLI failed? Also during this failure did you find out which node(s) are down (as per the command) from the glusterd logs? On Sun, 7 May 2017 at 21:53, Tahereh Fattahi wrote: > Hi > > When I want to delete

Re: [Gluster-users] Can not delete a volume

2017-05-07 Thread Atin Mukherjee
transaction. > > On Sun, May 7, 2017 at 9:00 PM, Atin Mukherjee > wrote: > >> Can you please paste the output of 'gluster peer status' from the node >> where volume delete CLI failed? Also during this failure did you find out >> which node(s) are down (as p

Re: [Gluster-users] Empty info file preventing glusterd from starting

2017-05-09 Thread Atin Mukherjee
On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL wrote: > + Muthu-vingeshwaran > > On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL > wrote: > >> Hi Atin/Team, >> >> We are using gluster-3.7.6 with setup of two brick and while restart of >> system I have seen that the glusterd daemon is getting

Re: [Gluster-users] Empty info file preventing glusterd from starting

2017-05-09 Thread Atin Mukherjee
> Abhishek > > On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee > wrote: > >> >> >> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL > > wrote: >> >>> + Muthu-vingeshwaran >>> >>> On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWA

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-12 Thread Atin Mukherjee
Can you please provide output of following from all the nodes: cat /var/lib/glusterd/glusterd.info cat /var/lib/glusterd/peers/* On Wed, May 10, 2017 at 5:02 PM, Pawan Alwandi wrote: > Hello, > > I'm trying to upgrade gluster from 3.6.2 to 3.10.1 but don't see the > glusterfsd and glusterfs pr

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-13 Thread Atin Mukherjee
I have already asked for the following earlier: Can you please provide output of following from all the nodes: cat /var/lib/glusterd/glusterd.info cat /var/lib/glusterd/peers/* On Sat, 13 May 2017 at 12:22, Pawan Alwandi wrote: > Hello folks, > > Does anyone have any idea whats going on here?

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-14 Thread Atin Mukherjee
-9ceb-8be6a9f2d073 > state=3 > hostname1=192.168.0.5 > uuid=83e9a0b9-6bd5-483b-8516-d8928805ed95 > state=3 > hostname1=192.168.0.6 > > # gluster --version > glusterfs 3.6.2 built on Jan 21 2015 14:23:44 > > > > On Sat, May 13, 2017 at 6:28 PM, Atin Mukherjee > wr

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-14 Thread Atin Mukherjee
On Sun, 14 May 2017 at 21:43, Atin Mukherjee wrote: > Allright, I see that you haven't bumped up the op-version. Can you please > execute: > > gluster v set all cluster.op-version 30101 and then restart glusterd on > all the nodes and check the brick status? > s/30101/
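A sketch of that sequence (the exact op-version number depends on the installed release, e.g. 31000 for 3.10.0, so verify it first):
    # bump the cluster op-version, then restart glusterd on every node
    gluster volume set all cluster.op-version 31000
    systemctl restart glusterd
    # check that the bricks come back online
    gluster volume status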

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-15 Thread Atin Mukherjee
ersion 31001 > volume set: failed: Required op_version (31001) is not supported > Yes, you should, given the 3.6 version is EOLed. > > > > On Mon, May 15, 2017 at 3:32 AM, Atin Mukherjee > wrote: > >> On Sun, 14 May 2017 at 21:43, Atin Mukherjee wrote: >> >>

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-18 Thread Atin Mukherjee
mgmt_v3_unlock+0x4c3) > [0x7fd6bdcef1b3] ) 0-management: Lock for vol shared not held > [2017-05-17 06:48:35.613432] W [MSGID: 106118] > [glusterd-handler.c:5223:__glusterd_peer_rpc_notify] 0-management: Lock not > released for shared > [2017-05-17 06:48:35.614317] E [MSGID: 106170] >

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-18 Thread Atin Mukherjee
On Thu, 18 May 2017 at 23:40, Atin Mukherjee wrote: > On Wed, 17 May 2017 at 12:47, Pawan Alwandi wrote: > >> Hello Atin, >> >> I realized that these >> http://gluster.readthedocs.io/en/latest/Upgrade-Guide/upgrade_to_3.10/ >> instructions only work for upg

Re: [Gluster-users] Rebalance + VM corruption - current status and request for feedback

2017-05-20 Thread Atin Mukherjee
On Sun, 21 May 2017 at 02:17, Mahdi Adnan wrote: > Thank you so much for your reply. > > Yes, I checked the 310 repository before and it wasn't there, see below; > I think the 3.10.2 bits are still in the 3.10 test repository in the Storage SIG and haven't been pushed to the release mirror yet. Niels can update

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-20 Thread Atin Mukherjee
et us know > if you find any more info in the logs) > > Pawan > > On Thu, May 18, 2017 at 11:45 PM, Atin Mukherjee > wrote: > >> >> On Thu, 18 May 2017 at 23:40, Atin Mukherjee wrote: >> >>> On Wed, 17 May 2017 at 12:47, Pawan Al

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-21 Thread Atin Mukherjee
> Thanks for continued support. I've attached requested files from all 3 > nodes. > > (I think we already verified the UUIDs to be correct, anyway let us know > if you find any more info in the logs) > > Pawan > > On Thu, May 18, 2017 at 11:45 PM, Atin Mukherjee > wro

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-22 Thread Atin Mukherjee
On Mon, May 22, 2017 at 11:35 AM, Pawan Alwandi wrote: > Hello Atin, > > The tar's have the content of `/var/lib/glusterd` too for all 3 nodes, > please check again. > > Thanks > > On Mon, May 22, 2017 at 11:32 AM, Atin Mukherjee > wrote: > >> Pawan, >

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-22 Thread Atin Mukherjee
On Mon, May 22, 2017 at 7:51 PM, Atin Mukherjee wrote: > Sorry Pawan, I did miss the other part of the attachments. So looking from > the glusterd.info file from all the hosts, it looks like host2 and host3 > do not have the correct op-version. Can you please set the op-version as >

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-22 Thread Atin Mukherjee
On Mon, May 22, 2017 at 9:05 PM, Pawan Alwandi wrote: > > On Mon, May 22, 2017 at 8:36 PM, Atin Mukherjee > wrote: > >> >> >> On Mon, May 22, 2017 at 7:51 PM, Atin Mukherjee >> wrote: >> >>> Sorry Pawan, I did miss the other part of the atta

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-24 Thread Atin Mukherjee
Brick 192.168.0.6:/data/exports/shared > Status: Transport endpoint is not connected > > Brick 192.168.0.7:/data/exports/shared > Status: Transport endpoint is not connected > > Any idea whats up here? > > Pawan > > On Mon, May 22, 2017 at 9:42 PM, Atin Mukherjee >

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-25 Thread Atin Mukherjee
2049 Y 2458 > Self-heal Daemon on localhost N/A Y 2463 > NFS Server on 192.168.0.6 2049 Y 2194 > Self-heal Daemon on 192.168.0.6 N/A Y 2199 > NFS Server on 192.168.0.5 2049 Y 2089 >

Re: [Gluster-users] Failed Volume

2017-05-26 Thread Atin Mukherjee
You'd basically have to copy the content of /var/lib/glusterd from fs001 to fs003 without overwriting fs003's node-specific details. Please ensure you don't touch the glusterd.info file and the content of /var/lib/glusterd/peers on fs003; the rest can be copied. Post that, I expect glusterd will come up.
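A minimal sketch of that copy, assuming rsync is available and using the fs001/fs003 hostnames from this thread (review the exclude list before running):
    # run on fs001: copy glusterd state but preserve fs003's own identity and peer list
    rsync -av --exclude=glusterd.info --exclude=peers/ /var/lib/glusterd/ fs003:/var/lib/glusterd/
    # then restart glusterd on fs003
    systemctl restart glusterd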

Re: [Gluster-users] Adding a new replica to the cluster

2017-05-29 Thread Atin Mukherjee
On Mon, May 29, 2017 at 9:52 PM, Merwan Ouddane wrote: > Hello, > > I wanted to play around with gluster and I made a 2 nodes cluster > replicated, then I wanted to add a third replica "on the fly". > > I manage to probe my third server from the cluster, but when I try to add > the new brick to t

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-29 Thread Atin Mukherjee
di wrote: > Sorry for big attachment in previous mail...last 1000 lines of those logs > attached now. > > On Mon, May 29, 2017 at 4:44 PM, Pawan Alwandi wrote: > >> >> >> On Thu, May 25, 2017 at 9:54 PM, Atin Mukherjee >> wrote: >> >>> >>>

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-31 Thread Atin Mukherjee
ing gluster one after another, but still see the same > result. > > > On Tue, May 30, 2017 at 10:40 AM, Atin Mukherjee > wrote: > >> Pawan - I couldn't reach to any conclusive analysis so far. But, looking >> at the client (nfs) & glusterd log files, it d

Re: [Gluster-users] [Gluster-devel] Empty info file preventing glusterd from starting

2017-05-31 Thread Atin Mukherjee
We are going to start working on this patch. Gaurav (CCed) will rebase the patch and put it up for review. On Wed, May 31, 2017 at 4:28 PM, ABHISHEK PALIWAL wrote: > So is there anyone working on it to fix this issue, either by this patch > or some other way? If yes then please provide the time pl

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-05-31 Thread Atin Mukherjee
log file (starting from last restart) along with the brick logs collected from all the nodes. > Pawan > > On Wed, May 31, 2017 at 2:10 PM, Atin Mukherjee > wrote: > >> Pawan, >> >> I'd need the sosreport from all the nodes to debug and figure out what

Re: [Gluster-users] Adding a new replica to the cluster

2017-05-31 Thread Atin Mukherjee
gluster volume info, gluster peer status from all the nodes. > > > Merwan > > -- > *From:* Atin Mukherjee > *Sent:* Tuesday, May 30, 2017 04:51 > *To:* Merwan Ouddane > *Cc:* gluster-users@gluster.org > *Subject:* Re: [Gluster-users] Adding

Re: [Gluster-users] "Another Transaction is in progres..."

2017-05-31 Thread Atin Mukherjee
On Wed, May 31, 2017 at 7:31 PM, Erekle Magradze < erekle.magra...@recogizer.de> wrote: > Hello, > I think I have the same issues, I am using gluster as the VM image storage > under oVirt, would it make sense to restart gluster services on all hosts > (of course with turned off VMs) > If two tran

Re: [Gluster-users] Adding a new replica to the cluster

2017-06-04 Thread Atin Mukherjee
" for each server > > > Thank you, > > Merwan > -- > *From:* Atin Mukherjee > *Sent:* Wednesday, May 31, 2017 12:20:49 > > *To:* Merwan Ouddane > *Cc:* gluster-users@gluster.org > *Subject:* Re: [Gluster-users] Adding a new replica to the cl

Re: [Gluster-users] Adding a new replica to the cluster

2017-06-06 Thread Atin Mukherjee
request" ? > (from the log i send in the first place) > > Merwan > > ------ > *De :* Atin Mukherjee > *Envoyé :* dimanche 4 juin 2017 15:31:34 > *À :* Merwan Ouddane > *Cc :* gluster-users > > *Objet :* Re: [Gluster-users] Adding a new

Re: [Gluster-users] substitution of two faulty servers

2017-06-09 Thread Atin Mukherjee
Go for replace brick. On Fri, 9 Jun 2017 at 19:29, Erekle Magradze wrote: > Hello, > > I have glusterfs 3.8.9, integrated with oVirt. > > glusterfs is running on 6 servers, I have one brick from each server for > oVirt virtdata volume (used for VM images) > > and 2 bricks from each servers (12 b

Re: [Gluster-users] substitution of two faulty servers

2017-06-09 Thread Atin Mukherjee
an replace one brick at a time irrespective of the replica count. > or, if there is other way please drop me a hint or link > On 06/09/2017 05:44 PM, Atin Mukherjee wrote: > > Go for replace brick. > > On Fri, 9 Jun 2017 at 19:29, Erekle Magradze > wrote: > >> Hello
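For reference, the general shape of a single-brick replacement (volume name, hosts and brick paths are placeholders; recent releases only accept "commit force"):
    # move one brick from the faulty server to the new one
    gluster volume replace-brick <volname> <old-host>:/path/to/brick <new-host>:/path/to/brick commit force
    # watch self-heal repopulate the new brick
    gluster volume heal <volname> info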

Re: [Gluster-users] How to remove dead peer, osrry urgent again :(

2017-06-11 Thread Atin Mukherjee
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson wrote: > On 11/06/2017 10:46 AM, WK wrote: > > I thought you had removed vna as defective and then ADDED in vnh as > > the replacement? > > > > Why is vna still there? > > Because I *can't* remove it. It died, was unable to be brought up. The > glus

Re: [Gluster-users] How to remove dead peer, osrry urgent again :(

2017-06-11 Thread Atin Mukherjee
On Sun, 11 Jun 2017 at 16:03, Lindsay Mathieson wrote: > On 11/06/2017 6:42 PM, Atin Mukherjee wrote: > > If the dead server doesn't host any volumes (bricks of volumes to be > > specific) then you can actually remove the uuid entry from > > /var/lib/glusterd fr

Re: [Gluster-users] How to remove dead peer, osrry urgent again :(

2017-06-11 Thread Atin Mukherjee
On Sun, 11 Jun 2017 at 16:26, Lindsay Mathieson wrote: > On 11/06/2017 6:42 PM, Atin Mukherjee wrote: > > If the dead server doesn't host any volumes (bricks of volumes to be > specific) then you can actually remove the uuid entry from > /var/lib/glusterd from other nodes

Re: [Gluster-users] How to remove dead peer, osrry urgent again :(

2017-06-11 Thread Atin Mukherjee
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > > > On 11 Jun 2017 1:00 PM, "Atin Mukherjee" wrote: > > Yes. And please ensure you do this after bringing down all the glusterd > instances and then once the
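A sketch of the manual clean-up being described, assuming the dead peer hosts no bricks and its UUID is known (repeat on every surviving node):
    # with glusterd stopped on all remaining nodes
    systemctl stop glusterd
    rm /var/lib/glusterd/peers/<dead-peer-uuid>
    systemctl start glusterd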

Re: [Gluster-users] Gluster deamon fails to start

2017-06-12 Thread Atin Mukherjee
glusterd failed to resolve the addresses of the bricks. This could happen either because glusterd was trying to resolve the address of the brick before the N/W interface was up, or because of a change in the IP. If it's the latter, then a point to note is that it's recommended that the cluster should be formed using

Re: [Gluster-users] Gluster deamon fails to start

2017-06-12 Thread Atin Mukherjee
/w interface for gluster management interface by peer probing the same node with multiple addresses. But you'd still need to ensure that glusterd can communicate to the IP with which brick addresses are defined. > > Sent using OWA for iPhone > ------ >
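A minimal sketch of attaching a second address to an already-probed peer (the hostname is a placeholder); the extra name should then appear under "Other names" in peer status:
    # probe the same node again using its management-network name
    gluster peer probe <node1-mgmt-hostname>
    gluster peer status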

Re: [Gluster-users] Gluster deamon fails to start

2017-06-12 Thread Atin Mukherjee
hone > -- > *From:* Sahina Bose > *Sent:* Monday, June 12, 2017 6:49:29 AM > *To:* Atin Mukherjee > *Cc:* Langley, Robert; SATHEESARAN; Kasturi Narra; > gluster-users@gluster.org > > *Subject:* Re: [Gluster-users] Gluster deamon fails to start >

Re: [Gluster-users] How to remove dead peer, osrry urgent again :(

2017-06-12 Thread Atin Mukherjee
On Tue, 13 Jun 2017 at 06:39, Lindsay Mathieson wrote: > > On 13 June 2017 at 02:56, Pranith Kumar Karampuri > wrote: > >> We can also do "gluster peer detach force right? > > > > Just to be sure I setup a test 3 node vm gluster cluster :) then shut down > one of the nodes and tried to remove i

Re: [Gluster-users] gluster peer probe failing

2017-06-15 Thread Atin Mukherjee
https://review.gluster.org/#/c/17494/ will fix it, and the next update of 3.10 should have this fix. If sysctl net.ipv4.ip_local_reserved_ports has any value > short int range then this would be a problem with the current version. Would you be able to reset the reserved ports temporarily to get this go
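A sketch of the temporary workaround (requires root; note the original value so it can be restored afterwards):
    # inspect, then temporarily clear, the reserved port list
    sysctl net.ipv4.ip_local_reserved_ports
    sysctl -w net.ipv4.ip_local_reserved_ports=""
    # retry the peer probe, then restore the original value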

Re: [Gluster-users] gluster peer probe failing

2017-06-15 Thread Atin Mukherjee
ng the reserved ports are already in the short int range, so maybe I > misunderstood something? or is it a different issue? > > > > *From:* Atin Mukherjee [mailto:amukh...@redhat.com] > *Sent:* Thursday, June 15, 2017 10:56 AM > *To:* Guy Cukierman > *Cc:* gluster-user

Re: [Gluster-users] About the maintenance time

2017-06-15 Thread Atin Mukherjee
On Fri, Jun 16, 2017 at 9:54 AM, Ravishankar N wrote: > On 06/16/2017 09:07 AM, te-yamau...@usen.co.jp wrote: > >> I currently use it in the replica configuration of 3.10.2. >> The brick process may not start when restarting the storage server. Also, >> when using gnfs, the I / O may hang up and

Re: [Gluster-users] peer probe failures

2017-06-15 Thread Atin Mukherjee
can you please share the glusterd log file? On Thu, Jun 15, 2017 at 5:18 PM, Guy Cukierman wrote: > Hi, > > I’m having a similar issue, were you able to solve it? > > Thanks. > > > > > > > > Hey all, > > > > I've got a strange problem going on here. I've installed glusterfs-server > > on ubuntu

Re: [Gluster-users] different brick using the same port?

2017-06-19 Thread Atin Mukherjee
On Sun, Jun 18, 2017 at 1:40 PM, Yong Zhang wrote: > Hi, all > > > > I found two of my bricks from different volumes are using the same port > 49154 on the same glusterfs server node, is this normal? > No it's not. Can you please help me with the following information: 1. gluster --version 2.

Re: [Gluster-users] different brick using the same port?

2017-06-19 Thread Atin Mukherjee
On Mon, Jun 19, 2017 at 7:01 PM, Joe Julian wrote: > Isn't this just brick multiplexing? > I initially thought about it but with brick multiplexing the pid should be the same which is not the case here. > > > On June 19, 2017 5:55:54 AM PDT, Atin Mukherjee > wrote: >

Re: [Gluster-users] Unable to get transaction opinfo for transaction ID gluster version 3.6

2017-06-20 Thread Atin Mukherjee
The glusterfs-3.6 version is no longer active and not supported by the community. You should upgrade to the latest, which is 3.10 (long-term maintenance). If you want to explore more, then you can choose to upgrade to 3.11 (short-term maintenance), which has more features but has a life of only 3 months. On T

Re: [Gluster-users] gluster peer probe failing

2017-06-22 Thread Atin Mukherjee
>1. Any time estimation on to when this fix would be released? >2. Any recommended workaround? > > > > Best, > > Guy. > > > > *From:* Gaurav Yadav [mailto:gya...@redhat.com] > *Sent:* Tuesday, June 20, 2017 9:46 AM > *To:* Guy Cukierman > *Cc:

Re: [Gluster-users] remounting volumes, is there an easier way

2017-06-22 Thread Atin Mukherjee
On Tue, Jun 20, 2017 at 7:32 PM, Ludwig Gamache wrote: > All, > > Over the week-end, one of my volumes became unavailable. All clients could > not access their mount points. On some of the clients, I had user processes > that were using these mount points. So, I could not do a umount/mount without >

Re: [Gluster-users] Gluster failure due to "0-management: Lock not released for "

2017-06-22 Thread Atin Mukherjee
Could you attach glusterd.log and cmd_history.log files from all the nodes? On Wed, Jun 21, 2017 at 11:40 PM, Victor Nomura wrote: > Hi All, > > > > I’m fairly new to Gluster (3.10.3) and got it going for a couple of months > now but suddenly after a power failure in our building it all came cra
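On a default install these usually live under /var/log/glusterfs/ (assumed defaults; the glusterd log name varies by release):
    /var/log/glusterfs/glusterd.log          # or etc-glusterfs-glusterd.vol.log on older releases
    /var/log/glusterfs/cmd_history.log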

Re: [Gluster-users] Volume options appear twice

2017-06-23 Thread Atin Mukherjee
We have a few volume options which are defined for more than one translator, so this is known. The only issue is that changing them to unique names now would cause a backward-compatibility problem. With Gluster 4.0 we'll try to address this problem. On Fri, 23 Jun 2017 at 01:32, Renaud Fortier w

Re: [Gluster-users] Gluster failure due to "0-management: Lock not released for "

2017-06-27 Thread Atin Mukherjee
t the N/W layer and rectify the problems. On Thu, Jun 22, 2017 at 9:30 PM, Atin Mukherjee wrote: > Could you attach glusterd.log and cmd_history.log files from all the nodes? > > On Wed, Jun 21, 2017 at 11:40 PM, Victor Nomura wrote: > >> Hi All, >> >> >> >

Re: [Gluster-users] Gluster volume not mounted

2017-06-27 Thread Atin Mukherjee
The mount log file of the volume would help in debugging the actual cause. On Tue, Jun 27, 2017 at 6:33 PM, Joel Diaz wrote: > Good morning Gluster users, > > I'm very new to the Gluster file system. My apologies if this is not the > correct way to seek assistance. However, I would appreciate so
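For a FUSE mount the client log is normally named after the mount point with slashes replaced by hyphens (assumed default log directory):
    # e.g. for a volume mounted at /mnt/gvol
    less /var/log/glusterfs/mnt-gvol.log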

Re: [Gluster-users] Gluster failure due to "0-management: Lock not released for "

2017-06-30 Thread Atin Mukherjee
> Regards, > > > > Victor Nomura > > > > *From:* Atin Mukherjee [mailto:amukh...@redhat.com] > *Sent:* June-27-17 12:29 AM > > > *To:* Victor Nomura > *Cc:* gluster-users > > *Subject:* Re: [Gluster-users] Gluster failure due to "0-management: Lock

Re: [Gluster-users] Some bricks are offline after restart, how to bring them online gracefully?

2017-06-30 Thread Atin Mukherjee
On Fri, Jun 30, 2017 at 1:31 AM, Jan wrote: > Hi all, > > Gluster and Ganesha are amazing. Thank you for this great work! > > I’m struggling with one issue and I think that you might be able to help > me. > > I spent some time by playing with Gluster and Ganesha and after I gain > some experience

Re: [Gluster-users] Failure while upgrading gluster to 3.10.1

2017-07-03 Thread Atin Mukherjee
fwiw, only the rpm is available). Is it possible that glusterfs-gnfs be > made available for Debian too? > Kaleb - can you please help answer this query? > > Thanks, > Pawan > > > On Wed, May 31, 2017 at 5:26 PM, Atin Mukherjee > wrote: > >> >>

Re: [Gluster-users] [ovirt-users] Gluster issue with /var/lib/glusterd/peers/ file

2017-07-03 Thread Atin Mukherjee
Please attach glusterd & cmd_history log files from all the nodes. On Mon, Jul 3, 2017 at 2:55 PM, Sahina Bose wrote: > > > On Sun, Jul 2, 2017 at 5:38 AM, Mike DePaulo wrote: > >> Hi everyone, >> >> I have ovirt 4.1.1/4.1.2 running on 3 hosts with a gluster hosted engine. >> >> I was working o

Re: [Gluster-users] Gluster failure due to "0-management: Lock not released for "

2017-07-04 Thread Atin Mukherjee
in > order to perform any gluster commands on any node. > > > > *From:* Victor Nomura [mailto:vic...@mezine.com] > *Sent:* July-04-17 9:41 AM > *To:* 'Atin Mukherjee' > *Cc:* 'gluster-users' > *Subject:* RE: [Gluster-users] Gluster failure due to "0-

Re: [Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Atin Mukherjee
On Wed, Jul 5, 2017 at 8:32 PM, Sahina Bose wrote: > > > On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi > wrote: > >> >> >> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose wrote: >> >>> >>> ... then the commands I need to run would be: gluster volume reset-brick export >>

Re: [Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Atin Mukherjee
And what does glusterd log indicate for these failures? On Wed, Jul 5, 2017 at 8:43 PM, Gianluca Cecchi wrote: > > > On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose wrote: > >> >> >> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi < >> gianluca.cec...@gmail.com> wrote: >> >>> >>> >>> On Wed, Jul 5,

Re: [Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Atin Mukherjee
e log the failure in debug mode. Would you be able to restart the glusterd service with debug log mode, rerun this test, and share the log? On Wed, Jul 5, 2017 at 9:12 PM, Gianluca Cecchi wrote: > > > On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee > wrote: > >> And what does

Re: [Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

2017-07-05 Thread Atin Mukherjee
On Thu, Jul 6, 2017 at 3:47 AM, Gianluca Cecchi wrote: > On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee > wrote: > >> OK, so the log just hints to the following: >> >> [2017-07-05 15:04:07.178204] E [MSGID: 106123] >> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commi

Re: [Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

2017-07-06 Thread Atin Mukherjee
On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi wrote: > On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi > wrote: > >> >> Eventually I can destroy and recreate this "export" volume again with the >> old names (ovirt0N.localdomain.local) if you give me the sequence of >> commands, then enable deb

Re: [Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

2017-07-07 Thread Atin Mukherjee
You'd need to allow me some more time to dig into the logs. I'll try to get back on this by Monday. On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi > wrote: > >> On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee >

Re: [Gluster-users] op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

2017-07-10 Thread Atin Mukherjee
On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi > wrote: > >> On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee >> wrote: >> >>> >>> >>> On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi &

Re: [Gluster-users] Remove and re-add bricks/peers

2017-07-17 Thread Atin Mukherjee
That's the way. However, I'd like to highlight that you're running a very old Gluster release. We are currently on the 3.11 release, which is STM, and the long-term support release is 3.10. You should consider upgrading to at least 3.10. On Mon, Jul 17, 2017 at 3:25 PM, Tom Cannaerts - INTRACTO < tom.cann

Re: [Gluster-users] Gluster set brick online and start sync.

2017-07-17 Thread Atin Mukherjee
You'd need to attach the brick logs of gl0:/mnt/brick0/gm0 & gl0:/mnt/brick1/gm0. On Mon, Jul 17, 2017 at 5:56 PM, Alexey Zakurin wrote: > Hello everybody, > > Please, help me to fix a problem. > > I have a distributed-replicated volume between two servers. On each server > I have 2 RAID-10 arra
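Those brick logs normally sit under /var/log/glusterfs/bricks/, named after the brick path with slashes replaced by hyphens (assumed default layout):
    /var/log/glusterfs/bricks/mnt-brick0-gm0.log
    /var/log/glusterfs/bricks/mnt-brick1-gm0.log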

Re: [Gluster-users] Remove and re-add bricks/peers

2017-07-18 Thread Atin Mukherjee
ng to the re-adding question, what steps do I need to do to clear > the config of the failed peers? Do I just wipe the data directory of the > volume, or do I need to clear some other config file/folders as well? > > Tom > > > Op ma 17 jul. 2017 om 16:39 schreef Atin Mukherjee : &

Re: [Gluster-users] glusterd-locks.c:572:glusterd_mgmt_v3_lock

2017-07-20 Thread Atin Mukherjee
Please share the cmd_history.log file from all the storage nodes. On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara wrote: > Hi list, > > recently I've noted a strange behaviour of my gluster storage, sometimes > while executing a simple command like "gluster volume status > vm-images-repo" as a re

Re: [Gluster-users] glusterd-locks.c:572:glusterd_mgmt_v3_lock

2017-07-20 Thread Atin Mukherjee
d for monitoring it's suggested to be run from only one node. On Thu, Jul 20, 2017 at 3:54 PM, Paolo Margara wrote: > In attachment the requested logs for all the three nodes. > > thanks, > > Paolo > > Il 20/07/2017 11:38, Atin Mukherjee ha scritto: > > Please share

Re: [Gluster-users] vol status detail - times out?

2017-07-24 Thread Atin Mukherjee
Yes, it could, as depending on the number of bricks there might be too many brick ops involved. This is the reason we introduced the --timeout option in the CLI, which can be used to set a larger timeout value. However, this fix is available from release-3.9 onwards. On Mon, Jul 24, 2017 at 3:54 PM, lejeczek w
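A sketch of using that option (exact syntax may differ by release; check 'gluster --help' first):
    # allow up to 10 minutes for the brick ops instead of the default CLI timeout
    gluster --timeout=600 volume status <volname> detail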

Re: [Gluster-users] glusterd-locks.c:572:glusterd_mgmt_v3_lock

2017-07-26 Thread Atin Mukherjee
: > > OK, on my nagios instance I've disabled gluster status check on all nodes > except on one, I'll check if this is enough. > > Thanks, > > Paolo > > Il 20/07/2017 13:50, Atin Mukherjee ha scritto: > > So from the cmd_history.logs across a

Re: [Gluster-users] glusterd-locks.c:572:glusterd_mgmt_v3_lock

2017-07-28 Thread Atin Mukherjee
> three nodes are executed by the supervdsm daemon and not only by the SPM > node. Could this be the root cause of this problem? > Indeed. > > Greetings, > > Paolo > > PS: could you suggest a better method than attachment for sharing log > files? > > Il 26/07/20

Re: [Gluster-users] filesystem failure on one peer makes volume read only?

2017-08-01 Thread Atin Mukherjee
Could you share the output of 'gluster volume info <volname>'? On Tue, Aug 1, 2017 at 5:03 PM, lejeczek wrote: > .. is this default/desired behaviour? > > And is this configurable/controllable behaviour? > I'm thinking - it would be nice not to have the whole vol go read-only (three > peers in cluster) but a

Re: [Gluster-users] connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket

2017-08-01 Thread Atin Mukherjee
This means shd client is not able to establish the connection with the brick on port 49155. Now this could happen if glusterd has ended up providing a stale port back which is not what brick is listening to. If you had killed any brick process using sigkill signal instead of sigterm this is expecte
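One way to spot the stale-port situation being described, assuming ss is available (volume name is a placeholder):
    # port glusterd advertises for the brick
    gluster volume status <volname>
    # ports the brick processes are actually listening on
    ss -ltnp | grep glusterfsd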

Re: [Gluster-users] "other names" - how to clean/get rid of ?

2017-08-01 Thread Atin Mukherjee
Are you referring to the 'other names' field of the peer status output? If so, then a peerinfo entry having other names populated means it might have multiple n/w interfaces, or the reverse address resolution is picking this name. But why are you worried about this part? On Tue, 1 Aug 2017 at 23:24, peljasz

Re: [Gluster-users] glusterd daemon - restart

2017-08-02 Thread Atin Mukherjee
On Wed, 2 Aug 2017 at 19:27, Mark Connor wrote: > Sorry, I meant Red Hat's Gluster Storage Server 3.2 which is latest and > greatest. > For RHGS-related questions/issues please get in touch with Red Hat support. This forum is for the community Gluster version. > On Wed, Aug 2, 2017 at 9:28 AM, K

Re: [Gluster-users] Glusterd not working with systemd in redhat 7

2017-08-17 Thread Atin Mukherjee
Change-Id: Icfa7b2652417135530479d0aa4e2a82b0476f710 BUG: 1472267 Signed-off-by: Gaurav Yadav Reviewed-on: https://review.gluster.org/17813 Smoke: Gluster Build System Reviewed-by: Prashanth Pai CentOS-regression: Gluster Build System Reviewed-by: Atin Mukherjee Note : 3.12 release is

Re: [Gluster-users] Glusterd not working with systemd in redhat 7

2017-08-18 Thread Atin Mukherjee
On Fri, Aug 18, 2017 at 12:22 PM, Atin Mukherjee wrote: > You're hitting a race here. By the time glusterd tries to resolve the > address of one of the remote bricks of a particular volume, the n/w > interface is not up by that time. We have fixed this issue in mainline and > 3.

Re: [Gluster-users] Glusterd not working with systemd in redhat 7

2017-08-18 Thread Atin Mukherjee
On Fri, 18 Aug 2017 at 13:45, Raghavendra Talur wrote: > On Fri, Aug 18, 2017 at 1:38 PM, Atin Mukherjee > wrote: > > > > > > On Fri, Aug 18, 2017 at 12:22 PM, Atin Mukherjee > > wrote: > >> > >> You're hitting a race here. By the time glu

Re: [Gluster-users] Glusterd not working with systemd in redhat 7

2017-08-18 Thread Atin Mukherjee
On Fri, Aug 18, 2017 at 1:59 PM, Dmitry Melekhov wrote: > On 18.08.2017 12:21, Atin Mukherjee wrote: > > > On Fri, 18 Aug 2017 at 13:45, Raghavendra Talur wrote: > >> On Fri, Aug 18, 2017 at 1:38 PM, Atin Mukherjee >> wrote: >> > >> > >>

Re: [Gluster-users] Glusterd not working with systemd in redhat 7

2017-08-18 Thread Atin Mukherjee
On Fri, Aug 18, 2017 at 2:01 PM, Niels de Vos wrote: > On Fri, Aug 18, 2017 at 12:22:33PM +0530, Atin Mukherjee wrote: > > You're hitting a race here. By the time glusterd tries to resolve the > > address of one of the remote bricks of a particular volume, the n/w > > i

Re: [Gluster-users] Glusterd not working with systemd in redhat 7

2017-08-21 Thread Atin Mukherjee
On Mon, Aug 21, 2017 at 2:49 AM, Cesar da Silva wrote: > Hi! > I am having the same issue but I am running Ubuntu v16.04. > It does not mount during boot, but works if I mount it manually. I am > running the Gluster server on the same machines (3 machines). > Here is the /etc/fstab file > > /dev/sdb1 /
