Re: [Gluster-devel] Regression Failure: ./tests/basic/quota.t

2015-07-01 Thread Sachin Pandit
- Original Message - > From: "Vijaikumar M" > To: "Kotresh Hiremath Ravishankar" , "Gluster Devel" > > Cc: "Sachin Pandit" > Sent: Thursday, July 2, 2015 12:01:03 PM > Subject: Re: Regression Failure: ./tests/basic/quota.t > > We look into this issue > > Thanks, > Vijay > > On Thursd

Re: [Gluster-devel] Regression Failure: ./tests/basic/quota.t

2015-07-01 Thread Vijaikumar M
We will look into this issue. Thanks, Vijay. On Thursday 02 July 2015 11:46 AM, Kotresh Hiremath Ravishankar wrote: Hi, I see a quota.t regression failure for the following. The changes are related to example programs in libgfchangelog. http://build.gluster.org/job/rackspace-regression-2GB-triggered/1

[Gluster-devel] Regression Failure: ./tests/basic/quota.t

2015-07-01 Thread Kotresh Hiremath Ravishankar
Hi, I see a quota.t regression failure for the following. The changes are related to example programs in libgfchangelog. http://build.gluster.org/job/rackspace-regression-2GB-triggered/11785/consoleFull Could someone from the quota team take a look at it? Thanks and Regards, Kotresh H R

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Vijaikumar M
On Thursday 02 July 2015 11:27 AM, Krishnan Parthasarathi wrote: Yes. The PROC_MAX is the maximum no. of 'worker' threads that would be spawned for a given syncenv. - Original Message - - Original Message - From: "Krishnan Parthasarathi" To: "Pranith Kumar Karampuri" Cc:

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Raghavendra Gowdappa
- Original Message - > From: "Krishnan Parthasarathi" > To: "Raghavendra Gowdappa" > Cc: "Pranith Kumar Karampuri" , "Vijay Bellur" > , "Vijaikumar M" > , "Gluster Devel" , > "Nagaprasad Sathyanarayana" > > Sent: Thursday, July 2, 2015 11:27:34 AM > Subject: Re: Huge memory consumpti

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Krishnan Parthasarathi
Yes. The PROC_MAX is the maximum no. of 'worker' threads that would be spawned for a given syncenv. - Original Message - > > > - Original Message - > > From: "Krishnan Parthasarathi" > > To: "Pranith Kumar Karampuri" > > Cc: "Vijay Bellur" , "Vijaikumar M" > > , "Gluster Devel

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Raghavendra Gowdappa
- Original Message - > From: "Krishnan Parthasarathi" > To: "Pranith Kumar Karampuri" > Cc: "Vijay Bellur" , "Vijaikumar M" > , "Gluster Devel" > , "Raghavendra Gowdappa" , > "Nagaprasad Sathyanarayana" > > Sent: Thursday, July 2, 2015 10:54:44 AM > Subject: Re: Huge memory consumpti

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Krishnan Parthasarathi
Yes, we could take the synctask size as an argument to synctask_create. The increase in synctask threads is not really a problem; it can't grow beyond 16 (SYNCENV_PROC_MAX). - Original Message - > > > On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote: > > > > - Original Message --
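The capped-worker behaviour described above (tasks queue once all SYNCENV_PROC_MAX workers are busy) can be illustrated with a bounded thread pool. This is a Python sketch of the idea, not the actual syncenv C implementation; the constant 16 mirrors the SYNCENV_PROC_MAX value quoted in the thread.

```python
from concurrent.futures import ThreadPoolExecutor

SYNCENV_PROC_MAX = 16  # cap on worker threads, as quoted in the thread

def submit_synctasks(tasks):
    # With more tasks than workers, excess tasks wait in the executor's
    # queue rather than spawning additional threads, mirroring how a
    # syncenv queues synctasks once all workers are busy.
    with ThreadPoolExecutor(max_workers=SYNCENV_PROC_MAX) as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]

# 100 queued tasks are served by at most 16 threads.
results = submit_synctasks([lambda i=i: i * i for i in range(100)])
```

The point of the cap is that memory and thread count stay bounded no matter how many tasks are submitted; only the queue grows.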

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Krishnan Parthasarathi
> > > > A port assigned by Glusterd for a brick is found to be already in use by > > the brick. Were there any recent changes in Glusterd that could cause this? > > > > Or is it a test infra problem? This issue is likely caused by http://review.gluster.org/11039 This patch changes the port allocati
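The clash described above (a port Glusterd hands to a brick turning out to be occupied) comes down to whether anything can still bind the port. A minimal sketch of such a check, in Python rather than Glusterd's C code, with `port_is_free` being an illustrative helper, not a Gluster API:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    # Binding fails with EADDRINUSE when another process (e.g. a brick
    # still holding its previously assigned port) already owns it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

A check like this is inherently racy (the port can be taken between the check and the real bind), which is why daemons usually just attempt the bind and handle the failure.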

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Pranith Kumar Karampuri
On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote: - Original Message - On Wednesday 01 July 2015 08:41 AM, Vijaikumar M wrote: Hi, The new marker xlator uses syncop framework to update quota-size in the background, it uses one synctask per write FOP. If there are 100 parallel wr

Re: [Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Anoop C S
Same here. git pull from r.g.o failed with the following error. Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. --Anoop C S. On 07/02/2015 09:53 AM, Anuradha Talur wrote: > Hi, > > I'm u

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Krishnan Parthasarathi
- Original Message - > On Wednesday 01 July 2015 08:41 AM, Vijaikumar M wrote: > > Hi, > > > > The new marker xlator uses syncop framework to update quota-size in the > > background, it uses one synctask per write FOP. > > If there are 100 parallel writes with all different inodes but on

Re: [Gluster-devel] Problems when using different hostnames in a bricks and a peer

2015-07-01 Thread Atin Mukherjee
Which Gluster version are you using? The better peer identification feature (available from 3.6 onwards) should tackle this problem, IMO. ~Atin On 07/02/2015 10:05 AM, Rarylson Freitas wrote: > Hi, > > Recently, my company needed to change our hostnames used in the Gluster > Pool. > > In a first moment,

Re: [Gluster-devel] Lock migration as a part of rebalance

2015-07-01 Thread Raghavendra G
One solution I can think of is to spread the responsibility for lock migration between the client and the rebalance process. A rough algorithm is outlined below: 1. We should have a static identifier for the client process (something like a process UUID of the mount process - let's call it client-uuid) in
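The "static identifier" idea in step 1 is that a lock's ownership must survive the lock's state moving to another brick, so it cannot be keyed on a transport-level connection id. A toy sketch under that assumption; `LockTable`, `grant`, and `migrate_to` are hypothetical names for illustration, not Gluster internals:

```python
import uuid

class LockTable:
    """Toy lock table keyed by (client-uuid, lock owner), so entries can
    be re-associated with their owners after migrating to a new brick."""
    def __init__(self):
        self.locks = {}

    def grant(self, client_uuid, lkowner, byte_range):
        self.locks[(client_uuid, lkowner)] = byte_range

    def migrate_to(self, other):
        # Rebalance copies lock state; the stable client-uuid (unlike a
        # transport-level connection id) survives the move intact.
        other.locks.update(self.locks)
        self.locks.clear()

client_uuid = str(uuid.uuid4())  # generated once per mount process
src, dst = LockTable(), LockTable()
src.grant(client_uuid, "owner-1", (0, 4096))
src.migrate_to(dst)
```

After migration the destination can match incoming lock requests against the same (client-uuid, owner) key, which is the property the proposal relies on.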

Re: [Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Pranith Kumar Karampuri
I get the following error: error: unpack failed: error No space left on device fatal: Unpack error, check server log Pranith On 07/02/2015 09:58 AM, Atin Mukherjee wrote: + Infra, can any one of you just take a look at it? On 07/02/2015 09:53 AM, Anuradha Talur wrote: Hi, I'm unable to send

[Gluster-devel] Problems when using different hostnames in a bricks and a peer

2015-07-01 Thread Rarylson Freitas
Hi, Recently, my company needed to change the hostnames used in our Gluster pool. At first, we had two Gluster nodes called storage1 and storage2. Our volumes used two bricks: storage1:/MYVOLUME and storage2:/MYVOLUME. We put the storage1 and storage2 IPs in the /etc/hosts file of our n

Re: [Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Atin Mukherjee
+ Infra, can any one of you just take a look at it? On 07/02/2015 09:53 AM, Anuradha Talur wrote: > Hi, > > I'm unable to send patches to r.g.o, also not able to login. > I'm getting the following errors respectively: > 1) > Permission denied (publickey). > fatal: Could not read from remote repos

Re: [Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Venky Shankar
Me too. Earlier (in the past week or so) this error used to last for about 15-20 minutes, but today seems to be its day. Venky On Thu, Jul 2, 2015 at 9:53 AM, Anuradha Talur wrote: > Hi, > > I'm unable to send patches to r.g.o, also not able to login. > I'm getting the following errors respectivel

[Gluster-devel] Unable to send patches to review.gluster.org

2015-07-01 Thread Anuradha Talur
Hi, I'm unable to send patches to r.g.o, also not able to login. I'm getting the following errors respectively: 1) Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. 2) Internal server error

Re: [Gluster-devel] Progress on adding support for SEEK_DATA and SEEK_HOLE

2015-07-01 Thread Niels de Vos
On Wed, Jul 01, 2015 at 07:15:12PM +0200, Xavier Hernandez wrote: > On 07/01/2015 08:53 AM, Niels de Vos wrote: > >On Tue, Jun 30, 2015 at 11:48:20PM +0530, Ravishankar N wrote: > >> > >> > >>On 06/22/2015 03:22 PM, Ravishankar N wrote: > >>> > >>> > >>>On 06/22/2015 01:41 PM, Miklos Szeredi wrote:

Re: [Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Vijay Bellur
On Wednesday 01 July 2015 08:41 AM, Vijaikumar M wrote: Hi, The new marker xlator uses syncop framework to update quota-size in the background, it uses one synctask per write FOP. If there are 100 parallel writes with all different inodes but on the same directory '/dir', there will be ~100 txn

Re: [Gluster-devel] Progress on adding support for SEEK_DATA and SEEK_HOLE

2015-07-01 Thread Xavier Hernandez
On 07/01/2015 08:53 AM, Niels de Vos wrote: On Tue, Jun 30, 2015 at 11:48:20PM +0530, Ravishankar N wrote: On 06/22/2015 03:22 PM, Ravishankar N wrote: On 06/22/2015 01:41 PM, Miklos Szeredi wrote: On Sun, Jun 21, 2015 at 6:20 PM, Niels de Vos wrote: Hi, it seems that there could be a r
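The SEEK_DATA/SEEK_HOLE semantics under discussion can be exercised from userspace. A minimal sketch (Linux-only, Python 3.3+); note that the offsets returned depend on whether the underlying filesystem actually tracks holes — a filesystem without sparse-file support reports the whole file as data:

```python
import os
import tempfile

# Build a sparse file: a 1 MiB hole followed by a short data extent.
fd, path = tempfile.mkstemp()
os.lseek(fd, 1024 * 1024, os.SEEK_SET)
os.write(fd, b"payload")

# SEEK_DATA / SEEK_HOLE return the next data byte / hole at or after
# the given offset; on a hole-aware filesystem data starts at 1 MiB
# and the first hole is at offset 0.
data_off = os.lseek(fd, 0, os.SEEK_DATA)
hole_off = os.lseek(fd, 0, os.SEEK_HOLE)
os.close(fd)
os.unlink(path)
```

This fallback behaviour (everything-is-data, hole only at EOF) is exactly what makes a generic server-side implementation attractive: callers get correct, if pessimistic, answers either way.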

[Gluster-devel] Huge memory consumption with quota-marker

2015-07-01 Thread Vijaikumar M
Hi, The new marker xlator uses the syncop framework to update quota-size in the background; it uses one synctask per write FOP. If there are 100 parallel writes with all different inodes but on the same directory '/dir', there will be ~100 txns waiting in the queue to acquire a lock on its parent i.
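The memory blow-up described above comes from queueing one transaction per write while they all serialize on the parent's lock. One standard way out is to coalesce pending size deltas per directory so at most one update is outstanding at a time. A sketch of that idea, assuming a simplified accounting model, not the marker xlator's actual code:

```python
import threading
from collections import defaultdict

class QuotaAccountant:
    """Coalesce many per-write size deltas into one pending update per
    directory, instead of queueing one synctask/txn per write FOP."""
    def __init__(self):
        self.lock = threading.Lock()
        self.pending = defaultdict(int)   # dir -> accumulated delta
        self.size = defaultdict(int)      # dir -> committed quota size

    def add_delta(self, directory, nbytes):
        # O(1) under a short critical section; no txn is queued here.
        with self.lock:
            self.pending[directory] += nbytes

    def flush(self, directory):
        # A single background txn applies the whole accumulated delta.
        with self.lock:
            delta = self.pending.pop(directory, 0)
            self.size[directory] += delta

acct = QuotaAccountant()
for _ in range(100):               # 100 parallel writes on the same dir
    acct.add_delta("/dir", 4096)
acct.flush("/dir")
```

With coalescing, 100 writes to '/dir' cost one flush transaction instead of ~100 queued ones, which bounds both memory and lock contention.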

[Gluster-devel] Minutes of today's Gluster Community Meeting (20150701)

2015-07-01 Thread Kaleb S. KEITHLEY
The agenda for today's meeting is available from https://public.pad.fsfe.org/p/gluster-community-meetings Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-07-01/gluster-meeting.2015-07-01-12.05.html Minutes (text): http://meetbot.fedoraproject.org/gluster-meeting/2015-07-

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Raghavendra Talur
On Jul 1, 2015 18:42, "Raghavendra Talur" wrote: > > > > On Wed, Jul 1, 2015 at 3:18 PM, Joseph Fernandes wrote: >> >> Hi All, >> >> TEST 4-5 are failing i.e the following >> >> TEST $CLI volume start $V0 >> TEST $CLI volume attach-tier $V0 replica 2 $H0:$B0/${V0}$CACHE_BRICK_FIRST $H0:$B0/${V0}$

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Raghavendra Talur
On Wed, Jul 1, 2015 at 3:18 PM, Joseph Fernandes wrote: > Hi All, > > TEST 4-5 are failing i.e the following > > TEST $CLI volume start $V0 > TEST $CLI volume attach-tier $V0 replica 2 $H0:$B0/${V0}$CACHE_BRICK_FIRST > $H0:$B0/${V0}$CACHE_BRICK_LAST > > Glusterd Logs say: > [2015-07-01 07:33:25.0

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Joseph Fernandes
Hi All, TEST 4-5 are failing i.e the following TEST $CLI volume start $V0 TEST $CLI volume attach-tier $V0 replica 2 $H0:$B0/${V0}$CACHE_BRICK_FIRST $H0:$B0/${V0}$CACHE_BRICK_LAST Glusterd Logs say: [2015-07-01 07:33:25.053412] I [rpc-clnt.c:965:rpc_clnt_connection_init] 0-management: setting

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Deepak Shetty
On Wed, Jul 1, 2015 at 9:39 AM, Atin Mukherjee wrote: > > > On 05/06/2015 12:31 PM, Humble Devassy Chirammal wrote: > > Hi All, > > > > > > Docker images of GlusterFS 3.6 for Fedora ( 21) and CentOS (7) are now > > available at docker hub ( https://registry.hub.docker.com/u/gluster/ ). > > These

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Atin Mukherjee
On 07/01/2015 03:03 PM, Deepak Shetty wrote: > On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi > wrote: > >> >>> Yeah this followed by glusterd restart should help >>> >>> But frankly, i was hoping that 'rm' the file isn't a neat way to fix this >>> issue >> >> Why is rm not a neat way?

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Deepak Shetty
On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi wrote: > > > Yeah this followed by glusterd restart should help > > > > But frankly, i was hoping that 'rm' the file isn't a neat way to fix this > > issue > > Why is rm not a neat way? Is it because the container deployment tool > needs to

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Pranith Kumar Karampuri
Thanks Joseph! Pranith On 07/01/2015 01:59 PM, Joseph Fernandes wrote: Yep will have a look - Original Message - From: "Pranith Kumar Karampuri" To: "Joseph Fernandes" , "Gluster Devel" Sent: Wednesday, July 1, 2015 1:44:44 PM Subject: spurious failures tests/bugs/tier/bug-1205545-

Re: [Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Joseph Fernandes
Yep will have a look - Original Message - From: "Pranith Kumar Karampuri" To: "Joseph Fernandes" , "Gluster Devel" Sent: Wednesday, July 1, 2015 1:44:44 PM Subject: spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t hi, http://build.gluster.org/job/rackspace-re

[Gluster-devel] spurious failures tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t

2015-07-01 Thread Pranith Kumar Karampuri
hi, http://build.gluster.org/job/rackspace-regression-2GB-triggered/11757/consoleFull has the logs. Could you please look into it. Pranith ___ Gluster-devel mailing list Gluster-devel@gluster.org http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Humble Devassy Chirammal
> Yeah this followed by glusterd restart should help > > But frankly, i was hoping that 'rm' the file isn't a neat way to fix this > issue >>Why is rm not a neat way? Is it because the container deployment tool needs to >>know about gluster internals? But isn't a Dockerfile dealing with details >>

Re: [Gluster-devel] [Gluster-users] Gluster Docker images are available at docker hub

2015-07-01 Thread Anand Nekkunti
On 07/01/2015 11:51 AM, Krishnan Parthasarathi wrote: We do have a way to tackle this situation from the code. Raghavendra Talur will be sending a patch shortly. We should fix it by undoing what the daemon refactoring did, which broke the lazy creation of the UUID for a node. Fixing it elsewhere is jus