- Original Message -
> From: "Vijaikumar M"
> To: "Kotresh Hiremath Ravishankar" , "Gluster Devel"
>
> Cc: "Sachin Pandit"
> Sent: Thursday, July 2, 2015 12:01:03 PM
> Subject: Re: Regression Failure: ./tests/basic/quota.t
>
> We will look into this issue.
>
> Thanks,
> Vijay
>
> On Thursday 02 July 2015 11:46 AM, Kotresh Hiremath Ravishankar wrote:
We will look into this issue.
Thanks,
Vijay
On Thursday 02 July 2015 11:46 AM, Kotresh Hiremath Ravishankar wrote:
Hi,
I see quota.t regression failure for the following. The changes are related to
example programs in libgfchangelog.
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11785/consoleFull
Hi,
I see quota.t regression failure for the following. The changes are related to
example programs in libgfchangelog.
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11785/consoleFull
Could someone from the quota team take a look at it?
Thanks and Regards,
Kotresh H R
_
On Thursday 02 July 2015 11:27 AM, Krishnan Parthasarathi wrote:
Yes. The PROC_MAX is the maximum number of 'worker' threads that would be spawned for a given syncenv.
- Original Message -
From: "Krishnan Parthasarathi"
To: "Pranith Kumar Karampuri"
Cc:
- Original Message -
> From: "Krishnan Parthasarathi"
> To: "Raghavendra Gowdappa"
> Cc: "Pranith Kumar Karampuri" , "Vijay Bellur"
> , "Vijaikumar M"
> , "Gluster Devel" ,
> "Nagaprasad Sathyanarayana"
>
> Sent: Thursday, July 2, 2015 11:27:34 AM
> Subject: Re: Huge memory consumpti
Yes. The PROC_MAX is the maximum number of 'worker' threads that would be spawned for a given syncenv.
- Original Message -
>
>
> - Original Message -
> > From: "Krishnan Parthasarathi"
> > To: "Pranith Kumar Karampuri"
> > Cc: "Vijay Bellur" , "Vijaikumar M"
> > , "Gluster Devel
- Original Message -
> From: "Krishnan Parthasarathi"
> To: "Pranith Kumar Karampuri"
> Cc: "Vijay Bellur" , "Vijaikumar M"
> , "Gluster Devel"
> , "Raghavendra Gowdappa" ,
> "Nagaprasad Sathyanarayana"
>
> Sent: Thursday, July 2, 2015 10:54:44 AM
> Subject: Re: Huge memory consumpti
Yes, we could take the synctask size as an argument for synctask_create.
The increase in synctask threads is not really a problem; their number can't
grow beyond 16 (SYNCENV_PROC_MAX).
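A minimal, self-contained C illustration of the two points above; this is not GlusterFS source (the real synctask_create/syncenv interfaces differ), and PROC_MAX and task_create here are stand-ins used only to show a worker cap plus a caller-chosen per-task stack size (build with cc -pthread):

/* Illustration only, not GlusterFS code: a worker pool capped at a
 * PROC_MAX-style limit, where each task's stack size is supplied by
 * the caller (the idea proposed for synctask_create above). */
#include <pthread.h>
#include <stdio.h>

#define PROC_MAX 16                     /* cf. SYNCENV_PROC_MAX */

static void *task_fn(void *arg)
{
    printf("task %ld running\n", (long)arg);
    return NULL;
}

/* Hypothetical helper: spawn a task with a caller-chosen stack size. */
static int task_create(pthread_t *t, size_t stacksize, long id)
{
    pthread_attr_t attr;
    int ret;

    pthread_attr_init(&attr);
    if (stacksize)
        pthread_attr_setstacksize(&attr, stacksize);
    ret = pthread_create(t, &attr, task_fn, (void *)id);
    pthread_attr_destroy(&attr);
    return ret;
}

int main(void)
{
    pthread_t workers[PROC_MAX];
    long i;

    /* however many tasks are queued, no more than PROC_MAX workers run */
    for (i = 0; i < PROC_MAX; i++)
        task_create(&workers[i], 256 * 1024, i);   /* 256 KiB stacks */
    for (i = 0; i < PROC_MAX; i++)
        pthread_join(workers[i], NULL);
    return 0;
}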
- Original Message -
>
>
> On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote:
> >
> > - Original Message --
> >
> > A port assigned by Glusterd for a brick is found to be in use already by
> > the brick. Any changes in Glusterd recently which can cause this?
> >
> > Or is it a test infra problem?
This issue is likely to be caused by http://review.gluster.org/11039
This patch changes the port allocati
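As a generic illustration of the symptom only (not of what glusterd or the patch above actually does), a small self-contained C check for a TCP port already being held by another process:

/* Generic illustration: detect that a TCP port is already in use by
 * attempting to bind it; EADDRINUSE means another process (here,
 * possibly the brick itself) still holds the port. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int port_in_use(int port)
{
    struct sockaddr_in addr;
    int fd, busy;

    fd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    busy = (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 &&
            errno == EADDRINUSE);
    close(fd);
    return busy;
}

int main(void)
{
    int port = 49152;   /* start of the dynamic range bricks typically use */

    printf("port %d is %s\n", port, port_in_use(port) ? "in use" : "free");
    return 0;
}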
On 07/02/2015 10:40 AM, Krishnan Parthasarathi wrote:
- Original Message -
On Wednesday 01 July 2015 08:41 AM, Vijaikumar M wrote:
Hi,
The new marker xlator uses syncop framework to update quota-size in the
background, it uses one synctask per write FOP.
If there are 100 parallel wr
Same here. git pull from r.g.o failed with the following error.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
--Anoop C S.
On 07/02/2015 09:53 AM, Anuradha Talur wrote:
> Hi,
>
> I'm u
- Original Message -
> On Wednesday 01 July 2015 08:41 AM, Vijaikumar M wrote:
> > Hi,
> >
> > The new marker xlator uses syncop framework to update quota-size in the
> > background, it uses one synctask per write FOP.
> > If there are 100 parallel writes with all different inodes but on
Which Gluster version are you using? The better peer identification feature
(available from 3.6 onwards) should tackle this problem, IMO.
~Atin
On 07/02/2015 10:05 AM, Rarylson Freitas wrote:
> Hi,
>
> Recently, my company needed to change our hostnames used in the Gluster
> Pool.
>
> In a first moment,
One solution I can think of is to have the responsibility for the lock migration
process spread between both the client and the rebalance process. A rough
algorithm is outlined below:
1. We should have a static identifier for the client process (something like the
process-uuid of the mount process - let's call it client-uuid) in
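A minimal sketch of the client-uuid idea from step 1 above, assuming libuuid; the helper name and placement are hypothetical, not the actual proposal:

/* Hypothetical sketch: the mount (client) process generates one
 * client-uuid at startup and attaches it to every lock it takes, so
 * the lock can be re-associated with this client after migration.
 * Build with: cc client_uuid.c -luuid */
#include <stdio.h>
#include <uuid/uuid.h>

static char client_uuid[37];            /* 36 chars + terminating NUL */

static const char *get_client_uuid(void)
{
    if (client_uuid[0] == '\0') {       /* generated once, then static */
        uuid_t u;
        uuid_generate(u);
        uuid_unparse(u, client_uuid);
    }
    return client_uuid;
}

int main(void)
{
    /* the same value would accompany every lock request this client makes */
    printf("client-uuid: %s\n", get_client_uuid());
    return 0;
}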
I get the following error:
error: unpack failed: error No space left on device
fatal: Unpack error, check server log
Pranith
On 07/02/2015 09:58 AM, Atin Mukherjee wrote:
+ Infra, can any one of you just take a look at it?
On 07/02/2015 09:53 AM, Anuradha Talur wrote:
Hi,
I'm unable to send
Hi,
Recently, my company needed to change the hostnames used in our Gluster
pool.
At first, we had two Gluster nodes called storage1 and storage2.
Our volumes used two bricks: storage1:/MYVOLUME and storage2:/MYVOLUME. We
put the storage1 and storage2 IPs in the /etc/hosts file of our n
+ Infra, can any one of you just take a look at it?
On 07/02/2015 09:53 AM, Anuradha Talur wrote:
> Hi,
>
> I'm unable to send patches to r.g.o, also not able to login.
> I'm getting the following errors respectively:
> 1)
> Permission denied (publickey).
> fatal: Could not read from remote repos
Me too. Earlier (over the past week or so) this error used to last for about
15-20 minutes, but today seems to be its day.
Venky
On Thu, Jul 2, 2015 at 9:53 AM, Anuradha Talur wrote:
> Hi,
>
> I'm unable to send patches to r.g.o, also not able to login.
> I'm getting the following errors respectivel
Hi,
I'm unable to send patches to r.g.o and am also not able to log in.
I'm getting the following errors, respectively:
1)
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
2) Internal server error
On Wed, Jul 01, 2015 at 07:15:12PM +0200, Xavier Hernandez wrote:
> On 07/01/2015 08:53 AM, Niels de Vos wrote:
> >On Tue, Jun 30, 2015 at 11:48:20PM +0530, Ravishankar N wrote:
> >>
> >>
> >>On 06/22/2015 03:22 PM, Ravishankar N wrote:
> >>>
> >>>
> >>>On 06/22/2015 01:41 PM, Miklos Szeredi wrote:
On Wednesday 01 July 2015 08:41 AM, Vijaikumar M wrote:
Hi,
The new marker xlator uses syncop framework to update quota-size in the
background, it uses one synctask per write FOP.
If there are 100 parallel writes with all different inodes but on the
same directory '/dir', there will be ~100 txn
On 07/01/2015 08:53 AM, Niels de Vos wrote:
On Tue, Jun 30, 2015 at 11:48:20PM +0530, Ravishankar N wrote:
On 06/22/2015 03:22 PM, Ravishankar N wrote:
On 06/22/2015 01:41 PM, Miklos Szeredi wrote:
On Sun, Jun 21, 2015 at 6:20 PM, Niels de Vos wrote:
Hi,
it seems that there could be a r
Hi,
The new marker xlator uses the syncop framework to update quota-size in the
background; it uses one synctask per write FOP.
If there are 100 parallel writes with all different inodes but on the
same directory '/dir', there will be ~100 txns waiting in the queue to
acquire a lock on its parent inode.
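For readers following along, a self-contained C sketch of the contention being described; this is not the marker xlator's code, just an illustration of many writers on different files serializing on a single per-directory lock while updating an aggregated quota-size (build with cc -pthread):

/* Illustration only, not marker/quota code: 100 writers to different
 * files under the same parent directory all queue up on one lock to
 * update the directory's aggregated quota-size. */
#include <pthread.h>
#include <stdio.h>

#define NWRITERS 100

static pthread_mutex_t parent_dir_lock = PTHREAD_MUTEX_INITIALIZER;
static long long dir_quota_size;        /* aggregated size of '/dir' */

static void *write_fop(void *arg)
{
    long bytes = (long)arg;

    /* different inodes are being written, but every update transaction
     * still contends for the same parent-directory lock */
    pthread_mutex_lock(&parent_dir_lock);
    dir_quota_size += bytes;
    pthread_mutex_unlock(&parent_dir_lock);
    return NULL;
}

int main(void)
{
    pthread_t w[NWRITERS];
    int i;

    for (i = 0; i < NWRITERS; i++)
        pthread_create(&w[i], NULL, write_fop, (void *)4096L);
    for (i = 0; i < NWRITERS; i++)
        pthread_join(w[i], NULL);
    printf("/dir quota-size: %lld bytes\n", dir_quota_size);
    return 0;
}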
The agenda for today's meeting is available from
https://public.pad.fsfe.org/p/gluster-community-meetings
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-07-01/gluster-meeting.2015-07-01-12.05.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-07-
On Jul 1, 2015 18:42, "Raghavendra Talur"
wrote:
>
>
>
> On Wed, Jul 1, 2015 at 3:18 PM, Joseph Fernandes
wrote:
>>
>> Hi All,
>>
>> TEST 4-5 are failing i.e the following
>>
>> TEST $CLI volume start $V0
>> TEST $CLI volume attach-tier $V0 replica 2
$H0:$B0/${V0}$CACHE_BRICK_FIRST $H0:$B0/${V0}$
On Wed, Jul 1, 2015 at 3:18 PM, Joseph Fernandes
wrote:
> Hi All,
>
> TEST 4-5 are failing i.e the following
>
> TEST $CLI volume start $V0
> TEST $CLI volume attach-tier $V0 replica 2 $H0:$B0/${V0}$CACHE_BRICK_FIRST
> $H0:$B0/${V0}$CACHE_BRICK_LAST
>
> Glusterd Logs say:
> [2015-07-01 07:33:25.0
Hi All,
TEST 4-5 are failing, i.e. the following:
TEST $CLI volume start $V0
TEST $CLI volume attach-tier $V0 replica 2 $H0:$B0/${V0}$CACHE_BRICK_FIRST
$H0:$B0/${V0}$CACHE_BRICK_LAST
Glusterd Logs say:
[2015-07-01 07:33:25.053412] I [rpc-clnt.c:965:rpc_clnt_connection_init]
0-management: setting
On Wed, Jul 1, 2015 at 9:39 AM, Atin Mukherjee wrote:
>
>
> On 05/06/2015 12:31 PM, Humble Devassy Chirammal wrote:
> > Hi All,
> >
> >
> > Docker images of GlusterFS 3.6 for Fedora (21) and CentOS (7) are now
> > available at docker hub ( https://registry.hub.docker.com/u/gluster/ ).
> > These
On 07/01/2015 03:03 PM, Deepak Shetty wrote:
> On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi wrote:
>
>>
>>> Yeah this followed by glusterd restart should help
>>>
>>> But frankly, i was hoping that 'rm' the file isn't a neat way to fix this
>>> issue
>>
>> Why is rm not a neat way?
On Wed, Jul 1, 2015 at 11:32 AM, Krishnan Parthasarathi wrote:
>
> > Yeah this followed by glusterd restart should help
> >
> > But frankly, i was hoping that 'rm' the file isn't a neat way to fix this
> > issue
>
> Why is rm not a neat way? Is it because the container deployment tool
> needs to
Thanks Joseph!
Pranith
On 07/01/2015 01:59 PM, Joseph Fernandes wrote:
Yep will have a look
- Original Message -
From: "Pranith Kumar Karampuri"
To: "Joseph Fernandes" , "Gluster Devel"
Sent: Wednesday, July 1, 2015 1:44:44 PM
Subject: spurious failures
tests/bugs/tier/bug-1205545-
Yep will have a look
- Original Message -
From: "Pranith Kumar Karampuri"
To: "Joseph Fernandes" , "Gluster Devel"
Sent: Wednesday, July 1, 2015 1:44:44 PM
Subject: spurious failures
tests/bugs/tier/bug-1205545-CTR-and-trash-integration.t
hi,
http://build.gluster.org/job/rackspace-re
hi,
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11757/consoleFull
has the logs. Could you please look into it?
Pranith
> Yeah this followed by glusterd restart should help
>
> But frankly, i was hoping that 'rm' the file isn't a neat way to fix this
> issue
>> Why is rm not a neat way? Is it because the container deployment tool needs to
>> know about gluster internals? But isn't a Dockerfile dealing with details
>>
On 07/01/2015 11:51 AM, Krishnan Parthasarathi wrote:
We do have a way to tackle this situation from the code. Raghavendra
Talur will be sending a patch shortly.
We should fix it by undoing what the daemon refactoring did, which broke the lazy
creation of the uuid for a node. Fixing it elsewhere is jus