----- Original Message -----
> On 22/05/2014, at 1:34 PM, Kaushal M wrote:
> > Thanks Justin, I found the problem. The VM can be deleted now.
>
> Done. :)
>
>
> > Turns out, there was more than enough time for the rebalance to complete.
> > But we hit a race, which caused a command to fail.
> >
----- Original Message -----
> From: "Kaushal M"
> To: "Justin Clift" , "Gluster Devel"
>
> Sent: Thursday, May 22, 2014 6:04:29 PM
> Subject: Re: [Gluster-devel] bug-857330/normal.t failure
>
> Thanks Justin, I found the problem. The VM can be deleted now.
>
> Turns out, there was more than enough time for the rebalance to complete.
> But we hit a race, which caused a command to fail.
http://review.gluster.com/#/c/7823/ - the fix is here
On Thu, May 22, 2014 at 1:41 PM, Harshavardhana wrote:
> Here are the important locations in the XFS tree coming from 2.6.32 branch
>
> STATIC int
> xfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
> {
>         struct xfs_inode
Here are the important locations in the XFS tree coming from 2.6.32 branch:

STATIC int
xfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
{
        struct xfs_inode *ip = XFS_I(inode);
        unsigned char *ea_name;
        int error;

        if (S_ISLNK(inode->i_mode))
...
On 05/22/2014 02:10 AM, Alex Pyrgiotis wrote:
> On 02/17/2014 06:22 PM, Vijay Bellur wrote:
>> On 02/17/2014 05:11 PM, Alex Pyrgiotis wrote:
>>> On 02/10/2014 07:06 PM, Vijay Bellur wrote:
>>>> On 02/05/2014 04:10 PM, Alex Pyrgiotis wrote:
>>>>> Hi all,
>>>>> Just wondering, do we have any news on that?
>>>> Hi Alex,
>>>> I
On Wed, May 21, 2014 at 06:40:57PM +0200, Niels de Vos wrote:
> A lot of work has been done on getting blockers resolved for the next
> 3.5 release. We're not there yet, but we're definitely getting close to
> releasing a 1st beta.
>
> Humble will follow up with an email related to the documentation
On 22/05/2014, at 1:34 PM, Kaushal M wrote:
> Thanks Justin, I found the problem. The VM can be deleted now.
Done. :)
> Turns out, there was more than enough time for the rebalance to complete. But
> we hit a race, which caused a command to fail.
>
> The particular test that failed is waiting for rebalance to finish.
Thanks Justin, I found the problem. The VM can be deleted now.
Turns out, there was more than enough time for the rebalance to complete.
But we hit a race, which caused a command to fail.
The particular test that failed is waiting for rebalance to finish. It does
this by running a 'gluster volume rebalance status' command.
[Adding the right alias for gluster-devel this time around]
On 05/22/2014 05:29 PM, Vijay Bellur wrote:
Hi All,
Given the addition of new sub-maintainers & release maintainers to the
community [1], I have felt the need to publish a set of guidelines for
all categories of maintainers to have a n
I haven't yet. But I will.
Justin,
Can I take a peek inside the VM?
~kaushal
On Thu, May 22, 2014 at 4:53 PM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
> Kaushal,
> Rebalance status command seems to be failing sometimes. I sent a mail
> about such a spurious failure earlier to
Kaushal,
Rebalance status command seems to be failing sometimes. I sent a mail about
such a spurious failure earlier today. Did you get a chance to look at the logs
and confirm that rebalance didn't fail, and that it is indeed a timeout?
Pranith
----- Original Message -----
> From: "Kaushal M"
> To:
The test is waiting for rebalance to finish. This is a rebalance with some
actual data, so it could have taken a long time to finish. I did set a
pretty high timeout, but it seems like it's not enough for the new VMs.
Possible options are:
- Increase this timeout further
- Reduce the amount of data
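[Editorial note: the wait described above is essentially a poll-with-timeout loop. Below is a minimal sketch of that pattern in shell; the function name, marker file, and timings are illustrative stand-ins, not the real test framework's helpers.]

```shell
#!/bin/sh
# Poll-with-timeout sketch: run a check every second until it succeeds
# or the deadline passes. Names here are illustrative, not the real
# test framework's code.

wait_for() {
    # $1 = timeout in seconds; remaining args = check command
    timeout=$1; shift
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if "$@"; then
            return 0    # check succeeded within the deadline
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1            # deadline passed: report a timeout
}

# Example: simulate a "rebalance" that completes after 2 seconds.
marker="/tmp/rebalance_done.$$"
rm -f "$marker"
( sleep 2; touch "$marker" ) &
if wait_for 10 test -e "$marker"; then
    echo "rebalance completed"
else
    echo "timed out"
fi
rm -f "$marker"
```

Raising the timeout (the first argument) trades test runtime for robustness on slow VMs, which is exactly the trade-off discussed above.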
Kaushal, who has more context about these, is CCed. Keep the setup until he
responds so that he can take a look.
Pranith
----- Original Message -----
> From: "Justin Clift"
> To: "Pranith Kumar Karampuri"
> Cc: "Gluster Devel"
> Sent: Thursday, May 22, 2014 3:54:46 PM
> Subject: bug-857330/normal.t failure
Hi Pranith,
Ran a few VMs with your Gerrit CR 7835 applied, and in "DEBUG"
mode (I think).
One of the VMs had a failure in bug-857330/normal.t:
Test Summary Report
-------------------
./tests/basic/rpm.t (Wstat: 0 Tests: 0 Failed: 0)
Parse errors: Bad plan
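[Editorial note: for context, a TAP "Bad plan" parse error means the harness saw a plan line it cannot reconcile with the test results actually emitted, for instance when a script dies right after printing its plan. A constructed illustration follows; it is not the actual rpm.t output.]

```shell
#!/bin/sh
# A TAP stream whose plan promises more tests than are emitted.
# Harnesses such as Perl's prove flag this as a "Bad plan" parse error.
# (Constructed example, not the real rpm.t output.)

emit_bad_tap() {
    printf '1..3\n'   # plan: 3 tests
    printf 'ok 1\n'   # ...but only one result before the script dies
}

emit_bad_tap
```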
I have posted a patch that fixes this issue:
http://review.gluster.org/#/c/7842/
Thanks,
Vijay
On Thursday 22 May 2014 11:35 AM, Vijay Bellur wrote:
On 05/21/2014 08:50 PM, Vijaikumar M wrote:
KP, Atin, and I did some debugging and found that there was a
deadlock in glusterd.
When creati
/glusterd-backend%N.log maybe?
On 22/05/2014, at 8:03 AM, Kaushal M wrote:
> The glusterds spawned using cluster.rc store their logs at
> /d/backends//glusterd.log . But the cleanup() function cleans
> /d/backends/, so those logs are lost before we can archive.
>
> cluster.rc should be fixed to
The glusterds spawned using cluster.rc store their logs at
/d/backends//glusterd.log . But the cleanup() function cleans
/d/backends/, so those logs are lost before we can archive.
cluster.rc should be fixed to use a better location for the logs.
~kaushal
On Thu, May 22, 2014 at 11:45 AM, Kaush
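[Editorial note: the log-loss problem Kaushal describes can be demonstrated with a toy layout. Paths below are stand-ins, not the actual cluster.rc layout: logs written under the directory that cleanup() wipes disappear, while logs written beside it survive.]

```shell
#!/bin/sh
# Toy demonstration of the cleanup/log-loss problem described above.
# Paths are stand-ins, not the actual cluster.rc layout.

base=$(mktemp -d)

# Logs stored inside the backend tree that cleanup() removes:
mkdir -p "$base/backends/1"
echo "glusterd started" > "$base/backends/1/glusterd.log"

# A separate log directory outside the cleaned tree:
mkdir -p "$base/logs"
echo "glusterd started" > "$base/logs/glusterd-1.log"

# cleanup() wipes the backend directory wholesale:
rm -rf "$base/backends"

[ -e "$base/backends/1/glusterd.log" ] || echo "backend log lost"
[ -e "$base/logs/glusterd-1.log" ] && echo "separate log survives"

rm -rf "$base"
```

Moving the log path out of the cleaned tree, as suggested above, is enough to keep the logs available for archiving after a test run.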