On 02/05/2015 11:32 PM, Peter Auyeung wrote:
Hi Soumya
root@glusterprod001:~# gluster volume info | grep nfs.acl
02/05/15 10:00:05 [ /root ]
Seems like we do not have ACL enabled.
nfs client is a RHEL4 standard NFS client
Oh, by default ACLs are enabled. It seems to be shown in 'gluster volu
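Worth noting here, as a hedged aside: 'gluster volume info' only lists
options that have been explicitly reconfigured, so an empty grep does not
mean ACLs are off, since nfs.acl defaults to on. A minimal sketch, with
'gv0' as a placeholder volume name:

    # shows nfs.acl only if it has been explicitly set on this volume
    gluster volume info gv0 | grep nfs.acl
    # set it explicitly (it will then show up under "Options Reconfigured")
    gluster volume set gv0 nfs.acl on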
We decided to append a time-stamp to the snapname when creating a snapshot
by default. Users can override this with the "no-timestamp" flag, in which
case the snapshot will be created without the time-stamp appended. So the
snapshot create syntax would be:
gluster snapshot create <snapname> <volname> [no-timestamp] [description <description>] [force]
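For example (names below are placeholders; the default appends a GMT
time-stamp along the lines of snap1_GMT-2015.02.06-10.00.00):

    # default: time-stamp appended to the snap name
    gluster snapshot create snap1 gv0
    # with no-timestamp: snapshot is named plain "snap1"
    gluster snapshot create snap1 gv0 no-timestamp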
This used to happen because of a DHT issue; +Raghavendra to check if he
knows something about this.
Pranith
On 02/06/2015 10:06 AM, David F. Robinson wrote:
Not repeatable. Once it shows up, it stays there. I sent some other
strange behavior I am seeing to Pranith earlier this evening. Attached
below...
David
Another issue I am having that might be related is that I cannot delete
some directories. It complains that the directories are not empty.
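If this is the DHT issue Pranith suspects, one hedged way to check is to
look at the directory on the bricks themselves; stale DHT link files show
up as zero-byte entries with sticky-bit-only permissions (---------T) and a
linkto xattr. A sketch with placeholder brick paths:

    # list the directory contents directly on a brick
    ls -la /data/brick1/gv0/path/to/dir
    # a leftover link file carries this xattr
    getfattr -n trusted.glusterfs.dht.linkto /data/brick1/gv0/path/to/dir/somefile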
copy that. Thanks for looking into the issue.
David
-- Original Message --
From: "Benjamin Turner"
To: "David F. Robinson"
Cc: "Ben Turner" ; "Pranith Kumar Karampuri"
; "Xavier Hernandez" ;
"gluster-us...@gluster.org" ; "Gluster Devel"
Sent: 2/5/2015 9:05:43 PM
Subject: Re: [Gl
Correct! I have seen (back in the day; it's been about three years since I
last saw it) having, say, 50+ volumes each with a geo-rep session take system
load levels to the point where pings couldn't be serviced within the ping
timeout. So it is known to happen, but there has been a lot of work in the
geo rep
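If load-induced ping timeouts are the suspicion, the timeout itself is
tunable per volume; a hedged sketch ('gv0' is a placeholder volume name,
and 42 seconds is the usual default):

    gluster volume set gv0 network.ping-timeout 60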
- Original Message -
> From: "Ben Turner"
> To: "Pranith Kumar Karampuri" , "David F. Robinson"
>
> Cc: "Xavier Hernandez" , "Benjamin Turner"
> , gluster-us...@gluster.org,
> "Gluster Devel"
> Sent: Friday, February 6, 2015 3:25:28 AM
> Subject: Re: [Gluster-users] [Gluster-devel] m
- Original Message -
> From: "Niels de Vos"
> To: "Pranith Kumar Karampuri"
> Cc: gluster-us...@gluster.org, "Gluster Devel"
> Sent: Friday, February 6, 2015 2:32:36 AM
> Subject: Re: [Gluster-devel] failed heal
>
> On Thu, Feb 05, 2015 at 11:21:58AM +0530, Pranith Kumar Karampuri wrote:
Rafi,
Sorry it took me some time - I had to merge these with some of my
changes - the scif0 (iWARP) does not support SRQ (max_srq : 0), so I have
changed some of the code to use QP instead - I can provide those changes if
there is interest once this is stable.
Here's the good -
The performance with the patc
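As a hedged aside for anyone wanting to check SRQ support on their own
adapter before trying this: the verbs device limits can be dumped with
ibv_devinfo, and a max_srq of 0 (as on scif0 here) means no SRQ support:

    # verbose output includes max_srq, max_srq_wr, max_srq_sge
    ibv_devinfo -v | grep -i srq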
Isn't rsync what geo-rep uses?
David (Sent from mobile)
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com
> On Feb 5
Out of curiosity, are you using --inplace?
On 02/05/2015 02:59 PM, David F. Robinson wrote:
Should I run my rsync with --block-size set to something other than the
default? Is there an optimal value? I think 128k is the max from my quick
search. Didn't dig into it thoroughly though.
David (Sent fr
Should I run my rsync with --block-size set to something other than the
default? Is there an optimal value? I think 128k is the max from my quick
search. Didn't dig into it thoroughly though.
David (Sent from mobile)
===
David F. Robinson, Ph.D.
President - Corvid Techn
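For reference, both knobs go on the rsync invocation itself; a hedged
example (paths and host are placeholders, and 131072 is just the 128k cap
David mentions, not a verified optimum):

    rsync -av --inplace --block-size=131072 /data/ remotehost:/backup/data/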
It was a mix of files from very small to very large, and many terabytes of
data - approximately 20 TB.
David (Sent from mobile)
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin
I'll send you the emails I sent Pranith with the logs. What causes these
disconnects?
David (Sent from mobile)
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@cor
On Thu, Feb 05, 2015 at 11:21:58AM +0530, Pranith Kumar Karampuri wrote:
>
> On 02/04/2015 11:52 PM, David F. Robinson wrote:
> >I don't recall if that was before or after my upgrade.
> >I'll forward you an email thread for the current heal issues which are
> >after the 3.6.2 upgrade...
> This is
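For heal issues like these, the usual first look is heal-info on the
affected volume; a minimal sketch with 'gv0' as a placeholder:

    gluster volume heal gv0 info
    gluster volume heal gv0 info split-brain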
This is *tomorrow* at 12:00 UTC (approximately 15.5 hours from now) in
#gluster-meeting on Freenode. See you all there!
- Original Message -
> Perhaps it's not obvious to the broader community, but a bunch of people
> have put a bunch of work into various projects under the "4.0" banner.
Hi Soumya
root@glusterprod001:~# gluster volume info | grep nfs.acl
02/05/15 10:00:05 [ /root ]
Seems like we do not have ACL enabled.
nfs client is a RHEL4 standard NFS client
Thanks
-Peter
From: Soumya Koduri [skod...@redhat.com]
Sent: Wednesday, Febru
On 02/05/2015 03:48 PM, Pranith Kumar Karampuri wrote:
I believe David already fixed this. I hope this is the same permissions
issue he told us about.
Oops, it is not. I will take a look.
Pranith
Pranith
On 02/05/2015 03:44 PM, Xavier Hernandez wrote:
Is the failure repeatable? With the
I believe David already fixed this. I hope this is the same permissions
issue he told us about.
Pranith
On 02/05/2015 03:44 PM, Xavier Hernandez wrote:
Is the failure repeatable? With the same directories?
It's very weird that the directories appear on the volume when you do
an 'ls' on t
Is the failure repeatable? With the same directories?
It's very weird that the directories appear on the volume when you do an
'ls' on the bricks. Could it be that you only made a single 'ls' on the fuse
mount which did not show the directory? Is it possible that this 'ls'
triggered a self-heal t
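One hedged way to test that theory: force a named lookup from the fuse
mount (which can trigger self-heal) and compare the listings against a
brick. Paths below are placeholders:

    # named lookup from the mount; may kick off self-heal
    stat /mnt/gv0/path/to/dir
    # compare mount view vs. brick view
    ls /mnt/gv0/path/to/dir | sort > /tmp/mount.txt
    ls /data/brick1/gv0/path/to/dir | sort > /tmp/brick.txt
    diff /tmp/mount.txt /tmp/brick.txt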