I am running into two problems (possibly related?).
1) Every once in a while, when I do a 'rm -rf DIRNAME', it comes back
with an error:
rm: cannot remove `DIRNAME': Directory not empty
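When this happens, one way to see what is actually blocking the delete
is to list the directory straight on each brick instead of through the
mount; stale files left behind on a brick keep the directory "non-empty"
even when the mount shows nothing in it. A minimal sketch, assuming
brick paths like the ones mentioned later in this thread:

# run on each brick server; substitute your own brick paths
ls -la /data/brick02a/homegfs/DIRNAME
ls -la /data/brick02b/homegfs/DIRNAME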
If I try the 'rm -rf' again after the error, it deletes the
directory. The issue is that
-group limitation for all fuse clients.
David (Sent from mobile)
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com
I am seeing the following on one of my FUSE clients (indy.rst and
indy.rst.old show '??? ???' in the directory listing).
Has anyone seen this before? Any idea what causes this for a given
client?
If I try to access the file, I get a stale file handle:
# cp indy.rst dfr.rst
cp: cannot stat `indy.rst': Stale file handle
: remote
operation failed: No such file or directory. Path:
gfid:df69a1ee-cc85-47a9-b8ca-a32db565c340
(df69a1ee-cc85-47a9-b8ca-a32db565c340)
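As an aside, a gfid from a log line like that can be traced to its
backing object on a brick: each file is linked under .glusterfs by the
first two pairs of hex digits of its gfid. A sketch, with the brick
path assumed:

# locate the gfid's backing object on one brick
ls -l /data/brick02a/homegfs/.glusterfs/df/69/df69a1ee-cc85-47a9-b8ca-a32db565c340
# for a regular file this is a hard link, so the real name can be found with:
find /data/brick02a/homegfs -samefile \
    /data/brick02a/homegfs/.glusterfs/df/69/df69a1ee-cc85-47a9-b8ca-a32db565c340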
--
From: Shyam srang...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com; Gluster Devel
gluster-de...@gluster.org; gluster-users@gluster.org
gluster-users@gluster.org; Susant Palai spa...@redhat.com
Sent: 2/9/2015 11:11:20 AM
Subject: Re: [Gluster-devel] cannot delete non-empty directory
=32.5.jpeg
trusted.gfid=d+0ÇxþM¯GxÑ@Â
trusted.glusterfs.dht.linkto=homegfs_bkp-client-1
-- Original Message --
From: Shyam srang...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com; Gluster Devel
gluster-de...@gluster.org; gluster-users@gluster.org
gluster-users
you or will you have some kind of tool to
go through and clean up all of these stale links? Or, would you just
leave them as they are?
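For auditing these by hand: DHT linkto files are normally zero-byte
files whose mode is just the sticky bit (---------T) plus the
trusted.glusterfs.dht.linkto xattr, so a read-only sweep over a brick
can enumerate them. A hedged sketch, not an official cleanup tool:

# list linkto candidates on one brick (zero size, mode 1000)
find /data/brick02a/homegfs -type f -perm 1000 -size 0 \
    -exec getfattr -n trusted.glusterfs.dht.linkto --absolute-names {} \;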
David
-- Original Message --
From: Raghavendra Gowdappa rgowd...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: Shyam srang
: on
changelog.rollover-time: 15
changelog.fsync-interval: 3
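For reference, these are ordinary volume options, so they can be
inspected and changed with the usual CLI; a short sketch, assuming the
homegfs volume from this thread:

# show reconfigured options for the volume
gluster volume info homegfs
# adjust the changelog intervals (values in seconds)
gluster volume set homegfs changelog.rollover-time 15
gluster volume set homegfs changelog.fsync-interval 3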
I don't think I understood what you sent enough to give it a try. I'll
wait until it comes out in a beta or release version.
David
-- Original Message --
From: Ben Turner btur...@redhat.com
To: Justin Clift jus...@gluster.org; David F. Robinson
david.robin...@corvidtec.com
Cc
..
-rwxrw 2 streadway sbir 42440 Jun 19 2014 ARMOR PACKAGES.one
-rwxrw 2 streadway sbir 38184 Jun 19 2014 CURRENT STANDARD
ARMORING.one
-- Original Message --
From: Xavier Hernandez xhernan...@datalab.es
To: David F. Robinson david.robin...@corvidtec.com; Benjamin
Turner bennytu
It was a mix of files from very small to very large, and many terabytes of
data overall, approximately 20 TB.
David (Sent from mobile)
Should I run my rsync with --block-size set to something other than the
default? Is there an optimal value? I think 128k is the max from my quick
search; I didn't dig into it thoroughly, though.
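For experimenting, the flag itself takes a byte count; a hedged example
with an illustrative value only (no claim that this is optimal):

# try a fixed delta-transfer block size of 128 KiB
rsync -av --block-size=131072 /source/dir/ /backup/dir/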
David (Sent from mobile)
I'll send you the emails I sent Pranith with the logs. What causes these
disconnects?
David (Sent from mobile)
Isn't rsync what geo-rep uses?
David (Sent from mobile)
Copy that. Thanks for looking into the issue.
David
-- Original Message --
From: Benjamin Turner bennytu...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: Ben Turner btur...@redhat.com; Pranith Kumar Karampuri
pkara...@redhat.com; Xavier Hernandez xhernan
: on
changelog.changelog: on
changelog.fsync-interval: 3
changelog.rollover-time: 15
server.manage-gids: on
-- Original Message --
From: Xavier Hernandez xhernan...@datalab.es
To: David F. Robinson david.robin...@corvidtec.com; Benjamin
Turner bennytu...@gmail.com
Cc: gluster-users@gluster.org
I don't recall if that was before or after my upgrade.
I'll forward you an email thread for the current heal issues which are
after the 3.6.2 upgrade...
David
-- Original Message --
From: Pranith Kumar Karampuri pkara...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com
-029_proposal_draft_rev1.docx* CB_work/ gun_work/ Refs/
David
. The
preferred option would be to simply use sssd on my storage systems, but
it doesn't seem to play well with gluster.
David
-- Original Message --
From: David F. Robinson david.robin...@corvidtec.com
To: Gluster Devel gluster-de...@gluster.org;
gluster-users@gluster.org gluster
. The files/directories were not shown until I did the ls on
the bricks.
David
wks_backup/homer_backup/logs: trusted.ec.heal: Operation not supported
wks_backup/homer_backup: trusted.ec.heal: Operation not supported
-- Original Message --
From: Benjamin Turner bennytu...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: Gluster Devel gluster-de
connection from
gfs01a.corvidtec.com-1369-2015/02/04-00:16:53:613570-homegfs-client-2-0-0
-- Original Message --
From: Benjamin Turner bennytu...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: Gluster Devel gluster-de...@gluster.org;
gluster-users@gluster.org gluster-users
I upgraded one of my bricks from 3.6.1 to 3.6.2 and I can no longer do a
'gluster volume heal homegfs info'. It hangs and never returns any
information.
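For checking heal state from the CLI (a sketch, assuming the 3.6-era
commands):

# per-file listing of entries still needing heal (the command that hangs here)
gluster volume heal homegfs info
# summary counters, sometimes lighter-weight than the full listing
gluster volume heal homegfs statistics heal-count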
I was trying to ensure that gfs01a had finished healing before upgrading
the other machines (gfs01b, gfs02a, gfs02b) in my configuration (see
/brick02a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
failed
(Invalid argument)
-- Original Message --
From: Kaushal M kshlms...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: Joe Julian j...@julianfamily.org; Gluster Users
gluster-users@gluster.org; Gluster Devel gluster-de...@gluster.org
Sent: 1/27/2015 1:49:56 AM
Subject
-a'. Worked perfectly.
And the errors that were showing up in the logs every 3 seconds stopped.
Thanks for your help. Greatly appreciated.
David
-- Original Message --
From: Xavier Hernandez xhernan...@datalab.es
To: David F. Robinson david.robin...@corvidtec.com; Kaushal M
this is working properly.
David
-- Original Message --
From: Xavier Hernandez xhernan...@datalab.es
To: David F. Robinson david.robin...@corvidtec.com; Kaushal M
kshlms...@gmail.com
Cc: Gluster Users gluster-users@gluster.org; Gluster Devel
gluster-de...@gluster.org
Sent: 1/27/2015 10:02:31 AM
Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2
Hi,
I had a similar problem once
/glusterd restart
(sleep 20; mount -a; mount /backup_nfs/homegfs)
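Spelled out a little more, the workaround is roughly this (a sketch for
an EL6-style init system, with the mount point from this thread):

# restart the management daemon, then mount once it has settled
service glusterd restart
(sleep 20; mount -a; mount /backup_nfs/homegfs) &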
-- Original Message --
From: Xavier Hernandez xhernan...@datalab.es
To: David F. Robinson david.robin...@corvidtec.com; Kaushal M
kshlms...@gmail.com
Cc: Gluster Users gluster-users@gluster.org; Gluster Devel
gluster-de
/brick02bkp/homegfs_bkp on port 49155
-- Original Message --
From: David F. Robinson david.robin...@corvidtec.com
To: gluster-users@gluster.org gluster-users@gluster.org; Gluster
Devel gluster-de...@gluster.org
Sent: 1/26/2015 9:50:09 AM
Subject: v3.6.2
I have a server with v3.6.2 from
.
SELINUXTYPE=targeted
-- Original Message --
From: Justin Clift jus...@gluster.org
To: David F. Robinson david.robin...@corvidtec.com
Cc: Gluster Users gluster-users@gluster.org; Gluster Devel
gluster-de...@gluster.org
Sent: 1/26/2015 11:11:15 AM
Subject: Re: [Gluster-devel] v3.6.2
-: received signum (0), shutting down
-- Original Message --
From: Anatoly Pugachev mator...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: gluster-users@gluster.org gluster-users@gluster.org; Gluster
Devel gluster-de...@gluster.org
Sent: 1/26/2015 2:48:08 PM
Subject
[root@gfs01bkp bricks]# ps -ef | grep rpcbind
rpc       2306     1  0 11:32 ?        00:00:00 rpcbind
root      5265  4638  0 11:55 pts/0    00:00:00 grep rpcbind
-- Original Message --
From: Joe Julian j...@julianfamily.org
To: David F. Robinson david.robin...@corvidtec.com;
gluster
...@vcplinux.com.br
To: David F. Robinson david.robin...@corvidtec.com;
gluster-users@gluster.org
Sent: 1/26/2015 12:20:16 PM
Subject: Re: [Gluster-users] v3.6.2
Suggestion:
On my CentOS 7 with GlusterFS 3.6.1 (and 3.6.2), NFS
works normally.
Run the rpcinfo -p command and see if the result is the same.
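On a working server, Gluster's built-in NFS server registers itself
with rpcbind, so the quick check looks something like this (hedged; the
exact program list varies by setup):

# confirm nfs/mountd/nlockmgr registrations exist
rpcinfo -p | egrep 'nfs|mountd|nlockmgr'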
in our datacenter. When I powered it back up, NFS through
gluster would no longer start.
David
-- Original Message --
From: Kaushal M kshlms...@gmail.com
To: David F. Robinson david.robin...@corvidtec.com
Cc: Atin Mukherjee amukh...@redhat.com; Pranith Kumar Karampuri
pkara...@redhat.com
...@redhat.com
To: Pranith Kumar Karampuri pkara...@redhat.com; Justin Clift
jus...@gluster.org; David F. Robinson david.robin...@corvidtec.com
Cc: Gluster Users gluster-users@gluster.org; Gluster Devel
gluster-de...@gluster.org
Sent: 1/26/2015 11:51:13 PM
Subject: Re: [Gluster-devel] v3.6.2
On 01/27
nfs.log attached. Where is glusterd.log?
David
-- Original Message --
From: Atin Mukherjee amukh...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com; Pranith Kumar
Karampuri pkara...@redhat.com; Justin Clift jus...@gluster.org
Cc: Gluster Users gluster-users@gluster.org
-- Original Message --
From: Joe Julian j...@julianfamily.org
To: Kaushal M kshlms...@gmail.com; David F. Robinson
david.robin...@corvidtec.com
Cc: Gluster Users gluster-users@gluster.org; Gluster Devel
gluster-de...@gluster.org
Sent: 1/27/2015 12:48:49 AM
Subject: Re: [Gluster-devel
When I installed the 3.5.3beta on my HPC cluster, I get the following
warnings during the mounts:
WARNING: getfattr not found, certain checks will be skipped..
I do not have attr installed on my compute nodes. Is this something
that I need in order for gluster to work properly, or can this warning be
safely ignored?
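For anyone hitting the same warning: getfattr ships in the attr package
on EL-family distributions, so installing it is one line (assuming
yum-based compute nodes):

# install the attr utilities the mount script checks for
yum install -y attr
getfattr --version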
You are correct... Typo on my part. It happened when I installed
3.6.0-beta3.
I'll file the bug report so that the fuse installation depends on attr
being installed... Thanks...
David
-- Original Message --
From: Niels de Vos nde...@redhat.com
To: David F. Robinson david.robin
Is this bug-fix going to be in the 3.5.3 beta release?
David
-- Original Message --
From: Niels de Vos nde...@redhat.com
To: M S Vishwanath Bhat vb...@redhat.com
Cc: David F. Robinson david.robin...@corvidtec.com;
gluster-users@gluster.org
Sent: 8/15/2014 6:25:04 AM
Subject: Re
One other question... Is there a way to set a config variable to turn
off the compression for the rsync?
David
-- Original Message --
From: M S Vishwanath Bhat vb...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com;
gluster-users@gluster.org
Sent: 8/13/2014 6:47:11 AM
I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my data.
What I wanted to do was to turn off the geo-replication (gluster volume
geo-replication homegfs gfsib01bkp.corvidtec.com::homegfs_bkp stop)
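For completeness, the status/stop/start forms, with the volume and
slave names used in this thread:

# check, stop, and later restart the session
gluster volume geo-replication homegfs gfsib01bkp.corvidtec.com::homegfs_bkp status
gluster volume geo-replication homegfs gfsib01bkp.corvidtec.com::homegfs_bkp stop
gluster volume geo-replication homegfs gfsib01bkp.corvidtec.com::homegfs_bkp start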
When I do an rsync to back up my workstations onto a gluster-mounted file
system, I end up with thousands of healing problems. The heal status
repeatedly shows the same number of healed/failed entries during a
'gluster volume heal homegfs statistics' check. There are over 9,000 files
healed and
.x86_64
glusterfs-cli-3.5.0-2.el6.x86_64
glusterfs-rdma-3.5.0-2.el6.x86_64
-- Original Message --
From: David F. Robinson david.robin...@corvidtec.com
To: gluster-users@gluster.org
Sent: 5/19/2014 10:58:57 AM
Subject: rsync + stale file handle
When I do an rsync to backup my