Hi Alastair,
Can you please provide the snap daemon logs? They are at
/var/log/glusterfs/snaps/snapd.log.
Provide the snapd logs from the node you mounted the volume from
(i.e. the node whose IP address/hostname you gave while mounting the
volume).
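As a quick way to pull out just the error-level entries being asked for, a helper along these lines could be used (a minimal sketch; `snapd_errors` is a hypothetical name, and the default path is the one quoted above):

```shell
# Default to the snapd log path mentioned above; override with SNAPD_LOG.
snapd_log="${SNAPD_LOG:-/var/log/glusterfs/snaps/snapd.log}"

# Print only error-level ("E") entries from a GlusterFS-style log file.
# Log lines look like: [2016-04-22 21:08:28.005854] E [client.c:123:fn] ...
snapd_errors() {
    grep '] E \[' "$1"
}

# Usage: snapd_errors "$snapd_log"
```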
Regards,
Raghavendra
I just upgraded my cluster from 3.7.10 to 3.7.11, and access to the .snaps
directories now fails with
bash: cd: .snaps: Transport endpoint is not connected
In the volume log file on the client I see:
[2016-04-22 21:08:28.005854] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 2-homes-snapd-client:
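Since the "Transport endpoint is not connected" error points at the snapshot daemon client, one thing worth checking after the upgrade (a sketch; "homes" is taken from the log prefix above, and output details vary by release) is whether snapd is running and User Serviceable Snapshots is still enabled:

```shell
# Is the Snapshot Daemon listed and online for this volume?
gluster volume status homes

# Is features.uss (which snapd serves) still enabled?
gluster volume get homes features.uss
```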
-Atin
Sent from one plus one
On 22-Apr-2016 8:04 pm, "Dj Merrill" wrote:
On 04/20/2016 07:32 PM, Atin Mukherjee wrote:
> Unfortunately there is no such document, but I can take you through a
> couple of code files [1] [2], where the first one defines all the volume
> tunables and their respective supported op-versions and the latter has
> the exact number of all those
Scanned files are 1112 only on the node where the rebalance command was
run; all other fields are 0 for every node.
If the issue is happening because of the temp file names, we will make
sure not to use temp files while using Gluster.
On Fri, Apr 22, 2016 at 9:43 AM, Xavier Hernandez wrote:
Some time ago I saw an issue with Gluster-NFS combined with disperse
under high write load. I thought that it was already solved, but this
issue is very similar.
The problem seemed to be related to multithreaded epoll and throttling.
For some reason NFS was sending a massive amount of
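If the multithreaded-epoll throttling hypothesis applies here, the knobs involved are the event-thread counts (shown as a sketch; the values are illustrative, so check the defaults for your release before changing anything):

```shell
# Number of epoll worker threads on the client and server side
# (both options exist in the 3.7 series; <volname> is a placeholder).
gluster volume set <volname> client.event-threads 4
gluster volume set <volname> server.event-threads 4
```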
Even the number of scanned files is 0?
This seems to be an issue with DHT. I'm not an expert in this area, so
I'm not sure whether the regular expression pattern that some files
still match could interfere with rebalance.
Anyway, if you have found a solution for your use case, that's fine with me.
Not only the skipped column but all columns are 0 in the rebalance
status output. It seems rebalance does not do anything; all '-T'
files are still there. Anyway, we wrote our custom MapReduce tool and it
is copying files to Gluster right now, utilizing all 60 nodes as
expected. I will delete
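The '-T' entries mentioned here are DHT link files, which on the brick filesystem typically show up as zero-byte files with only the sticky bit set, so a quick way to count what is left behind (a sketch, assuming that layout; `count_linkfiles` is a hypothetical helper) is:

```shell
# Count leftover DHT link files under a brick path:
# zero-size regular files with the sticky bit set (mode ---------T).
count_linkfiles() {
    find "$1" -type f -perm -1000 -size 0 | wc -l
}
```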
Hi Chen,
I thought I had replied to your previous mail.
Other users have faced this issue as well; Serkan is one, if you follow
his mail on gluster-users.
I still have to dig further into it. Soon we will try to reproduce and
debug it.
My observation is that we face this issue while
When you execute a rebalance 'force', the skipped column should be 0 for
all nodes and all '-T' files must have disappeared; otherwise
something failed. Is this true in your case?
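To make that check mechanical, one could sum a given column of the `gluster volume rebalance <vol> status` table (a sketch; `sum_column` is a hypothetical helper, and the column index of "skipped" varies between releases, so it is passed in explicitly):

```shell
# Sum one whitespace-separated column of the status table on stdin,
# skipping the two header lines.
sum_column() {
    awk -v col="$1" 'NR > 2 { total += $col } END { print total + 0 }'
}

# e.g.: gluster volume rebalance <vol> status | sum_column 6
```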
On 21/04/16 15:19, Serkan Çoban wrote:
Same result. I also checked the rebalance.log file; it has no