Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-02 Thread Ravishankar N

Mabi,

If bug 1637953 is what you are experiencing, then you need to follow the 
workarounds mentioned in 
https://lists.gluster.org/pipermail/gluster-users/2018-October/035178.html. 
Can you see if this works?
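
Independently of that, you can check whether the self-heal daemon is actually
online on each node (using the volume name from your logs):

    gluster volume status myvol-private

The output has a "Self-heal Daemon" row per node with its online status. Since
your logs also show INODELK calls bailing out after the 1800 second timeout, it
may additionally be worth taking a statedump and looking for stale locks:

    gluster volume statedump myvol-private

The dump files land in /var/run/gluster/ by default; grep them for "inodelk".
Note these are general diagnostic steps, not the specific workaround from the
linked thread.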


-Ravi


On 11/02/2018 11:40 PM, mabi wrote:

I tried again to manually run a heal using the "gluster volume heal" command 
because still no files have been healed, and I noticed the following warning in 
the glusterd.log file:

[2018-11-02 18:04:19.454702] I [MSGID: 106533] 
[glusterd-volume-ops.c:938:__glusterd_handle_cli_heal_volume] 0-management: 
Received heal vol req for volume myvol-private
[2018-11-02 18:04:19.457311] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glustershd: 
error returned while attempting to connect to host:(null), port:0

It looks like glustershd can't connect to "host:(null)". Could that be the reason why 
no healing is taking place? If so, why do I see "host:(null)" here, and what needs 
fixing?

This seems to have been happening since I upgraded from 3.12.14 to 4.1.5.

I would really appreciate some help here; I suspect this is an issue with 
GlusterFS 4.1.5.

Thank you in advance for any feedback.


‐‐‐ Original Message ‐‐‐
On Wednesday, October 31, 2018 11:13 AM, mabi wrote:


Hello,

I have a GlusterFS 4.1.5 cluster with 3 nodes (including 1 arbiter) and currently 
have a volume with around 27174 files which are not being healed. The "volume heal 
info" command shows the same ~27k files under the first and second nodes, but 
nothing under the third node (the arbiter).

I already tried running a "volume heal" but none of the files got healed.
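
For completeness, these are the exact commands I used (the volume is named
myvol-private, as the logs below show):

    gluster volume heal myvol-private
    gluster volume heal myvol-private info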

In the glfsheal log file for that particular volume, the only errors I see are a 
few entries like these:

[2018-10-31 10:06:41.524300] E [rpc-clnt.c:184:call_bail] 
0-myvol-private-client-0: bailing out frame type(GlusterFS 4.x v1) 
op(INODELK(29)) xid = 0x108b sent = 2018-10-31 09:36:41.314203. timeout = 1800 
for 127.0.1.1:49152

and a few warnings like these:

[2018-10-31 10:08:12.161498] W [dict.c:671:dict_ref] 
(-->/usr/lib/x86_64-linux-gnu/glusterfs/4.1.5/xlator/cluster/replicate.so(+0x6734a) 
[0x7f2a6dff434a] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x5da84) 
[0x7f2a798e8a84] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_ref+0x58) 
[0x7f2a798a37f8] ) 0-dict: dict is NULL [Invalid argument]

The glustershd.log file shows the following:

[2018-10-31 10:10:52.502453] E [rpc-clnt.c:184:call_bail] 
0-myvol-private-client-0: bailing out frame type(GlusterFS 4.x v1) 
op(INODELK(29)) xid = 0xaa398 sent = 2018-10-31 09:40:50.927816. timeout = 1800 
for 127.0.1.1:49152
[2018-10-31 10:10:52.502502] E [MSGID: 114031] 
[client-rpc-fops_v2.c:1306:client4_0_inodelk_cbk] 0-myvol-private-client-0: 
remote operation failed [Transport endpoint is not connected]

Any idea what could be wrong here?

Regards,
Mabi



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-11-02 Thread mabi
I tried again to manually run a heal using the "gluster volume heal" command 
because still no files have been healed, and I noticed the following warning in 
the glusterd.log file:

[2018-11-02 18:04:19.454702] I [MSGID: 106533] 
[glusterd-volume-ops.c:938:__glusterd_handle_cli_heal_volume] 0-management: 
Received heal vol req for volume myvol-private
[2018-11-02 18:04:19.457311] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glustershd: 
error returned while attempting to connect to host:(null), port:0

It looks like glustershd can't connect to "host:(null)". Could that be the 
reason why no healing is taking place? If so, why do I see "host:(null)" here, 
and what needs fixing?

This seems to have been happening since I upgraded from 3.12.14 to 4.1.5.

I would really appreciate some help here; I suspect this is an issue with 
GlusterFS 4.1.5.

Thank you in advance for any feedback.


‐‐‐ Original Message ‐‐‐
On Wednesday, October 31, 2018 11:13 AM, mabi wrote:

> Hello,
>
> I have a GlusterFS 4.1.5 cluster with 3 nodes (including 1 arbiter) and 
> currently have a volume with around 27174 files which are not being healed. 
> The "volume heal info" command shows the same ~27k files under the first and 
> second nodes, but nothing under the third node (the arbiter).
>
> I already tried running a "volume heal" but none of the files got healed.
>
> In the glfsheal log file for that particular volume, the only errors I see 
> are a few entries like these:
>
> [2018-10-31 10:06:41.524300] E [rpc-clnt.c:184:call_bail] 
> 0-myvol-private-client-0: bailing out frame type(GlusterFS 4.x v1) 
> op(INODELK(29)) xid = 0x108b sent = 2018-10-31 09:36:41.314203. timeout = 
> 1800 for 127.0.1.1:49152
>
> and a few warnings like these:
>
> [2018-10-31 10:08:12.161498] W [dict.c:671:dict_ref] 
> (-->/usr/lib/x86_64-linux-gnu/glusterfs/4.1.5/xlator/cluster/replicate.so(+0x6734a)
>  [0x7f2a6dff434a] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x5da84) 
> [0x7f2a798e8a84] 
> -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_ref+0x58) 
> [0x7f2a798a37f8] ) 0-dict: dict is NULL [Invalid argument]
>
> The glustershd.log file shows the following:
>
> [2018-10-31 10:10:52.502453] E [rpc-clnt.c:184:call_bail] 
> 0-myvol-private-client-0: bailing out frame type(GlusterFS 4.x v1) 
> op(INODELK(29)) xid = 0xaa398 sent = 2018-10-31 09:40:50.927816. timeout = 
> 1800 for 127.0.1.1:49152
> [2018-10-31 10:10:52.502502] E [MSGID: 114031] 
> [client-rpc-fops_v2.c:1306:client4_0_inodelk_cbk] 0-myvol-private-client-0: 
> remote operation failed [Transport endpoint is not connected]
>
> Any idea what could be wrong here?
>
> Regards,
> Mabi


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GCS release 0.2

2018-11-02 Thread John Strunk
Today, we are announcing the availability of GCS (Gluster Container
Storage) 0.2. This is a follow-on to last month’s release of version 0.1.

In addition to various bug fixes and enhancements, highlights include:
- Update of glusterd2 container to 20181102 nightly
- Deploy environment now uses Kubernetes 1.12
- Changes to Gluster pod naming
- Fix for incorrect capacity reporting

== Included components
Glusterd2:
- Image:
docker.io/gluster/glusterd2-nightly@sha256:3f8345fee243154d7c8e32af47b4974c7a4cc7a13f776cb3c1d7f129e391cc6c
- Created: 2018-11-02T14:32:44.280939032Z
Gluster CSI driver:
- Image:
docker.io/gluster/glusterfs-csi-driver@sha256:6c861dbda285c0889e332ebb57d644e4a98e29481f22da294e91f92155c4435f
- Created: 2018-08-30T06:26:35.742876505Z
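
For reference, the pinned images above can be pulled directly by digest, for
example:

    docker pull docker.io/gluster/glusterd2-nightly@sha256:3f8345fee243154d7c8e32af47b4974c7a4cc7a13f776cb3c1d7f129e391cc6c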

To get started with this release, please see the releases page [1] and the
deploy instructions [2].
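
A minimal starting point (see [2] for the authoritative steps):

    git clone https://github.com/gluster/gcs.git
    cd gcs/deploy
    # follow the deployment instructions from [2] here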

Regards,
Team GCS

[1] https://github.com/gluster/gcs/releases
[2] https://github.com/gluster/gcs/tree/master/deploy
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users