Re: [Gluster-users] VolumeOpt Set fails of a freshly created volume

2019-01-30 Thread David Spisla
Hello Gluster Community,

Today I got the same error messages in glusterd.log while setting volume
options on a freshly created volume. See the log entries:

[2019-01-30 10:15:55.597268] I [run.c:242:runner_log]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xdad2a)
[0x7f08ce71ed2a]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xda81c)
[0x7f08ce71e81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
[0x7f08d4bd0575] ) 0-management: Ran script:
/var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
--volname=integration-archive1 -o cluster.lookup-optimize=on
--gd-workdir=/var/lib/glusterd
[2019-01-30 10:15:55.806303] W [socket.c:719:__socket_rwv] 0-management:
readv on 10.10.12.102:24007 failed
(Input/output error)
[2019-01-30 10:15:55.806344] E [socket.c:246:ssl_dump_error_stack]
0-management:   error:140943F2:SSL routines:ssl3_read_bytes:sslv3 alert
unexpected message
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 51 times between [2019-01-30 10:15:51.659656] and
[2019-01-30 10:15:55.635151]
[2019-01-30 10:15:55.806370] I [MSGID: 106004]
[glusterd-handler.c:6430:__glusterd_peer_rpc_notify] 0-management: Peer
 (), in state
, has disconnected from glusterd.
[2019-01-30 10:15:55.806487] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349)
[0x7f08ce668349]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950)
[0x7f08ce671950]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239)
[0x7f08ce724239] ) 0-management: Lock for vol archive1 not held
[2019-01-30 10:15:55.806505] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for archive1
[2019-01-30 10:15:55.806522] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349)
[0x7f08ce668349]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950)
[0x7f08ce671950]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239)
[0x7f08ce724239] ) 0-management: Lock for vol archive2 not held
[2019-01-30 10:15:55.806529] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for archive2
[2019-01-30 10:15:55.806543] W
[glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349)
[0x7f08ce668349]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950)
[0x7f08ce671950]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0239)
[0x7f08ce724239] ) 0-management: Lock for vol gluster_shared_storage not
held
[2019-01-30 10:15:55.806553] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for gluster_shared_storage
[2019-01-30 10:15:55.806576] W
[glusterd-locks.c:806:glusterd_mgmt_v3_unlock]
(-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x24349)
[0x7f08ce668349]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0x2d950)
[0x7f08ce671950]
-->/usr/lib64/glusterfs/5.3/xlator/mgmt/glusterd.so(+0xe0074)
[0x7f08ce724074] ) 0-management: Lock owner mismatch. Lock for vol
integration-archive1 held by 451b6e04-5098-4a35-a312-edbb0d8328a0
[2019-01-30 10:15:55.806584] W [MSGID: 106117]
[glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
released for integration-archive1
[2019-01-30 10:15:55.806846] E [rpc-clnt.c:346:saved_frames_unwind] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x17d)[0x7f08d4b8122d] (-->
/usr/lib64/libgfrpc.so.0(+0xca3d)[0x7f08d4948a3d] (-->
/usr/lib64/libgfrpc.so.0(+0xcb5e)[0x7f08d4948b5e] (-->
/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8b)[0x7f08d494a0bb]
(--> /usr/lib64/libgfrpc.so.0(+0xec68)[0x7f08d494ac68] ) 0-management:
forced unwinding frame type(glusterd mgmt v3) op(--(1)) called at
2019-01-30 10:15:55.804680 (xid=0x1ae)
[2019-01-30 10:15:55.806865] E [MSGID: 106115]
[glusterd-mgmt.c:116:gd_mgmt_v3_collate_errors] 0-management: Locking
failed on fs-lrunning-c2-n2. Please check log file for details.
[2019-01-30 10:15:55.806914] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-30 10:15:55.806898] E [MSGID: 106150]
[glusterd-syncop.c:1904:gd_sync_task_begin] 0-management: Locking Peers
Failed.
The message "E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler" repeated 4 times between [2019-01-30 10:15:55.806914] and
[2019-01-30 10:15:56.322122]
[2019-01-30 10:15:56.322287] E [MSGID: 106529]
[glusterd-volume-ops.c:1916:glusterd_op_stage_delete_volume] 0-management:
Some of the peers are down
[2019-01-30 10:15:56.322319] E [MSGID: 106301]
[glusterd-syncop.c:1308:gd_stage_op_phase] 0-management: Staging of
operation 'Volume Delete' failed on localhost : Some of the peers are down

Again, my peer "fs-lrunning-c2-n2" is not connected.
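Incidentally, the cascade above (the readv/SSL error, then the lock warnings, then the collated locking failure) can be pulled out of glusterd.log mechanically. A minimal sketch in Python; the regex only covers the entry prefix format seen in the log above, it is not a parser for every glusterd message:

```python
import re

# Matches the "[timestamp] LEVEL" prefix glusterd uses, with an optional
# "[MSGID: N]" tag. Not every line carries an MSGID (e.g. the socket.c
# readv warning does not), so that group is optional.
LOG_RE = re.compile(
    r"^\[(?P<ts>[\d-]+ [\d:.]+)\] (?P<level>[IWE]) (?:\[MSGID: (?P<msgid>\d+)\])?"
)

def failure_events(lines):
    """Yield (timestamp, level, msgid) for warning/error entries only."""
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group("level") in ("W", "E"):
            yield m.group("ts"), m.group("level"), m.group("msgid")

# Sample lines copied (abbreviated) from the log above.
sample = [
    "[2019-01-30 10:15:55.806303] W [socket.c:719:__socket_rwv] 0-management: readv failed",
    "[2019-01-30 10:15:55.806505] W [MSGID: 106117] [glusterd-handler.c:6451] Lock not released",
    "[2019-01-30 10:15:55.806865] E [MSGID: 106115] [glusterd-mgmt.c:116] Locking failed",
]
for ts, level, msgid in failure_events(sample):
    print(ts, level, msgid)
```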

Re: [Gluster-users] VolumeOpt Set fails of a freshly created volume

2019-01-16 Thread Atin Mukherjee
On Wed, Jan 16, 2019 at 9:48 PM David Spisla wrote:

> Dear Gluster Community,
>
> I created a replica 4 volume from gluster-node1 on a 4-node cluster with
> SSL/TLS network encryption. While setting the 'cluster.use-compound-fops'
> option, I got the error:
>
> $  volume set: failed: Commit failed on gluster-node2. Please check log
> file for details.
>
> Here is the glusterd.log from gluster-node1:
>
> [2019-01-15 15:18:36.813034] I [run.c:242:runner_log]
> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a)
> [0x7fc24d91cd2a]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c)
> [0x7fc24d91c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
> [0x7fc253dce0b5] ) 0-management: Ran script:
> /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
> --volname=integration-archive1 -o cluster.use-compound-fops=on
> --gd-workdir=/var/lib/glusterd
> [2019-01-15 15:18:36.821193] I [run.c:242:runner_log]
> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xdad2a)
> [0x7fc24d91cd2a]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xda81c)
> [0x7fc24d91c81c] -->/usr/lib64/libglusterfs.so.0(runner_log+0x105)
> [0x7fc253dce0b5] ) 0-management: Ran script:
> /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh
> --volname=integration-archive1 -o cluster.use-compound-fops=on
> --gd-workdir=/var/lib/glusterd
> [2019-01-15 15:18:36.842383] W [socket.c:719:__socket_rwv] 0-management:
> readv on 10.10.12.42:24007 failed (Input/output error)
> [2019-01-15 15:18:36.842415] E [socket.c:246:ssl_dump_error_stack]
> 0-management:   error:140943F2:SSL routines:ssl3_read_bytes:sslv3 alert
> unexpected message
> The message "E [MSGID: 101191]
> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
> handler" repeated 81 times between [2019-01-15 15:18:30.735508] and
> [2019-01-15 15:18:36.808994]
> [2019-01-15 15:18:36.842439] I [MSGID: 106004]
> [glusterd-handler.c:6430:__glusterd_peer_rpc_notify] 0-management: Peer
> <gluster-node2> (<02724bb6-cb34-4ec3-8306-c2950e0acf9b>), in state
> <Peer in Cluster>, has disconnected from glusterd.
>

The above shows a peer disconnect event was received from
gluster-node2. This disconnect might have happened while the commit
operation was in-flight, hence the volume set failed on gluster-node2.
Regarding the SSL error, I'd request Milind to comment.
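Given that failure mode, one practical mitigation is to verify every peer is connected before issuing the set. A hedged sketch: the parser is written against the usual `gluster pool list` column layout (UUID, Hostname, State), and `safe_volume_set` is a hypothetical wrapper, not part of the gluster CLI:

```python
import subprocess

def all_peers_connected(pool_list_output: str) -> bool:
    """Parse `gluster pool list` output and check every peer is Connected.

    Assumed format: a header line, then one row per peer whose last
    column is the state (Connected / Disconnected).
    """
    rows = pool_list_output.strip().splitlines()[1:]  # skip header
    return all(row.split()[-1] == "Connected" for row in rows)

def safe_volume_set(volume: str, key: str, value: str) -> None:
    """Only attempt the set when the whole pool is reachable (sketch)."""
    out = subprocess.run(["gluster", "pool", "list"],
                         capture_output=True, text=True, check=True).stdout
    if not all_peers_connected(out):
        raise RuntimeError("some peers are down; refusing to run volume set")
    subprocess.run(["gluster", "volume", "set", volume, key, value], check=True)
```

This would not cure the underlying SSL disconnect, but it turns a mid-commit failure (with the stale-lock warnings seen above) into an early, clean refusal.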

> [2019-01-15 15:18:36.842638] W
> [glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
> [0x7fc24d866349]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
> [0x7fc24d86f950]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
> [0x7fc24d922239] ) 0-management: Lock for vol archive1 not held
> [2019-01-15 15:18:36.842656] W [MSGID: 106117]
> [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
> released for archive1
> [2019-01-15 15:18:36.842674] W
> [glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
> [0x7fc24d866349]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
> [0x7fc24d86f950]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
> [0x7fc24d922239] ) 0-management: Lock for vol archive2 not held
> [2019-01-15 15:18:36.842680] W [MSGID: 106117]
> [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
> released for archive2
> [2019-01-15 15:18:36.842694] W
> [glusterd-locks.c:795:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
> [0x7fc24d866349]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
> [0x7fc24d86f950]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0239)
> [0x7fc24d922239] ) 0-management: Lock for vol gluster_shared_storage not
> held
> [2019-01-15 15:18:36.842702] W [MSGID: 106117]
> [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
> released for gluster_shared_storage
> [2019-01-15 15:18:36.842719] W
> [glusterd-locks.c:806:glusterd_mgmt_v3_unlock]
> (-->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x24349)
> [0x7fc24d866349]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0x2d950)
> [0x7fc24d86f950]
> -->/usr/lib64/glusterfs/5.2/xlator/mgmt/glusterd.so(+0xe0074)
> [0x7fc24d922074] ) 0-management: Lock owner mismatch. Lock for vol
> integration-archive1 held by ffdaa400-82cc-4ada-8ea7-144bf3714269
> [2019-01-15 15:18:36.842727] W [MSGID: 106117]
> [glusterd-handler.c:6451:__glusterd_peer_rpc_notify] 0-management: Lock not
> released for integration-archive1
> [2019-01-15 15:18:36.842970] E [rpc-clnt.c:346:saved_frames_unwind] (-->
> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x17d)[0x7fc253d7f18d] (-->
> /usr/lib64/libgfrpc.so.0(+0xca3d)[0x7fc253b46a3d] (-->
> /usr/lib64/libgfrpc.so.0(+0xcb5e)[0x7fc253b46b5e] (-->
> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x8b)[0x7fc253b480bb]
> (--> /usr/lib64/libgfrpc.so.0(+0xec68