[Gluster-devel] glusterd.log file - few observations

2018-09-09 Thread Atin Mukherjee
As highlighted in the last maintainers' meeting, I'm seeing some log
entries in the glusterd log file which (a) are informative in one way but
can cause excessive logging and potentially run a user out of disk space,
or (b) might not be errors at all, or could be avoided.
Even though these logs are captured in the glusterd.log file, they don't
necessarily originate from glusterd APIs, so I request all devs to go
through the list below. I will eventually convert this into a BZ to track
it better; this is a heads-up for now.

I believe that, as a practice when working on code changes, we need to
balance out the log entries, i.e. (a) don't log everything and (b) log
only meaningful things.
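
To make (a) and (b) concrete, a minimal sketch of what I mean, using the
gf_msg()/gf_msg_debug() macros from libglusterfs; the message id and the
variables here are placeholders, not actual glusterd code:

    /* Routine, per-operation detail belongs at DEBUG; operators can
     * opt in via the log level instead of every volume operation
     * growing the log file: */
    gf_msg_debug (this->name, 0, "Ran script: %s", cmd_str);

    /* Reserve INFO/WARNING/ERROR for state changes and genuine
     * failures: */
    gf_msg (this->name, GF_LOG_ERROR, errno, GD_MSG_PLACEHOLDER,
            "Failed to start brick %s", brickinfo->path);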

===
[2018-09-10 03:55:19.236387] I [dict.c:2838:dict_get_str_boolean]
(-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_reconnect+0xc2) [0x7ff7a83d0452]
-->/usr/local/lib/glusterfs/4.2dev/rpc-transport/socket.so(+0x65b0)
[0x7ff7a06cf5b0]
-->/usr/local/lib/libglusterfs.so.0(dict_get_str_boolean+0xcf)
[0x7ff7a85fc58f] ) 0-dict: key transport.socket.ignore-enoent, integer type
asked, has string type [Invalid argument] <== seen in volume start
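
For context, dict logs this when a key's stored type doesn't match the
type the getter asks for. A hedged sketch of the pattern (not the actual
socket.so code):

    /* If the producer stores the option as a string... */
    ret = dict_set_str (dict, "transport.socket.ignore-enoent", "true");

    /* ...then dict_get_str_boolean(), which asks for integer data
     * first, logs "integer type asked, has string type" and appears
     * to fall back to string conversion - hence only an INFO entry: */
    ret = dict_get_str_boolean (dict,
                                "transport.socket.ignore-enoent", 0);

    /* Storing the key with a matching type should avoid the log: */
    ret = dict_set_int32 (dict, "transport.socket.ignore-enoent", 1);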

[2018-09-10 03:55:21.583508] I [run.c:241:runner_log]
(-->/usr/local/lib/glusterfs/4.2dev/xlator/mgmt/glusterd.so(+0xd166a)
[0x7ff7a34d766a]
-->/usr/local/lib/glusterfs/4.2dev/xlator/mgmt/glusterd.so(+0xd119c)
[0x7ff7a34d719c] -->/usr/local/lib/libglusterfs.so.0(runner_log+0x105)
[0x7ff7a8651805] ) 0-management: Ran script:
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=test-vol1
--first=yes --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd
<== seen in volume start

[2018-09-10 03:55:13.647675] E [MSGID: 101191]
[event-epoll.c:689:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler <== seen in almost all the volume operations


[2018-09-10 03:57:14.370861] E [MSGID: 106061]
[glusterd-utils.c:10584:glusterd_max_opversion_use_rsp_dict] 0-management:
Maximum supported op-version not set in destination dictionary
<=== seen while running gluster v get all all
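
If the key being absent from the response dictionary is a legitimate
state here rather than a failure, something along these lines (the key
name is illustrative, I haven't checked the actual one) would keep it
out of the E stream:

    ret = dict_get_int32 (rsp_dict, "max-opversion", &max_opversion);
    if (ret)
            /* not every response carries this key; DEBUG, not ERROR */
            gf_msg_debug (this->name, 0, "Maximum supported op-version "
                          "not set in destination dictionary");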

[2018-09-10 03:58:24.307305] W [MSGID: 101095]
[xlator.c:181:xlator_volopt_dynload] 0-xlator:
/usr/local/lib/glusterfs/4.2dev/xlator/features/cloudsync.so: cannot open
shared object file: No such file or directory
[2018-09-10 03:58:24.307322] E [MSGID: 106434]
[glusterd-utils.c:13301:glusterd_get_value_for_vme_entry] 0-management:
xlator_volopt_dynload error (-1)
[2018-09-10 03:58:24.307336] W [MSGID: 101095]
[xlator.c:181:xlator_volopt_dynload] 0-xlator:
/usr/local/lib/glusterfs/4.2dev/xlator/features/cloudsync.so: cannot open
shared object file: No such file or directory
[2018-09-10 03:58:24.307340] E [MSGID: 106434]
[glusterd-utils.c:13301:glusterd_get_value_for_vme_entry] 0-management:
xlator_volopt_dynload error (-1)
[2018-09-10 03:58:24.307350] W [MSGID: 101095]
[xlator.c:181:xlator_volopt_dynload] 0-xlator:
/usr/local/lib/glusterfs/4.2dev/xlator/features/cloudsync.so: cannot open
shared object file: No such file or directory
[2018-09-10 03:58:24.307355] E [MSGID: 106434]
[glusterd-utils.c:13301:glusterd_get_value_for_vme_entry] 0-management:
xlator_volopt_dynload error (-1)
[2018-09-10 03:58:24.307365] W [MSGID: 101095]
[xlator.c:181:xlator_volopt_dynload] 0-xlator:
/usr/local/lib/glusterfs/4.2dev/xlator/features/cloudsync.so: cannot open
shared object file: No such file or directory
[2018-09-10 03:58:24.307369] E [MSGID: 106434]
[glusterd-utils.c:13301:glusterd_get_value_for_vme_entry] 0-management:
xlator_volopt_dynload error (-1)
[2018-09-10 03:58:24.307378] W [MSGID: 101095]
[xlator.c:181:xlator_volopt_dynload] 0-xlator:
/usr/local/lib/glusterfs/4.2dev/xlator/features/cloudsync.so: cannot open
shared object file: No such file or directory

<=== seen while running gluster v get <volname> all; the cloudsync
xlator seems to be the culprit here.
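
Since the same missing .so gets reported once per option in a single CLI
run, one way to tame it is a warn-once guard; the names below are
illustrative, and a real fix might rather rate-limit inside
xlator_volopt_dynload() itself:

    /* warn only on the first dlopen() failure for this xlator */
    static gf_boolean_t cloudsync_dlopen_warned = _gf_false;

    if (!handle) {  /* dlopen() failed */
            if (!cloudsync_dlopen_warned) {
                    gf_msg ("xlator", GF_LOG_WARNING, 0,
                            LG_MSG_DLOPEN_FAILED, "%s: %s", so_path,
                            dlerror ());
                    cloudsync_dlopen_warned = _gf_true;
            }
            goto out;
    }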


[2018-09-10 05:00:16.082968] I [MSGID: 101097]
[xlator.c:334:xlator_dynload_newway] 0-xlator: dlsym(xlator_api) on
/usr/local/lib/glusterfs/4.2dev/xlator/storage/posix.so: undefined symbol:
xlator_api. Fall back to old symbols
[2018-09-10 05:00:16.083388] I [MSGID: 101097]
[xlator.c:334:xlator_dynload_newway] 0-xlator: dlsym(xlator_api) on
/usr/local/lib/glusterfs/4.2dev/xlator/features/trash.so: undefined symbol:
xlator_api. Fall back to old symbols
[2018-09-10 05:00:16.084201] I [MSGID: 101097]
[xlator.c:334:xlator_dynload_newway] 0-xlator: dlsym(xlator_api) on
/usr/local/lib/glusterfs/4.2dev/xlator/features/changetimerecorder.so:
undefined symbol: xlator_api. Fall back to old symbols
[2018-09-10 05:00:16.084597] I [MSGID: 101097]
[xlator.c:334:xlator_dynload_newway] 0-xlator: dlsym(xlator_api) on
/usr/local/lib/glusterfs/4.2dev/xlator/features/changelog.so: undefined
symbol: xlator_api. Fall back to old symbols
[2018-09-10 05:00:16.084917] I [MSGID: 101097]
[xlator.c:334:xlator_dynload_newway] 0-xlator: dlsym(xlator_api) on

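For anyone unfamiliar with the message above: dlsym(xlator_api) is the
probe for the new-style xlator entry point, and old-style xlators
legitimately lack it, which is why logging the fallback at INFO on every
load is questionable. A self-contained approximation of the probe (plain
POSIX, not the actual xlator.c code); build with cc probe.c -ldl:

    #include <dlfcn.h>
    #include <stdio.h>

    int main (int argc, char **argv)
    {
            if (argc < 2) {
                    fprintf (stderr, "usage: %s /path/to/xlator.so\n",
                             argv[0]);
                    return 1;
            }
            void *handle = dlopen (argv[1], RTLD_NOW | RTLD_LOCAL);
            if (!handle) {
                    fprintf (stderr, "dlopen: %s\n", dlerror ());
                    return 1;
            }
            /* a missing xlator_api is an expected state for old-style
             * xlators, not a failure worth an INFO entry per load */
            if (dlsym (handle, "xlator_api"))
                    printf ("new-style xlator (xlator_api found)\n");
            else if (dlsym (handle, "init"))
                    printf ("old-style xlator (fall back to init/fini)\n");
            else
                    fprintf (stderr, "neither xlator_api nor init "
                             "found: %s\n", dlerror ());
            dlclose (handle);
            return 0;
    }
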
[Gluster-devel] Weekly Untriaged Bugs

2018-09-09 Thread jenkins
[...truncated 6 lines...]
https://bugzilla.redhat.com/1620116 / cli: Gluster cli doesn't show an error 
message when creating a disperse volume using IP addresses
https://bugzilla.redhat.com/1624006 / core: /var/run/gluster/metrics/ wasn't 
created automatically
https://bugzilla.redhat.com/1623107 / fuse: FUSE client's memory leak
https://bugzilla.redhat.com/1626085 / fuse: "glusterfs --process-name fuse" 
crashes and leads to "Transport endpoint is not connected"
https://bugzilla.redhat.com/1626043 / geo-replication: Geo-replication setup 
fails, glusterd crashes when replication is set up
https://bugzilla.redhat.com/1620377 / project-infrastructure: Coverity scan 
setup for gluster-block and related projects
https://bugzilla.redhat.com/1625501 / project-infrastructure: gd2 smoke tests 
fail with cannot create directory ‘/var/lib/glusterd’: Permission denied
https://bugzilla.redhat.com/1617903 / project-infrastructure: Geo-rep tests 
fails with Permission denied even though keys are distributed
https://bugzilla.redhat.com/1623596 / project-infrastructure: Git plugin might 
be suffering from memory leak
https://bugzilla.redhat.com/1626453 / project-infrastructure: 
glusterd2-containers nightly job failing
https://bugzilla.redhat.com/1623452 / project-infrastructure: Mock does not 
exit cleanly when a job is aborted
https://bugzilla.redhat.com/1619013 / project-infrastructure: Rawhide builds 
are failing again
https://bugzilla.redhat.com/1620358 / project-infrastructure: smoke job failures
https://bugzilla.redhat.com/1618407 / project-infrastructure: Smoke jobs fail 
when commit messages have commented BZ or other meta information
https://bugzilla.redhat.com/1622814 / rpc: kvm lock problem
https://bugzilla.redhat.com/1623618 / rpc: mgmt/glusterd: ABRT from Fedora 
(CentOS) in glusterd_rpcsvc_notify()
https://bugzilla.redhat.com/1626610 / snapshot: [USS]: Change gf_log to gf_msg
https://bugzilla.redhat.com/1618915 / tests: Spurious failure in 
tests/basic/ec/ec-1468261.t
https://bugzilla.redhat.com/1615307 / transport: Error disabling sockopt 
IPV6_V6ONLY
https://bugzilla.redhat.com/1620580 / unclassified: Deleted a volume and 
created a new volume with similar but not the same name. The kubernetes pod 
still keeps on running and doesn't crash. Still possible to write to gluster 
mount
https://bugzilla.redhat.com/1618932 / unclassified: dht-selfheal.c: Directory 
selfheal failed
[...truncated 2 lines...]

_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel