Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-05-05 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi,
Sorry, I have been busy with other issues these days; could you help submit my 
patch for review? It is based on the glusterfs 3.12.15 code. Even with this 
patch, a memory leak still exists: the leak-detection tool points at 
ssl_accept, and I am not sure whether the leak is in the OpenSSL library itself 
or in improper use of the SSL interfaces.
--- a/rpc/rpc-transport/socket/src/socket.c
+++ b/rpc/rpc-transport/socket/src/socket.c
@@ -1019,7 +1019,16 @@ static void __socket_reset(rpc_transport_t *this) {
   memset(&priv->incoming, 0, sizeof(priv->incoming));
   event_unregister_close(this->ctx->event_pool, priv->sock, priv->idx);
-
+  if (priv->use_ssl && priv->ssl_ssl) {
+    gf_log(this->name, GF_LOG_INFO,
+           "clear and reset for socket(%d), free ssl ",
+           priv->sock);
+    SSL_shutdown(priv->ssl_ssl);
+    SSL_clear(priv->ssl_ssl);
+    SSL_free(priv->ssl_ssl);
+    priv->ssl_ssl = NULL;
+  }
   priv->sock = -1;
   priv->idx = -1;
   priv->connected = -1;
@@ -4238,6 +4250,16 @@ void fini(rpc_transport_t *this) {
 pthread_mutex_destroy(&priv->out_lock);
 pthread_mutex_destroy(&priv->cond_lock);
 pthread_cond_destroy(&priv->cond);
+  if (priv->use_ssl && priv->ssl_ssl) {
+    gf_log(this->name, GF_LOG_INFO,
+           "clear and reset for socket(%d), free ssl ",
+           priv->sock);
+    SSL_shutdown(priv->ssl_ssl);
+    SSL_clear(priv->ssl_ssl);
+    SSL_free(priv->ssl_ssl);
+    priv->ssl_ssl = NULL;
+  }
 if (priv->ssl_private_key) {
   GF_FREE(priv->ssl_private_key);
 }


From: Amar Tumballi Suryanarayan 
Sent: Wednesday, May 01, 2019 8:43 PM
To: Zhou, Cynthia (NSB - CN/Hangzhou) 
Cc: Milind Changire ; gluster-devel@gluster.org
Subject: Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

Hi Cynthia Zhou,

Can you post the patch which fixes the issue of the missing free? We will 
continue to investigate the leak further, but would really appreciate getting 
the patch you have already worked on landed into upstream master.

-Amar

On Mon, Apr 22, 2019 at 1:38 PM Zhou, Cynthia (NSB - CN/Hangzhou) 
<cynthia.z...@nokia-sbell.com> wrote:
Ok, I am clear now.
I've added SSL_free in the socket reset and socket finish functions. The 
glusterfsd memory leak is much smaller now, but it is still leaking, and I 
cannot find anything else in the source code.
Could you help check whether this issue exists in your environment? If not, I 
may try merging your patch.
Steps to reproduce:

1>   while true; do gluster v heal <vol-name> info; done

2>   Check that volume's glusterfsd memory usage; it is obviously increasing.
cynthia

From: Milind Changire <mchan...@redhat.com>
Sent: Monday, April 22, 2019 2:36 PM
To: Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.z...@nokia-sbell.com>
Cc: Atin Mukherjee <amukh...@redhat.com>; gluster-devel@gluster.org
Subject: Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

According to the BIO_new_socket() man page:

If the close flag is set then the socket is shut down and closed when the BIO 
is freed.

For Gluster to have more control over the socket shutdown, the BIO_NOCLOSE flag 
is set. Otherwise, SSL takes control of socket shutdown whenever BIO is freed.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


--
Amar Tumballi (amarts)

[Gluster-devel] Weekly Untriaged Bugs

2019-05-05 Thread jenkins
[...truncated 6 lines...]
https://bugzilla.redhat.com/1702316 / core: Cannot upgrade 5.x volume to 6.1 
because of unused 'crypt' and 'bd' xlators
https://bugzilla.redhat.com/1700295 / core: The data couldn't be flushed 
immediately even with O_SYNC in glfs_create or with glfs_fsync/glfs_fdatasync 
after glfs_write.
https://bugzilla.redhat.com/1698861 / disperse: Renaming a directory when 2 
bricks of multiple disperse subvols are down leaves both old and new dirs on 
the bricks.
https://bugzilla.redhat.com/1697293 / distribute: DHT: print hash and layout 
values in hexadecimal format in the logs
https://bugzilla.redhat.com/1703322 / doc: Need to document about 
fips-mode-rchecksum in gluster-7 release notes.
https://bugzilla.redhat.com/1702043 / fuse: Newly created files are 
inaccessible via FUSE
https://bugzilla.redhat.com/1703007 / glusterd: The telnet or something would 
cause high memory usage for glusterd & glusterfsd
https://bugzilla.redhat.com/1705351 / HDFS: glusterfsd crash after days of 
running
https://bugzilla.redhat.com/1703433 / project-infrastructure: gluster-block: 
setup GCOV & LCOV job
https://bugzilla.redhat.com/1703435 / project-infrastructure: gluster-block: 
Upstream Jenkins job which get triggered at PR level
https://bugzilla.redhat.com/1703329 / project-infrastructure: [gluster-infra]: 
Please create repo for plus one scale work
https://bugzilla.redhat.com/1699309 / snapshot: Gluster snapshot fails with 
systemd autmounted bricks
https://bugzilla.redhat.com/1702289 / tiering: Promotion failed for 
a0afd3e3-0109-49b7-9b74-ba77bf653aba.11229
https://bugzilla.redhat.com/1697812 / website: mention a pointer to all the 
mailing lists available under glusterfs 
project(https://www.gluster.org/community/)
[...truncated 2 lines...]
