Re: [Gluster-devel] gluster-block v0.4 is alive!

2019-05-06 Thread Niels de Vos
On Thu, May 02, 2019 at 11:04:41PM +0530, Prasanna Kalever wrote: > Hello Gluster folks, > > Gluster-block team is happy to announce the v0.4 release [1]. > > This is the new stable version of gluster-block, lots of new and > exciting features and interesting bug fixes are made available as part

Re: [Gluster-devel] [Gluster-users] Proposing to bring previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-05-06 Thread Jiffin Tony Thottan
Hi On 04/05/19 12:04 PM, Strahil wrote: Hi Jiffin, No vendor will support your corosync/pacemaker stack if you do not have proper fencing. As Gluster is already a cluster of its own, it makes sense to control everything from there. Best Regards, Yeah I agree with your point. What I meant

[Gluster-devel] New in GlusterFS

2019-05-06 Thread Rajib Hossen
Hello all, I am new to glusterfs development and would like to contribute to the Erasure Coding part of glusterfs. I have already studied non-systematic codes and their theory. Now I want to understand how the erasure-coding read/write path works in the code. Can you please give me any documentation that'll help to
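[Editorial aside: as a rough orientation while documentation pointers arrive, the glusterfs erasure-coding translator lives, as far as I know, under xlators/cluster/ec and uses a non-systematic Reed-Solomon style code. The toy C sketch below is NOT that implementation; it only illustrates the general read/write shape with hypothetical names, using K data fragments plus a single XOR parity fragment: a write encodes a stripe into fragments, and a degraded read rebuilds one missing fragment from the survivors.]

    /*
     * Toy illustration only: K data fragments + one XOR parity fragment.
     * Not the glusterfs EC algorithm; just shows the encode/rebuild shape.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define K         4      /* data fragments per stripe     */
    #define FRAG_SIZE 8      /* bytes per fragment (toy size) */

    /* "Write" path: split the stripe into K fragments, XOR them into parity. */
    static void encode(const unsigned char *stripe,
                       unsigned char frag[K][FRAG_SIZE],
                       unsigned char parity[FRAG_SIZE])
    {
        memset(parity, 0, FRAG_SIZE);
        for (int i = 0; i < K; i++) {
            memcpy(frag[i], stripe + i * FRAG_SIZE, FRAG_SIZE);
            for (int b = 0; b < FRAG_SIZE; b++)
                parity[b] ^= frag[i][b];
        }
    }

    /* "Degraded read" path: rebuild fragment 'missing' from parity + others. */
    static void rebuild(unsigned char frag[K][FRAG_SIZE],
                        const unsigned char parity[FRAG_SIZE],
                        int missing)
    {
        memcpy(frag[missing], parity, FRAG_SIZE);
        for (int i = 0; i < K; i++) {
            if (i == missing)
                continue;
            for (int b = 0; b < FRAG_SIZE; b++)
                frag[missing][b] ^= frag[i][b];
        }
    }

    int main(void)
    {
        unsigned char stripe[K * FRAG_SIZE] = "example stripe of user data....";
        unsigned char frag[K][FRAG_SIZE], parity[FRAG_SIZE];

        encode(stripe, frag, parity);      /* write                    */
        memset(frag[2], 0, FRAG_SIZE);     /* simulate one lost brick  */
        rebuild(frag, parity, 2);          /* recover the lost piece   */

        printf("recovered: %.8s\n", (const char *)frag[2]);
        return 0;
    }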

Re: [Gluster-devel] [Gluster-users] gluster-block v0.4 is alive!

2019-05-06 Thread Amar Tumballi Suryanarayan
On Thu, May 2, 2019 at 1:35 PM Prasanna Kalever wrote: > Hello Gluster folks, > > Gluster-block team is happy to announce the v0.4 release [1]. > > This is the new stable version of gluster-block, lots of new and > exciting features and interesting bug fixes are made available as part > of this

Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-05-06 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi, From our tests both valgrind and libleak blame ssl3_accept. ///from valgrind attached to glusterfsd/// ==16673== 198,720 bytes in 12 blocks are definitely lost in loss record 1,114 of 1,123 ==16673==    at 0x4C2EB7B: malloc
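[Editorial aside: for context on what a "definitely lost" block under ssl3_accept usually means, the minimal sketch below (not the glusterfs socket/RPC transport code; the certificate paths and helper names are assumptions) shows the ownership pattern involved. Memory allocated during the handshake belongs to the per-connection SSL object, so every accepted connection has to end with SSL_shutdown and SSL_free, and the shared SSL_CTX must be released once with SSL_CTX_free; otherwise each reconnect leaks handshake buffers.]

    /*
     * Hedged sketch (OpenSSL 1.1+), not glusterfs code: per-connection SSL
     * objects own the memory allocated inside SSL_accept/ssl3_accept.
     * Error handling is elided for brevity.
     */
    #include <openssl/ssl.h>

    static SSL_CTX *make_ctx(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
        if (!ctx)
            return NULL;
        /* Paths below are placeholders, not necessarily what glusterfsd uses. */
        SSL_CTX_use_certificate_file(ctx, "/etc/ssl/glusterfs.pem", SSL_FILETYPE_PEM);
        SSL_CTX_use_PrivateKey_file(ctx, "/etc/ssl/glusterfs.key", SSL_FILETYPE_PEM);
        return ctx;
    }

    /* Serve one accepted TCP socket 'fd' over TLS, then release all state. */
    static void serve_one(SSL_CTX *ctx, int fd)
    {
        SSL *ssl = SSL_new(ctx);           /* per-connection state          */
        if (!ssl)
            return;
        SSL_set_fd(ssl, fd);

        if (SSL_accept(ssl) == 1) {        /* handshake allocates buffers   */
            char buf[256];
            int n = SSL_read(ssl, buf, sizeof(buf));
            if (n > 0)
                SSL_write(ssl, buf, n);
        }

        /*
         * Without these two calls, the handshake allocations that valgrind
         * reports as "definitely lost ... ssl3_accept" pile up on every
         * reconnect.  The shared ctx itself needs one SSL_CTX_free(ctx)
         * at process/transport teardown.
         */
        SSL_shutdown(ssl);
        SSL_free(ssl);
    }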