[Gluster-devel] Just a thought: a better way to rebuild replicas when some bricks go down, rather than replace-brick

2017-05-26 Thread Jaden Liang
down, it can just modify the storage graphs of the files which lost a replica, and then a rebuild can be run instead of replace-brick operations. Just a thought, any suggestion would be great! Best regards, Jaden Liang 5/25/2017
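
For contrast, the conventional recovery path the post proposes to avoid is the replace-brick flow; a minimal sketch, with hypothetical volume and brick names:

    # Conventional recovery: swap the dead brick for a new one and let self-heal rebuild it
    gluster volume replace-brick myvol server3:/data/brick1 server4:/data/brick1 commit force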

[Gluster-devel] Under what circumstances will the changelog trusted.afr.xxx of a file become 0xFFFFFFFF

2014-11-20 Thread Jaden Liang
Hi all, I have a glusterfs-3.4.5 build with a 6 x 2 Distributed-Replicate volume for KVM storage, and found that one of the files is not in a consistent state. I checked the extended attributes of every replica file on the bricks as below: # file:
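
Those attributes can be dumped with getfattr; a minimal sketch, assuming a hypothetical brick path and image file:

    # Dump all trusted.* xattrs of one replica in hex; the trusted.afr.<volume>-client-N
    # entries are the AFR changelog counters discussed in this thread.
    getfattr -d -m . -e hex /data/brick1/images/vm-disk.img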

Re: [Gluster-devel] [Gluster-users] glusterfs crash caused by liblvm2app.so with BD xlator

2014-11-10 Thread Jaden Liang
On Monday, November 10, 2014, Vijay Bellur vbel...@redhat.com wrote: On 11/08/2014 03:50 PM, Jaden Liang wrote: Hi all, We are testing the BD xlator to verify KVM running with gluster. After some simple tests, we encountered a coredump of glusterfs caused by liblvm2app.so. Hope someone

[Gluster-devel] [Gluster-users] glusterfs crash caused by liblvm2app.so with BD xlator

2014-11-08 Thread Jaden Liang
Hi all, We are testing the BD xlator to verify KVM running with gluster. After some simple tests, we encountered a coredump of glusterfs caused by liblvm2app.so. Hope someone here might give some advice about this issue. We have been debugging for some time, and found that this coredump is triggered by a
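
A first look at such a coredump usually starts in gdb; a minimal sketch, with the binary and core file paths as assumptions:

    # Open the core against the crashing binary and print the full backtrace
    gdb /usr/sbin/glusterfs /var/crash/core.glusterfs.12345
    (gdb) bt full        # backtrace with locals, to see where liblvm2app.so enters the stack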

Re: [Gluster-devel] Any review is appreciated. The reason gluster server_connection_cleanup is unclean: file flocks leak under frequent network disconnection

2014-09-20 Thread Jaden Liang
till then. Cheers, Vijay On 09/19/2014 12:44 PM, Jaden Liang wrote: Hi all, Here is a patch for this unclean file-flock disconnect issue in gluster-3.4.5. I am a total newcomer to the gluster development workflow, and am still trying to understand how to submit this patch to Gerrit. So I
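
For reference, glusterfs patches are typically pushed to Gerrit through the rfc.sh helper in the source tree; a minimal sketch, with a hypothetical branch name and commit message:

    # Commit the fix with a Signed-off-by line, then push it for review
    git checkout -b flock-cleanup-fix
    git commit -s -a -m 'protocol/server: release file locks on unclean disconnect'
    ./rfc.sh    # glusterfs's wrapper script that pushes the change to Gerrit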

Re: [Gluster-devel] Any review is appreciated. The reason gluster server_connection_cleanup is unclean: file flocks leak under frequent network disconnection

2014-09-19 Thread Jaden Liang
+in process_uuid, starts with 0, +increases once per new connection */ } clnt_conf_t; typedef struct _client_fd_ctx { On Wednesday, September 17, 2014, Jaden Liang jaden1...@gmail.com wrote: Hi all, After several days of tracking, we finally pinpointed the reason

[Gluster-devel] Any review is appreciated. The reason gluster server_connection_cleanup is unclean: file flocks leak under frequent network disconnection

2014-09-17 Thread Jaden Liang
Hi all, After several days of tracking, we finally pinpointed why glusterfs uncleanly detaches file flocks under frequent network disconnection. We are now working on a patch to submit, and here are the details of the issue. Any suggestions will be appreciated! First of all, as I mentioned in
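
Lingering locks of this kind can be observed from the server side with a statedump; a minimal sketch, assuming a hypothetical volume name and the default dump directory:

    # Ask the bricks to dump their state, then look for lock entries left behind
    gluster volume statedump myvol
    grep -i posixlk /var/run/gluster/*.dump.*   # leaked flocks show owners from long-gone clients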

Re: [Gluster-devel] [Gluster-users] Regarding the write performance of a replica 1 volume on 1Gbps Ethernet: getting about 50MB/s while writing a single file.

2014-09-10 Thread Jaden Liang
SATA disk spinning at 7200rpm reaches around 115MBps). Can you please explain which type of bricks you have on each server node? I'll try to emulate your setup and test it. Thank you! On 04/09/14 at 03:20, Jaden Liang wrote: Hi Ramon, I am running on the gluster FUSE client
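
The brick layout being asked about can be listed per volume; a minimal sketch, with a hypothetical volume name:

    # Shows the volume type, replica count, and which brick sits on which server
    gluster volume info myvol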

Re: [Gluster-devel] [Gluster-users] Regarding the write performance of a replica 1 volume on 1Gbps Ethernet: getting about 50MB/s while writing a single file.

2014-09-03 Thread Jaden Liang
, write data goes simultaneously to both server nodes, using half of the client's 1GbE port bandwidth for each copy, because replication is done on the client side; that results in a write speed of around 50MBps (60MBps). I hope this helps. On 03/09/14 at 07:02, Jaden Liang wrote: Hi all, We did some
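
The arithmetic is easy to reproduce with a direct-I/O write through the FUSE mount; a minimal sketch, with the mount point as an assumption:

    # 1GbE carries roughly 117 MB/s of payload; with client-side replica 2 the same
    # bytes are sent twice, so a single file stream sees about 117/2 ~= 58 MB/s at best.
    dd if=/dev/zero of=/mnt/glustervol/testfile bs=1M count=2048 oflag=direct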

[Gluster-devel] [Gluster-users] Regarding the write performance of a replica 1 volume on 1Gbps Ethernet: getting about 50MB/s while writing a single file.

2014-09-02 Thread Jaden Liang
. Now we are digging into the rpc mechanism in glusterfs. Still, we think this issue may have been encountered by the gluster-devel or gluster-users teams; therefore, any suggestions would be appreciated. Has anyone seen such an issue? Best regards, Jaden Liang 9/2/2014

Re: [Gluster-devel] About a file descriptor leak in the glusterfsd daemon after network failure

2014-08-25 Thread Jaden Liang
not released even after stopping the process using the file. Why does glusterfsd open a new fd instead of reusing the original reopened fd? Does glusterfsd have any kind of mechanism to reclaim such fds? 2014-08-20 21:54 GMT+08:00 Niels de Vos nde...@redhat.com: On Wed, Aug 20, 2014 at 07:16:16PM +0800, Jaden Liang wrote
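
A leak like this can be confirmed from /proc on the brick server; a minimal sketch, assuming a single glusterfsd brick process:

    # Count the fds the brick process holds and spot ones pointing at unlinked files
    pid=$(pgrep -o glusterfsd)
    ls /proc/$pid/fd | wc -l
    ls -l /proc/$pid/fd | grep '(deleted)'   # fds kept open on files already removed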

[Gluster-devel] About a file descriptor leak in the glusterfsd daemon after network failure

2014-08-20 Thread Jaden Liang
to look for some help here. Here are our questions: 1. Has this issue been solved, or is it a known issue? 2. Does anyone know the file descriptor maintenance logic in glusterfsd (server-side)? When will an fd be closed or held? Thank you very much. -- Best regards, Jaden Liang