down, it can just modify the storage graphs
of the files which lost a replica; then a rebuild can be run with replace-brick
operations.
Just a thought; any suggestions would be great!
Best regards,
Jaden Liang
5/25/2017
Hi all,
I have a glusterfs-3.4.5 build with a 6 x 2 Distributed-Replicate volume for
KVM storage, and found that one of the files is not in a consistent state. I
checked the extended attributes of every replica file on the bricks as below:
# file:
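In case it is useful to others, here is a rough way to pull those AFR
changelog xattrs straight off a brick; a minimal C sketch (the brick path
below is hypothetical, and the trusted.* namespace needs root). Running
getfattr -d -m . -e hex FILE on the brick prints the same information.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
    /* hypothetical brick-side path; pass your own as argv[1] */
    const char *path = argc > 1 ? argv[1] : "/data/brick1/vm.img";
    char names[4096];
    ssize_t len = llistxattr(path, names, sizeof(names));
    if (len < 0) { perror("llistxattr"); return 1; }

    /* walk the NUL-separated name list, dump trusted.afr.* in hex */
    for (char *n = names; n < names + len; n += strlen(n) + 1) {
        if (strncmp(n, "trusted.afr.", 12))
            continue;
        unsigned char val[64];
        ssize_t vlen = lgetxattr(path, n, val, sizeof(val));
        if (vlen < 0) { perror(n); continue; }
        printf("%s=0x", n);
        for (ssize_t i = 0; i < vlen; i++)
            printf("%02x", val[i]);
        printf("\n");
    }
    return 0;
}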
On Monday, November 10, 2014, Vijay Bellur vbel...@redhat.com wrote:
On 11/08/2014 03:50 PM, Jaden Liang wrote:
Hi all,
We are testing the BD xlator to verify KVM running with gluster. After some
simple tests, we encountered a coredump of glusterfs caused by liblvm2app.so.
Hope someone here might give some advice about this issue.
We have been debugging for some time, and found out this coredump is triggered by a
till then.
Cheers,
Vijay
On 09/19/2014 12:44 PM, Jaden Liang wrote:
Hi all,
Here is a patch for the unclean file-flock disconnect issue in
gluster-3.4.5.
I am totally new to the gluster development workflow, and still
trying to
understand how to submit this patch to Gerrit. So I
+                             in process_uuid, start with 0,
+                             increase once a new connection */
 } clnt_conf_t;

 typedef struct _client_fd_ctx {
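The hunk above is cut off, so here is a sketch of the idea its comment seems
to describe (the names and the uuid format below are my assumptions, not the
actual patch): fold a per-process connection counter into the process_uuid
sent to the server, so a reconnect looks like a new connection and the stale
locks of the old one can be released.

#include <inttypes.h>
#include <stdio.h>

/* hypothetical stand-in for the counter added to clnt_conf_t */
typedef struct {
    uint64_t setvol_count; /* starts with 0, +1 per new connection */
} conn_gen_t;

static void build_process_uuid(conn_gen_t *gen, const char *base,
                               char *out, size_t outlen)
{
    gen->setvol_count++;
    /* the server sees a different process_uuid after every reconnect */
    snprintf(out, outlen, "%s-%" PRIu64, base, gen->setvol_count);
}

int main(void)
{
    conn_gen_t gen = { 0 };
    char uuid[128];
    build_process_uuid(&gen, "client-host-1234", uuid, sizeof(uuid));
    puts(uuid); /* client-host-1234-1 */
    build_process_uuid(&gen, "client-host-1234", uuid, sizeof(uuid));
    puts(uuid); /* client-host-1234-2 */
    return 0;
}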
On Wednesday, September 17, 2014, Jaden Liang jaden1...@gmail.com wrote:
Hi all,
After several days of tracking, we finally pinpointed the reason glusterfs
uncleanly detaches file flocks under frequent network disconnections. We are
now working on a patch to submit. Here are the details of this issue. Any
suggestions will be appreciated!
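To make the failure mode concrete, here is a minimal C sketch (the mount
path is hypothetical, not from the actual report) that takes the kind of
POSIX lock involved: run it against a file on the FUSE mount and hold the
lock; per the report, after an unclean network disconnect a lock like this
can be left behind on the server side.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical file on the glusterfs FUSE mount */
    int fd = open("/mnt/gv0/lockfile", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock fl = {
        .l_type   = F_WRLCK,   /* exclusive write lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 = lock the whole file */
    };
    if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("fcntl"); return 1; }

    pause(); /* hold the lock; drop the network to reproduce */
    return 0;
}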
First of all, as I mentioned in
a SATA disk spinning at
7200rpm reaches around 115MBps).
Can you please explain which type of bricks you have on each server
node?
I'll try to emulate your setup and test it.
Thank you!
On 04/09/14 at 03:20, Jaden Liang wrote:
Hi Ramon,
I am running on the gluster FUSE client.
Write data goes simultaneously to both server nodes, using half the bandwidth
of the client's 1GbE port for each, because replication is done on the client
side; that results in a writing speed around 50-60MBps (roughly half of the
~115MBps a 1GbE link can carry).
I hope this helps.
On 03/09/14 at 07:02, Jaden Liang wrote:
Hi all,
We did some
Now we are digging into the rpc mechanism in glusterfs. Still, we think this
issue may have been encountered by the gluster-devel or gluster-users teams.
Therefore, any suggestions would be appreciated. Does anyone know of such an
issue?
Best regards,
Jaden Liang
9/2/2014
not released even after stopping the file
process.
Why does glusterfsd open a new fd instead of reusing the original reopened
fd?
Does glusterfsd have any kind of mechanism to retrieve such fds?
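One rough way to investigate this empirically (my own diagnostic sketch, not
gluster code): walk /proc/PID/fd of the brick's glusterfsd and see which
brick files are still held open after the client process is gone.

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <glusterfsd-pid>\n", argv[0]);
        return 1;
    }
    char dirpath[64];
    snprintf(dirpath, sizeof(dirpath), "/proc/%s/fd", argv[1]);

    DIR *dir = opendir(dirpath);
    if (!dir) { perror("opendir"); return 1; }

    /* each entry is an fd number; readlink shows what it points at */
    struct dirent *de;
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] == '.')
            continue;
        char link[PATH_MAX], target[PATH_MAX];
        snprintf(link, sizeof(link), "%s/%s", dirpath, de->d_name);
        ssize_t n = readlink(link, target, sizeof(target) - 1);
        if (n < 0)
            continue;
        target[n] = '\0';
        printf("fd %s -> %s\n", de->d_name, target);
    }
    closedir(dir);
    return 0;
}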
2014-08-20 21:54 GMT+08:00 Niels de Vos nde...@redhat.com:
On Wed, Aug 20, 2014 at 07:16:16PM +0800, Jaden Liang wrote:
to look for some help here. Here are our questions:
1. Has this issue been solved? Or is it a known issue?
2. Does anyone know the file descriptor maintenance logic in
glusterfsd (server-side)? When will an fd be closed or held?
Thank you very much.
--
Best regards,
Jaden Liang