Re: [Gluster-users] Help

2020-04-06 Thread Suvendu Mitra
Unsubscribe On Mon, Apr 6, 2020 at 6:10 PM Oskar Pienkos wrote: > Unsubscribe > > Sent from Outlook > > -- > From: gluster-users-boun...@gluster.org < > gluster-users-boun...@gluster.org> on behalf of > gluster-users-requ...@gluster.org

[Gluster-users] Help

2020-04-06 Thread Oskar Pienkos
Unsubscribe Sent from Outlook From: gluster-users-boun...@gluster.org on behalf of gluster-users-requ...@gluster.org Sent: April 6, 2020 5:00 AM To: gluster-users@gluster.org Subject: Gluster-users Digest, Vol 144, Issue 6 Send

Re: [Gluster-users] Help: gluster-block

2019-04-10 Thread Karim Roumani
Actually we have a question. We did two tests as follows. Test 1 - iSCSI target on the GlusterFS server Test 2 - iSCSI target on a separate server with a gluster client Test 2 performed a read speed of <1GB/second while Test 1 about 300MB/second. Any reason you see why this may be the case?

Re: [Gluster-users] Help: gluster-block

2019-04-10 Thread Karim Roumani
Thank you Prasanna for your quick response, very much appreciated; we will review and get back to you. On Mon, Mar 25, 2019 at 9:00 AM Prasanna Kalever wrote: > [ adding +gluster-users for archive purpose ] > > On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin > wrote: > > > > Hello Mr. Kalever, >

Re: [Gluster-users] Help: gluster-block

2019-04-03 Thread Prasanna Kalever
On Tue, Apr 2, 2019 at 1:34 AM Karim Roumani wrote: > Actually we have a question. > > We did two tests as follows. > > Test 1 - iSCSI target on the glusterFS server > Test 2 - iSCSI target on a separate server with gluster client > > Test 2 performed a read speed of <1GB/second while Test 1

Re: [Gluster-users] Help: gluster-block

2019-03-25 Thread Prasanna Kalever
[ adding +gluster-users for archive purpose ] On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin wrote: > > Hello Mr. Kalever, Hello Jeffrey, > > I am currently working on a project to utilize GlusterFS for VMWare VMs. In > our research, we found that utilizing block devices with GlusterFS would be

Re: [Gluster-users] Help analise statedumps

2019-03-19 Thread Amar Tumballi Suryanarayan
It is really good to hear the good news. The one thing we did in 5.4 (and which is present in 6.0 too) is implementing garbage-collection logic in the fuse module, which keeps a check on memory usage. Looks like the feature is working as expected. Regards, Amar On Wed, Mar 20, 2019 at 7:24 AM

Re: [Gluster-users] Help analise statedumps

2019-03-19 Thread Sankarshan Mukhopadhyay
On Tue, Mar 19, 2019 at 11:09 PM Pedro Costa wrote: > Sorry to revive old thread, but just to let you know that with the latest 5.4 > version this has virtually stopped happening. > > I can’t ascertain for sure yet, but since the update the memory footprint of > Gluster has been massively

Re: [Gluster-users] Help analise statedumps

2019-03-19 Thread Pedro Costa
From: Pedro Costa Sent: 04 February 2019 11:28 To: 'Sanju Rakonde' Cc: 'gluster-users' Subject: RE: [Gluster-users] Help analise statedumps Hi Sanju, If it helps, here’s also a statedump (taken just now) since the reboot: https://pmcdigital.sharepoint.com/:u:/g/EbsT2RZsuc5BsRrf7F-fw-4BocyeogW

Re: [Gluster-users] Help analise statedumps

2019-02-05 Thread Pedro Costa
Hi Sanju, The process was `glusterfs`, yes I took the statedump for the same process (different PID since it was rebooted). Cheers, P. From: Sanju Rakonde Sent: 04 February 2019 06:10 To: Pedro Costa Cc: gluster-users Subject: Re: [Gluster-users] Help analise statedumps Hi, Can you please
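For a fuse client such as this, the statedump is triggered by sending SIGUSR1 to the glusterfs process; the dump is written under /var/run/gluster by default. A minimal sketch, where the pgrep pattern and volume name (gvol1, from the original post below) are illustrative assumptions:

    # ask the fuse client process to dump its state
    kill -USR1 $(pgrep -f 'glusterfs.*gvol1')
    # one glusterdump.<pid> file appears per signalled process
    ls /var/run/gluster/glusterdump.*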

Re: [Gluster-users] Help analise statedumps

2019-02-04 Thread Pedro Costa
Subject: RE: [Gluster-users] Help analise statedumps Hi Sanju, The process was `glusterfs`, yes I took the statedump for the same process (different PID since it was rebooted). Cheers, P. From: Sanju Rakonde Sent: 04 February 2019 06:10 To: Pedro Costa Cc: gluster-users Subject: Re: [Gluster-users

[Gluster-users] Help analise statedumps

2019-02-02 Thread Pedro Costa
Hi, I have a 3x replicated cluster running 4.1.7 on Ubuntu 16.04.5; all 3 replicas are also clients hosting a Node.js/Nginx web server. The current configuration is as such: Volume Name: gvol1 Type: Replicate Volume ID: XX Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3

[Gluster-users] HELP: Commit failed on localhost. Please check the log file for more details

2019-01-14 Thread Mauro Gatti
Hello, I just installed Gluster on my Raspberry Pi. I am trying to do some tests on a USB stick I mounted. Unfortunately I'm stuck on an error that says: root@europa:/var/log/glusterfs# gluster volume create prova transport tcp europa:/mnt/usb1/prova volume create: prova: failed: Commit failed on
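Two things worth checking for this error, sketched below under the assumption that the USB stick is the culprit: bricks require extended-attribute (xattr) support, which vfat-formatted sticks lack, and the real reason behind a generic "Commit failed" is usually spelled out in glusterd's log.

    # verify the brick path supports trusted.* xattrs (vfat does not)
    setfattr -n trusted.test -v works /mnt/usb1/prova && getfattr -n trusted.test /mnt/usb1/prova
    # the underlying error is usually recorded in glusterd's log
    tail -n 50 /var/log/glusterfs/glusterd.log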

[Gluster-users] Help diagnosing poor performance on mirrored pair

2018-07-30 Thread Arthur Pemberton
I have a pair of nodes configured with GlusterFS. I am trying to optimize the read performance of the setup. With the smallfile_cli.py test I'm getting 4MiB/s on the CREATE and 14MiB/s on the READ. I'm using the FUSE mount, mounting to the local GlusterFS server: localhost:/gv_wordpress_std_var

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-20 Thread Daniel Berteaud
On 17/11/2017 at 13:10, Ravishankar N wrote: The parent directory will have afr xattrs indicating good and bad bricks. All gfids not present in good will be deleted from bad if present. All gfids present in good will be created on the bad if not present.

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-17 Thread Ravishankar N
On 11/17/2017 03:41 PM, Daniel Berteaud wrote: On Thursday, November 16, 2017 13:07 CET, Ravishankar N wrote: On 11/16/2017 12:54 PM, Daniel Berteaud wrote: Any way in this situation to check which file will be healed from which brick before reconnecting? Using

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-16 Thread Ravishankar N
On 11/16/2017 12:54 PM, Daniel Berteaud wrote: On 15/11/2017 at 09:45, Ravishankar N wrote: If it is only the brick that is faulty on the bad node, but everything else is fine, like glusterd running, the node being a part of the trusted storage pool etc., you could just kill the brick

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-15 Thread Daniel Berteaud
On 15/11/2017 at 09:45, Ravishankar N wrote: If it is only the brick that is faulty on the bad node, but everything else is fine, like glusterd running, the node being a part of the trusted storage pool etc., you could just kill the brick first and do step-13 in "10.6.2. Replacing a Host

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-15 Thread Ravishankar N
On 11/15/2017 12:54 PM, Daniel Berteaud wrote: On 13/11/2017 at 21:07, Daniel Berteaud wrote: On 13/11/2017 at 10:04, Daniel Berteaud wrote: Could I just remove the content of the brick (including the .glusterfs directory) and reconnect? If it is only the brick that is faulty

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-14 Thread Daniel Berteaud
On 13/11/2017 at 21:07, Daniel Berteaud wrote: On 13/11/2017 at 10:04, Daniel Berteaud wrote: Could I just remove the content of the brick (including the .glusterfs directory) and reconnect? In fact, what would be the difference between reconnecting the brick with a wiped FS, and

Re: [Gluster-users] Help with reconnecting a faulty brick

2017-11-13 Thread Daniel Berteaud
On 13/11/2017 at 10:04, Daniel Berteaud wrote: Could I just remove the content of the brick (including the .glusterfs directory) and reconnect? In fact, what would be the difference between reconnecting the brick with a wiped FS, and using gluster volume remove-brick vmstore replica
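For reference, the remove-brick/add-brick route being weighed above would look roughly like the following; the brick host and path are illustrative, and note that it drops redundancy until the heal finishes:

    # shrink to a single replica, dropping the faulty brick (host/path hypothetical)
    gluster volume remove-brick vmstore replica 1 badnode:/bricks/vmstore force
    # re-add the wiped brick, restoring replica 2
    gluster volume add-brick vmstore replica 2 badnode:/bricks/vmstore
    # trigger a full self-heal to repopulate it
    gluster volume heal vmstore full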

[Gluster-users] Help with reconnecting a faulty brick

2017-11-13 Thread Daniel Berteaud
Hi everyone. I'm running a simple Gluster setup like this:   * Replicate 2x1   * Only 2 nodes, with one brick each   * Nodes are CentOS 7.0, using GlusterFS 3.5.3 (yes, I know it's old, I just can't upgrade right now) No sharding or anything "fancy". This Gluster volume is used to host VM

Re: [Gluster-users] help

2016-11-05 Thread Joe Julian
On 11/05/2016 05:47 AM, Fariborz Mafakheri wrote: Hi all, I have a gluster volume with 4 bricks (srv1, srv2, srv3 and srv4). srv2 is a replicate of srv1 and srv4 is a replicate of srv3. Each of these bricks has 1.7TB of data. I am gonna replace srv2 and srv4 with two new servers (srvP2 and srvP4).

[Gluster-users] help

2016-11-05 Thread Fariborz Mafakheri
Hi all, I have a gluster volume with 4 bricks (srv1, srv2, srv3 and srv4). srv2 is a replicate of srv1 and srv4 is a replicate of srv3. Each of these bricks has 1.7TB of data. I am gonna replace srv2 and srv4 with two new servers (srvP2 and srvP4). srvP2 and srvP4 are in another datacenter and as I said

Re: [Gluster-users] Help needed in debugging a glusted-vol.log file..

2016-07-26 Thread Atin Mukherjee
On Wed, Jul 27, 2016 at 10:06 AM, B.K.Raghuram wrote: > A request for a quick clarification Atin.. The bind-insecure and > allow-insecure seem to be turned on by default from 3.7.3 so if I install > 3.7.13, then can I safely use samba/gfapi-vfs without modifying any >

Re: [Gluster-users] Help needed in debugging a glusted-vol.log file..

2016-07-26 Thread B.K.Raghuram
A request for a quick clarification Atin.. The bind-insecure and allow-insecure seem to be turned on by default from 3.7.3 so if I install 3.7.13, then can I safely use samba/gfapi-vfs without modifying any parameters? On Mon, Jul 25, 2016 at 9:53 AM, Atin Mukherjee wrote:

Re: [Gluster-users] Help needed in debugging a glusted-vol.log file..

2016-07-25 Thread Atin Mukherjee
They are side effects of the same problem. GlusterD is unable to process any of the incoming connections as well. On Mon, Jul 25, 2016 at 1:47 PM, B.K.Raghuram wrote: > Thanks Atin, > > We're in the process of upgrading to 3.7 by working our way through some > upgrade issues

Re: [Gluster-users] Help needed in debugging a glusted-vol.log file..

2016-07-25 Thread B.K.Raghuram
Thanks Atin, We're in the process of upgrading to 3.7 by working our way through some upgrade issues but in the meanwhile we saw these issues in 3.6. There also seemed to be tons of errors like the following starting from line 29691. Do these point to some communication problems on the network?

Re: [Gluster-users] Help needed in debugging a glusted-vol.log file..

2016-07-24 Thread Atin Mukherjee
This doesn't look abnormal given you are running gluster version 3.6.1. In 3.6 the "allow-insecure" option is turned off by default, which means glusterd will only entertain requests received on privileged ports, and this node has exhausted all the privileged ports by that time. If you are willing
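The switch being referred to has two parts, roughly as follows; the volume name is a placeholder, and on 3.6 the glusterd-side option has to be added to its volfile by hand:

    # let bricks accept connections from non-privileged (>1024) client ports
    gluster volume set <volname> server.allow-insecure on
    # let glusterd itself accept them: add this line to /etc/glusterfs/glusterd.vol
    #     option rpc-auth-allow-insecure on
    # then restart glusterd
    service glusterd restart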

Re: [Gluster-users] Help needed in debugging a glusted-vol.log file..

2016-07-24 Thread Atin Mukherjee
Will have a look at the logs tomorrow. On Sunday 24 July 2016, B.K.Raghuram wrote: > Some issues seem to have cropped up at a remote location and I'm trying to > make sense of what the issues are. Could someone help in throwing some > light on the potential issues here? I

[Gluster-users] Help needed to upgrade procedure 3.27 to current version

2016-04-14 Thread Axel Jacobs
Hello I'm new to gluster. A couple of weeks ago I installed a two node gluster cluster with one volume (500GB - replicated only) on Debian 7. I have 5 gluster clients connecting to this volume. I realize that the installed version is quite old: 3.2.7 (compared to the current stable version

[Gluster-users] Help with debugging EC volume

2016-03-30 Thread jayakrishnan mm
Hi Any help? -- Forwarded message -- From: "jayakrishnan mm" Date: Mar 29, 2016 11:13 AM Subject: Help with debugging EC volume To: Cc: Hi I am trying to debug the EC translator using gdb, following the method suggested

Re: [Gluster-users] Help, peer probe seems to get stuck on large cluster.

2015-09-01 Thread Vijay Bellur
On Tuesday 01 September 2015 09:10 AM, Yiping Peng wrote: Even if I'm seeing disconnected nodes (also from already-in-pool nodes), my volume is still intact and available. So I'm guessing that glusterd has little to do with volume/brick service? Am I safe to kill all glusterd on all servers and

Re: [Gluster-users] Help, peer probe seems to get stuck on large cluster.

2015-09-01 Thread Yiping Peng
> Is this setup on bare metal or does it involve virtual machines? I'm running GlusterFS on physical machines. No virtual machines involved. Have you checked if port 24007 is reachable on all peers? Yes, I did "nc -z server.xxx 24007" on all servers. All succeeded. Additionally, if you are
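On a pool this large the reachability check is easier to run as a loop; a small sketch, assuming a servers.txt file listing one peer hostname per line:

    # verify glusterd's port is reachable on every peer, 2-second timeout each
    while read -r h; do
        nc -z -w 2 "$h" 24007 || echo "$h: 24007 unreachable"
    done < servers.txt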

[Gluster-users] Help, peer probe seems to get stuck on large cluster.

2015-08-31 Thread Yiping Peng
Hi guys, I've been running GlusterFS for a couple of days and it's been nice and steady, except a minor problem: the peer probing on my relatively large cluster seems to get stuck for a long time. Last time atinm told me in IRC (I was barius.2333 in IRC) that a cluster as large as 50+ nodes might

Re: [Gluster-users] Help, peer probe seems to get stuck on large cluster.

2015-08-31 Thread Atin Mukherjee
On 08/31/2015 01:10 PM, Yiping Peng wrote: > Hi guys, > > > I've been running GlusterFS for a couple of days and it's been nice and > steady, except a minor problem: the peer probing on my relatively large > cluster seems to get stuck for a long time. > > > Last time atinm told me in IRC (I was

Re: [Gluster-users] Help, peer probe seems to get stuck on large cluster.

2015-08-31 Thread Yiping Peng
The "Disconnected" state of nodes randomly changes, so I randomly picked a node and tailed last several lines of /var/log/glusterfs/etc-glusterfs-glusterd.vol.log (is it the right log file?). I can still access the cluster from servers already in pool, either reading or writing is fine. The log

Re: [Gluster-users] Help, peer probe seems to get stuck on large cluster.

2015-08-31 Thread Yiping Peng
Even if I'm seeing disconnected nodes (also from already-in-pool nodes), my volume is still intact and available. So I'm guessing that glusterd has little to do with volume/brick service? Am I safe to kill all glusterd on all servers and start this whole peer probing process all over again? If I do

[Gluster-users] Help with degraded volume recovery

2015-07-27 Thread Jon
Hello all, I’m having a problem with one of my Gluster volumes and would appreciate some help. My setup is an 8-node cluster set up as 4x2 replication, with 20TB per node for 88TB total. OS is CentOS 7.1, there is one 20TB brick per node on its own XFS partition, separate from the OS. A few

[Gluster-users] Help: Folder hangs on concurrent accesses (Gluster 3.7 / Ubuntu 14.04)

2015-06-15 Thread Dr. Sven Abels
Hi, we got a setup of 3 machines running gluster and acting as both client and server. We upgraded from gluster 3.4 to 3.7 yesterday evening (using the PPA from Launchpad for Ubuntu 14.04). We got a web application, which creates a .locked file in a shared folder to synchronize. This file is

[Gluster-users] Help interpreting profile results

2014-12-19 Thread Paul E Stallworth
Hello everyone, I have been tasked with helping to find out why we are having issues with our website page load times. Our webstack consists of 3 apache servers, 3 glusterfs servers, and 3 mysql servers, backed by Nimble storage. On the glusterfs machines the gluster disk is mounted as ext4

Re: [Gluster-users] HELP on Geo-Replication Faulty rsync failed with ENOENT

2014-03-29 Thread Venky Shankar
rsync is in /usr/bin of Slave VM. Why rsync failed with ENOENT (No file or directory)? what about the mater node? I basically follow the Gluster_FS_3.3.0_admin guide. What did I miss? I have tried on Debian and CentOS, all failed. BTW, under CentOS 6.4, I cannot even stop the

Re: [Gluster-users] HELP on Geo-Replication Faulty rsync failed with ENOENT

2014-03-29 Thread Venky Shankar
I meant master node On Sat, Mar 29, 2014 at 11:53 AM, Venky Shankar yknev.shan...@gmail.comwrote: rsync is in /usr/bin of Slave VM. Why rsync failed with ENOENT (No file or directory)? what about the mater node? I basically follow the Gluster_FS_3.3.0_admin guide. What did I miss? I

[Gluster-users] HELP on Geo-Replication Faulty rsync failed with ENOENT

2014-03-28 Thread Cary Tsai
Setup 2 VMs and they are all CentOS 6.4. Both are installed with GlusterFS 3.4.2 (server, client, and geo-replication). No firewalls, all using the 'root' account. All are in the same subnet. After starting the geo-replication, I keep getting: [2014-03-26 20:38:26.401585] I [monitor(monitor):80:monitor]

Re: [Gluster-users] help, all latency comes from FINODELK

2014-02-06 Thread Mingfan Lu
- Original Message - From: Mingfan Lu mingfan...@gmail.com To: Pranith Kumar Karampuri pkara...@redhat.com Cc: haiwei.xie-soulinfo haiwei@soulinfo.com, Gluster-users@gluster.org List gluster-users@gluster.org Sent: Thursday, January 30, 2014 12:43:44 PM Subject: Re: [Gluster-users] help, all

Re: [Gluster-users] help, all latency comes from FINODELK

2014-02-05 Thread Mingfan Lu
-soulinfo haiwei@soulinfo.com, Gluster-users@gluster.org List gluster-users@gluster.org Sent: Thursday, January 30, 2014 12:43:44 PM Subject: Re: [Gluster-users] help, all latency comes from FINODELK When I tried to execute the statedump, the brick server of the BAD node crashed

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-29 Thread Pranith Kumar Karampuri
@gluster.org List gluster-users@gluster.org Sent: Tuesday, January 28, 2014 7:44:14 AM Subject: Re: [Gluster-users] help, all latency comes from FINODELK About 200+ clients. How to print lock info in bricks, and print lock request info in afr/dht? thanks. On Tue, Jan 28, 2014 at 9:51 AM

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-29 Thread Mingfan Lu
the notifications immediately. Pranith - Original Message - From: Mingfan Lu mingfan...@gmail.com To: haiwei.xie-soulinfo haiwei@soulinfo.com Cc: Gluster-users@gluster.org List gluster-users@gluster.org Sent: Tuesday, January 28, 2014 7:44:14 AM Subject: Re: [Gluster-users] help, all

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-29 Thread Pranith Kumar Karampuri
, 2014 12:43:44 PM Subject: Re: [Gluster-users] help, all latency comes from FINODELK When I tried to execute the statedump, the brick server of the BAD node crashed. On Wed, Jan 29, 2014 at 8:13 PM, Pranith Kumar Karampuri pkara...@redhat.com wrote: Could you take a statedump of the bricks

[Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
I ran gluster volume profile my_volume info, I got something like: 0.00 0.00 us 0.00 us 0.00 us 30 FORGET 0.00 0.00 us 0.00 us 0.00 us 185 RELEASE 0.00 0.00 us 0.00 us 0.00 us 11

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
all my clients hang when creating dirs On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu mingfan...@gmail.com wrote: I ran gluster volume profile my_volume info, I got something like: 0.00 0.00 us 0.00 us 0.00 us 30 FORGET 0.00 0.00 us

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread haiwei.xie-soulinfo
hi, looks like a FINODELK deadlock. How many clients, and nfs or fuse? Maybe the best way is to print lock info in the bricks, and print lock request info in afr/dht. all my clients hang when creating dirs On Mon, Jan 27, 2014 at 11:33 PM, Mingfan Lu mingfan...@gmail.com wrote: I ran
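The lock information suggested here is what a brick statedump already records: its inode section lists granted and blocked inodelk entries per file. A sketch, reusing the volume name from the thread:

    # dump brick state (including lock tables) to /var/run/gluster on each brick node
    gluster volume statedump my_volume
    # then search the dumps for blocked FINODELK/inodelk holders
    grep -B2 -A5 'inodelk' /var/run/gluster/*.dump.*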

Re: [Gluster-users] help, all latency comes from FINODELK

2014-01-27 Thread Mingfan Lu
About 200+ clients. How to print lock info in bricks, and print lock request info in afr/dht? thanks. On Tue, Jan 28, 2014 at 9:51 AM, haiwei.xie-soulinfo haiwei@soulinfo.com wrote: hi, looks like a FINODELK deadlock. How many clients, and nfs or fuse? Maybe the best way is to print

Re: [Gluster-users] help with replace-brick migrate

2013-12-18 Thread Mariusz Sobisiak
Hello, Hello, I have 2 servers in replication mode in production. The volume has 2.0Tb in use. [...] So, the status says Migration complete, but there is just 1.3Tb on the new server, and this 1.3Tb is just in the .glusterfs folder. What version of gluster? I found that version 3.4 has a big

Re: [Gluster-users] help with replace-brick migrate

2013-12-18 Thread Raphael Rabelo
Hello Mariusz, My glusterfs version is 3.3.1. I didn't know there could be a lot of trash (orphan) files in .glusterfs, so here is what I did: # du -hs * 3.5G documents 341G home 58G archives 808G secure_folder 93G secure_folder2 1.3T .glusterfs/ So, I have 1.3Tb in gluster!! So, I think that

Re: [Gluster-users] help with replace-brick migrate

2013-12-18 Thread Mariusz Sobisiak
I didn't know there could be a lot of trash (orphan) files in .glusterfs, so here is what I did: I think you can easily check by this command (on the old gluster server): find .glusterfs/ -type f -links 1 If something returns, that means the file has only one link and doesn't have a real file on the brick, so it
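The check works because every regular file on a brick should have a second hardlink under .glusterfs; a link count of 1 therefore marks an orphan. A usage sketch, with the brick path as an illustrative assumption:

    cd /bricks/myvol            # run on the brick itself, not on the mount
    # list orphaned files (no counterpart outside .glusterfs)
    find .glusterfs/ -type f -links 1 | head
    # total space held by the orphans (GNU du)
    find .glusterfs/ -type f -links 1 -print0 | du -ch --files0-from=- | tail -n 1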

Re: [Gluster-users] help with replace-brick migrate

2013-12-18 Thread Raphael Rabelo
I think you can easily check by this command (on the old gluster server): find .glusterfs/ -type f -links 1 If something returns, that means the file has only one link and doesn't have a real file on the brick, so it is unintended (and it's an orphan file). The result of # find .glusterfs/ -type f -links 1 is

Re: [Gluster-users] help with replace-brick migrate

2013-12-18 Thread Mariusz Sobisiak
The result of # find .glusterfs/ -type f -links 1 is empty ... You ran it on the old gluster (where the 2TB is)? It may take a long time. So in fact it is very strange. You can use that other find command to compare the amount of data. I thought that adding these 2 new bricks to the same volume with replica

[Gluster-users] Help needed - Lots of RDMA changes in git repo, please test!

2013-05-20 Thread Justin Clift
Hi all, Some really large patch sets for improving RDMA (Infiniband) have been merged into GlusterFS's git repo (master branch) over the last few days. If you're a user of Infiniband and have some time to build the packages from source (it's easy) and test stuff out, that would be really,

Re: [Gluster-users] Help needed - Lots of RDMA changes in git repo, please test!

2013-05-20 Thread John Walker
I would check to see if they've been merged into the 3.4 branch, because that's a major point of emphasis for this release. -JM Justin Clift jcl...@redhat.com wrote: Hi all, Some really large patch sets for improving RDMA (Infiniband) have been merged into GlusterFS's git repo (master

Re: [Gluster-users] Help needed - Lots of RDMA changes in git repo, please test!

2013-05-20 Thread Vijay Bellur
On 05/20/2013 05:39 PM, John Walker wrote: I would check to see if they've been merged into the 3.4 branch, because that's a major point of emphasis for this release. The patchset will be backported to release-3.4. It is not yet part of 3.4 branch. -Vijay

Re: [Gluster-users] help, avoid glusterfs client from occuping rsync port 873

2013-03-12 Thread 符永涛
Finally I found the answer; it's not very convenient but doable. 1. sudo gluster volume set volume server.allow-insecure on 2. get the volume file from the glusterfs backend server; the file is volume-fuse.vol 3. edit the volume file: add the option client-bind-insecure and set its value to on for every
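Spelled out, those three steps amount to roughly the following; volume name, volfile path and mount point are illustrative:

    # 1. on the server: let bricks accept non-privileged client ports
    gluster volume set myvol server.allow-insecure on
    # 2. copy the client volfile (e.g. /var/lib/glusterd/vols/myvol/myvol-fuse.vol) and add
    #    "option client-bind-insecure on" to every protocol/client section
    # 3. mount using the edited volfile instead of fetching one from the server
    glusterfs -f /root/myvol-fuse.vol /mnt/myvol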

Re: [Gluster-users] help, avoid glusterfs client from occuping rsync port 873

2013-03-12 Thread 符永涛
The above steps are too complex and it's not easy to maintain volume files, especially if the volume backend brick changes. I intend to make it a patch and only make it configurable through a mount option on the client side. Anybody agree with me? Thank you. 2013/3/12, 符永涛 yongta...@gmail.com:

Re: [Gluster-users] help, avoid glusterfs client from occuping rsync port 873

2013-03-12 Thread John Mark Walker
Was looking through your steps - it would be interesting to see if there are any unforeseen ramifications from doing this. Hoping to see some others chime in who've tried it. -JM - Original Message - Gives above steps are too complex and it's not easy to maintain volume files

Re: [Gluster-users] help, avoid glusterfs client from occuping rsync port 873

2013-03-12 Thread 符永涛
I have finished a patch on my local repo based on the glusterfs 3.3.0.5rhs source rpm. The patch exports the client-bind-insecure option to fuse mount. To use it the command would be like the following: mount -t glusterfs -o client-bind-insecure server:volume mountpoint This involves very few changes on

[Gluster-users] help, avoid glusterfs client from occuping rsync port 873

2013-03-11 Thread 符永涛
Dear gluster experts, I recently ran into a problem where the glusterfs client process occupies rsync port 873. Since there are several volumes on our client machines, port conflicts may occur. Is there any easy way to solve this problem? Thank you very much. -- 符永涛

Re: [Gluster-users] help, avoid glusterfs client from occuping rsync port 873

2013-03-11 Thread 符永涛
Dear gluster experts, any suggestions? Thank you very much. 2013/3/11, 符永涛 yongta...@gmail.com: Dear gluster experts, I recently ran into a problem where the glusterfs client process occupies rsync port 873. Since there are several volumes on our client machines, port conflicts may occur.

[Gluster-users] help posix_fallocate too slow on glusterfs client

2013-02-22 Thread 符永涛
Dear gluster experts, Recently I have encountered a problem with posix_fallocate performance on the glusterfs client. I use posix_fallocate to allocate a file with a specified size on the glusterfs client. For example, if I create a file with a size of 1907658896, it will take about 20 seconds on glusterfs
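The behaviour can be reproduced from the shell with util-linux's fallocate(1), which depending on version and flags exercises the same preallocation path; the mount paths here are illustrative:

    # time preallocating the same ~1.9 GB on the gluster mount vs. a local filesystem
    time fallocate -l 1907658896 /mnt/glustervol/testfile
    time fallocate -l 1907658896 /tmp/testfile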

Re: [Gluster-users] help me, glusterfs 3.3 doen't support fopen-keep-cache?

2013-01-09 Thread 符永涛
Hi Brian, I checked the review of c1fe8b7fd7 and found there's only a one-line code change. I changed it accordingly in glusterfs 3.3 and found the page cache works, which boosts performance for my use case. 1. Besides making it a configurable mount option, do I need to change any other code? 2. Is there any side effect

Re: [Gluster-users] help me, glusterfs 3.3 doen't support fopen-keep-cache?

2013-01-09 Thread Brian Foster
On 01/09/2013 08:06 AM, 符永涛 wrote: Hi Brian, I checked the review of c1fe8b7fd7 and found there's only a one-line code change. I changed it accordingly in glusterfs 3.3 and found the page cache works, which boosts performance for my use case. Well, c1fe8b7fd7 is a bit more than a one line change, but it is a

Re: [Gluster-users] help me, glusterfs 3.3 doen't support fopen-keep-cache?

2013-01-09 Thread 符永涛
Hi Brian, Thank you, you have helped me a lot! 2013/1/9, Brian Foster bfos...@redhat.com: On 01/09/2013 08:06 AM, 符永涛 wrote: Hi Brian, I checked the review of c1fe8b7fd7 and found there's only a one-line code change. I changed it accordingly in glusterfs 3.3 and found the page cache works, which boosts

[Gluster-users] help me, glusterfs 3.3 doen't support fopen-keep-cache?

2013-01-08 Thread 符永涛
Dear gluster experts, I searched through the glusterfs 3.3 source tree and can't find any code related to the fuse open option FOPEN_KEEP_CACHE. Does this mean that glusterfs 3.3 doesn't support the fuse keep-cache feature? However, I did find keep-cache code in mainline. My question is: 1. does glusterfs 3.3

Re: [Gluster-users] help me, glusterfs 3.3 doen't support fopen-keep-cache?

2013-01-08 Thread Brian Foster
On 01/08/2013 09:49 AM, 符永涛 wrote: Dear gluster experts, I searched through the glusterfs 3.3 source tree and can't find any code related to the fuse open option FOPEN_KEEP_CACHE. Does this mean that glusterfs 3.3 doesn't support the fuse keep-cache feature? However, I did find keep-cache code in mainline. My

Re: [Gluster-users] help me, glusterfs 3.3 doen't support fopen-keep-cache?

2013-01-08 Thread yongtaofu
Hi Brian, Thank you very much for your info. I have already tried io-cache; it helps a little but it can't compete with the page cache. BTW, what's the effort if I want to backport fopen-keep-cache to 3.3? Sent from my iPhone On 2013-1-8, at 23:21, Brian Foster bfos...@redhat.com wrote: On 01/08/2013 09:49 AM, 符永涛

Re: [Gluster-users] help me, glusterfs 3.3 doen't support fopen-keep-cache?

2013-01-08 Thread Brian Foster
On 01/08/2013 06:42 PM, yongtaofu wrote: Hi Brian, Thank you very much for your info. I have already tried io-cache; it helps a little but it can't compete with the page cache. BTW, what's the effort if I want to backport fopen-keep-cache to 3.3? No problem. I don't think it should be that

Re: [Gluster-users] help, glusterfs replica can't handle brick filesystem crash and shutdown

2013-01-04 Thread Brian Foster
On 01/04/2013 01:00 AM, 符永涛 wrote: Dear gluster experts, Glusterfs replica is supposed to handle hardware failure of one brick (for example power outage etc). However we recently encountered an issue related to an xfs file system crash and shutdown. When it happens the whole volume doesn't work.

Re: [Gluster-users] help, glusterfs replica can't handle brick filesystem crash and shutdown

2013-01-04 Thread 符永涛
Yes, the filesystem shutdown confuses glusterfs, since glusterfsd is still live for that brick and it tries to retrieve file extended attributes and fails. When accessing some of the files from the client side an Input/output error occurs; the symptom is the same as when the underlying filesystem doesn't support

[Gluster-users] Help! Can't replace brick.

2012-10-24 Thread Tao Lin
Hello, glusterfs experts: I've been using glusterfs-3.2.6 for months, and it works fine. Now I'm facing a problem of a full disk (brick). For some reasons, I have to expand space by replacing current bricks with new bricks instead of adding new bricks. It seemed okay when I used this command: gluster

Re: [Gluster-users] Help! Can't replace brick.

2012-10-24 Thread Tao Lin
I had already detached 10.67.15.27 when I found I could not get the replace to move on. 2012/10/25 Tao Lin linba...@gmail.com Hello, glusterfs experts: I've been using glusterfs-3.2.6 for months, and it works fine. Now I'm facing a problem of a full disk (brick). For some reasons, I have to expand space

[Gluster-users] help?

2012-10-20 Thread Doug Schouten
Hello, I've been fiddling with GlusterFS 3.3 to try and improve performance. It seems something funny has happened in my hacking. Now I am seeing many file access errors like: rsync: read errors mapping /global/...: No data available (61) An ls -lR shows the files to be there ... Having

Re: [Gluster-users] Help with some socket related logwarnings

2012-03-30 Thread Carl Boberg
Yes. I have checked all OS related settings, and yes, by default Red Hat OSes usually mess up the /etc/hosts file like that, so it's one of the first things I check when I set up a new system. --- Carl Boberg Operations Memnon Networks AB Tegnérgatan 34, SE-113 59 Stockholm Mobile: +46(0)70 467 27

Re: [Gluster-users] Help with some socket related logwarnings

2012-02-25 Thread Dan Bretherton
...@memnonnetworks.com Subject: [Gluster-users] Help with some socket related logwarnings To: gluster-users@gluster.org Hello I have just started to prepare a smallish

[Gluster-users] help for different uid gid

2012-02-23 Thread Az
Hi all, I have several servers; they all have a user “a”. I try to mount the glusterfs client on them, but user “a” on two of them has a different uid/gid from the others, so the mount causes different permissions. I saw there was a solution: features/filter: * root-squashing

[Gluster-users] Help with some socket related logwarnings

2012-02-23 Thread Carl Boberg
Hello I have just started to prepare a smallish production setup (nothing critical running on it yet). I have 2 gluster servers with 8 volumes and I'm getting a lot of these warnings in the cli.log [2012-02-23 22:32:15.808271] W [rpc-transport.c:606:rpc_transport_load] 0-rpc-transport: missing

Re: [Gluster-users] Help about mounting via Fuse client

2012-02-10 Thread vietbh
Dear all, I am using glusterfs 3.2.5 to share storage, running on Red Hat Enterprise Linux Server release 6.0 (Santiago). When a client mounts to the Gluster server, the gluster log file on the client side is shown below. Then after around 7 days, the mount gets hung up. I tried around 3.2.4

[Gluster-users] Help Gluster client bug

2012-01-23 Thread Bùi Hùng Việt
Dear experts, I am using Redhat (Linux ivr_media_2 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux) and glusterfs-3.2.4 to create a file sharing system across 4 media servers. On the client side, I often see these messages: [2012-01-23 17:16:38.742896] W

[Gluster-users] help! - cant bring 1 of 4 servers up..

2011-08-08 Thread paul simpson
hi gluster gurus, i have 4 servers g1, g2, g3 & g4 with 24T each running gluster 3.1.5 on opensuse 11.3. they have been running well for the last few months in a distributed+replicated setup. i just found that the nfs log had filled up my root disk of g4 (my bad). so, i removed the log file - and

Re: [Gluster-users] help! - cant bring 1 of 4 servers up..

2011-08-08 Thread Pranith Kumar K
zip /etc/glusterd and send across Pranith On 08/08/2011 10:15 PM, paul simpson wrote: hi gluster gurus, i have 4 servers g1,g2,g3 g4 with 24T each running gluster 3.1.5 on opensuse 11.3. they have been running well for the last few months in a distributed+replicated setup. i just found

Re: [Gluster-users] help! - cant bring 1 of 4 servers up..

2011-08-08 Thread John Mark Walker
-users] help! - cant bring 1 of 4 servers up.. After debugging the problem with paul on IRC, we found that because his disk had no free space, the subsequent writes on one of the peer files (used for recovering run-time information) failed so the file became empty. Because of this glusterd could

Re: [Gluster-users] help! - cant bring 1 of 4 servers up..

2011-08-08 Thread paul simpson
pranith - a huge and heartfelt thanks for your super prompt attention. a very scary event turned into a non-event. :) regards, -p On 8 August 2011 18:35, Pranith Kumar K prani...@gluster.com wrote: ** After debugging the problem with paul on IRC, we found that because his disk had no

[Gluster-users] Help configuring Gluster 3.1

2010-11-16 Thread Jeremy Mann
I am getting stuck at creating the volume and do not know what to do next. I followed the instructions to set up a 4 server node trusted pool. Each node has its ext3 formatted drives on /data and glusterd is running on each node. All four are seen with peer status: [r...@structure glusterfs]#

Re: [Gluster-users] Help configuring Gluster 3.1

2010-11-16 Thread Uwe Kastens
Hi Jeremy, Try the IP addresses of the servers when creating the volume. This seems to be a known DNS bug. BR Uwe On 16.11.2010 at 19:39, Jeremy Mann wrote: I am getting stuck at creating the volume and do not know what to do next. I followed the instructions to set up a 4 server node trusted
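Concretely, the workaround is to substitute each hostname with its address in the create command; a sketch with hypothetical addresses, using the /data brick paths from the original post:

    # if name resolution trips the bug, create the volume with peer IPs instead of hostnames
    gluster volume create testvol 192.168.1.11:/data 192.168.1.12:/data 192.168.1.13:/data 192.168.1.14:/data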

Re: [Gluster-users] Help configuring Gluster 3.1

2010-11-16 Thread Jeremy Mann
Uwe Kastens wrote: Hi Jeremy, Try the IP addresses of the servers when creating the volume. This seems to be a known DNS bug. Thanks! That worked like a charm. -- Jeremy Mann jer...@biochem.uthscsa.edu University of Texas Health Science Center Bioinformatics Core Facility

Re: [Gluster-users] Help configuring Gluster 3.1

2010-11-16 Thread Craig Carl
On 11/16/2010 10:58 AM, Jeremy Mann wrote: Uwe Kastens wrote: Hi Jeremy, Try the IP addresses of the servers when creating the volume. This seems to be a known DNS bug. Thanks! That worked like a charm. If you are still testing Gluster, this bug has been fixed in a QA build; you can get it here

Re: [Gluster-users] Help Installing Storage Platform on Intel Atom hardware

2010-11-15 Thread Bala.JA
Hi Tariq, Can you remove the 'rhgb quiet' options from the boot options? This will give boot messages on the console. You can press the Tab key during the boot menu to get access to the boot parameters. Thanks, Regards, Bala Tariq Islam wrote: Hey guys, I'm having issues installing gluster storage platform.

[Gluster-users] help on AFR translator

2009-07-14 Thread maurizio oggiano
Hi, I'm using AFR to keep a file system aligned on two servers. This file system contains oracle data. Oracle works properly on the file system managed by AFR on either server A or server B. But if I switch off server A while oracle is running and I launch oracle on the second server, all works

Re: [Gluster-users] HELP : Files lost after DHT expansion

2009-07-05 Thread eagleeyes
* # Allow access to brick volume end-volume 2009-07-06 eagleeyes From: Sachidananda Sent: 2009-07-04 11:39:03 To: eagleeyes Cc: gluster-users Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion Hi, eagleeyes wrote: When I updated to gluster 2.0.3, after DHT expansion

Re: [Gluster-users] HELP : Files lost after DHT expansion

2009-07-05 Thread eagleeyes
auth.addr.brick8.allow * # Allow access to brick volume end-volume 2009-07-06 eagleeyes From: Sachidananda Sent: 2009-07-04 11:39:03 To: eagleeyes Cc: gluster-users Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion Hi, eagleeyes wrote: When I updated to gluster 2.0.3

Re: [Gluster-users] HELP : Files lost after DHT expansion

2009-07-03 Thread eagleeyes
When I updated to gluster 2.0.3, after DHT expansion, double directories appeared in the gluster directory. Why? client configure: volume dht type cluster/dht option lookup-unhashed yes option min-free-disk 10% subvolumes client1 client2 client3 client4 client5 client6 client7 client8

Re: [Gluster-users] HELP : Files lost after DHT expansion

2009-07-03 Thread Sachidananda
Hi, eagleeyes wrote: When I updated to gluster 2.0.3, after DHT expansion, double directories appeared in the gluster directory. Why? client configure: volume dht type cluster/dht option lookup-unhashed yes option min-free-disk 10% subvolumes client1 client2 client3 client4
