[Gluster-users] CANNOT Install gluster on aarch64: ./configure: syntax error near unexpected token `UUID,' PKG_CHECK_MODULES(UUID,

2017-04-22 Thread Zhitao Li
Hello, everyone, I am installing glusterfs release 3.8.11 on my aarch64 computer. It fails when executing configure. [screenshot of the error attached] The configure file is this: [screenshot attached] I think architectures result in this error because on my
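An unexpanded `PKG_CHECK_MODULES` literal in a configure error usually means the pkg-config autoconf macros (pkg.m4) were not available when configure was generated. A hedged sketch of the usual remedy; package names assume a yum-based distro and may differ elsewhere:

```shell
# Sketch of the common fix for "syntax error near unexpected token" on
# PKG_CHECK_MODULES: install the autotools + pkg-config toolchain so pkg.m4
# is present, then regenerate configure from the source tree.
sudo yum install -y autoconf automake libtool pkgconfig libuuid-devel

./autogen.sh   # regenerates configure with PKG_CHECK_MODULES expanded
./configure
```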

Re: [Gluster-users] Issue installing Gluster on CentOS 7.2

2017-04-22 Thread Eric K. Miller
> We build the GlusterFS packages against the current version of CentOS 7. > The dependencies that are installed during the package building may not be > correct for older CentOS versions. There is no guarantee that the compiled > binaries work correctly on previous versions. The backwards

Re: [Gluster-users] Replica 2 Quorum and arbiter

2017-04-22 Thread Mahdi Adnan
Thank you very much. -- Respectfully Mahdi A. Mahdi From: Karthik Subrahmanya Sent: Wednesday, April 19, 2017 4:30:30 PM To: Mahdi Adnan Cc: gluster-users@gluster.org Subject: Re: [Gluster-users] Replica 2 Quorum and arbiter Hi, Comments

Re: [Gluster-users] Quorum replica 2 and arbiter

2017-04-22 Thread Pranith Kumar Karampuri
Use the latest 3.8.x for doing this process, though. I remember there was a performance issue with arbiter+sharding which is fixed in the latest 3.8.x release. On Sat, Apr 22, 2017 at 5:16 PM, Ravishankar N wrote: > On 04/22/2017 12:54 PM, Pranith Kumar Karampuri wrote: >

Re: [Gluster-users] Add single server

2017-04-22 Thread Serkan Çoban
In EC, if you have an m+n configuration, you have to grow by m+n bricks. If you have 6+2 you need to add another 8 bricks. On Sat, Apr 22, 2017 at 3:02 PM, Gandalf Corvotempesta wrote: > I'm still trying to figure out if adding a single server to an > existing
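The m+n growth rule above can be sketched numerically; the function name and interface here are illustrative, not a gluster API:

```python
def next_valid_brick_counts(m, n, current_bricks, steps=3):
    """For a disperse (EC) m+n volume, capacity grows only in whole
    subvolumes of m+n bricks; return the next few valid brick totals."""
    group = m + n
    assert current_bricks % group == 0, "brick count must be a multiple of m+n"
    return [current_bricks + group * i for i in range(1, steps + 1)]

# A 6+2 volume with one subvolume (8 bricks) must grow 8 bricks at a time.
print(next_valid_brick_counts(6, 2, 8))  # → [16, 24, 32]
```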

[Gluster-users] Add single server

2017-04-22 Thread Gandalf Corvotempesta
I'm still trying to figure out if adding a single server to an existing gluster cluster is possible or not, based on EC or standard replica. I don't think so, because with replica 3, when each server is already full (no more slots for disks), I need to add 3 server at once. Is this the same even

Re: [Gluster-users] Quorum replica 2 and arbiter

2017-04-22 Thread Ravishankar N
On 04/22/2017 12:54 PM, Pranith Kumar Karampuri wrote: Also, can we add an arbiter node to the current replica 2 volume without losing data ? if yes, does the re-balance bug "Bug 1440635" affect this process ? I remember we had one user in Redhat who wanted to do the
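Converting a replica 2 volume to arbiter is done by adding one brick per replica set together with the arbiter count; the volume name, host, and path below are hypothetical:

```shell
# Hypothetical names; adds an arbiter brick to a 1x2 replica volume,
# converting it to replica 3 arbiter 1. The arbiter stores only metadata,
# so existing data is not moved.
gluster volume add-brick myvol replica 3 arbiter 1 arbiter-host:/bricks/myvol-arb

# Let self-heal populate the new arbiter, then verify nothing is pending
gluster volume heal myvol info
```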

Re: [Gluster-users] web folder on glusterfs

2017-04-22 Thread lemonnierk
Hi, I can't speak to performance on newer versions, but as of gluster 3.7 / 3.8, performance for small files (like a website) is pretty bad. It does work well, though, as long as you configure OPcache to keep everything in memory (bump the cache size and disable stat). As for storing a disk
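"Bump the cache size and disable stat" from the message above corresponds roughly to these OPcache directives; the values are illustrative and should be tuned to the site:

```ini
; Illustrative php.ini fragment for a site served from glusterfs
opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=20000
; Stop stat()ing scripts on every request (the expensive part on a network
; filesystem); requires an FPM reload or cache reset after each deploy
opcache.validate_timestamps=0
```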

Re: [Gluster-users] Bugfix release GlusterFS 3.8.11 has landed

2017-04-22 Thread mabi
Thanks for the clarifications regarding healing during the online upgrade procedure. To be on the safe side I will follow the offline upgrade procedure. I am indeed using replication with two nodes. Original Message Subject: Re: [Gluster-users] Bugfix release GlusterFS 3.8.11

Re: [Gluster-users] Quorum replica 2 and arbiter

2017-04-22 Thread Pranith Kumar Karampuri
On Sat, Apr 22, 2017 at 12:48 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > We are evolving a document to answer this question, the wip document is at > https://github.com/karthik-us/glusterdocs/blob/ > 1c97001d482923c2e6a9c566b3faf89d0c32b269/Administrator% >

Re: [Gluster-users] Quorum replica 2 and arbiter

2017-04-22 Thread Pranith Kumar Karampuri
We are evolving a document to answer this question, the wip document is at https://github.com/karthik-us/glusterdocs/blob/1c97001d482923c2e6a9c566b3faf89d0c32b269/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it.md Let us know if you still have doubts after reading this.

Re: [Gluster-users] Bugfix release GlusterFS 3.8.11 has landed

2017-04-22 Thread Pranith Kumar Karampuri
If your volume has replication/erasure coding then it is mandatory. On Fri, Apr 21, 2017 at 1:05 AM, mabi wrote: > Thanks for pointing me to the documentation. That's perfect, I can now > plan my upgrade to 3.8.11. By the way I was wondering why is a self-heal > part of the
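A hedged sketch of how the mandatory self-heal check fits into a rolling upgrade of a replicated volume; "myvol" is a hypothetical volume name:

```shell
# Before upgrading the next node, wait until the already-upgraded node has
# fully healed; "myvol" is a placeholder for your volume name.
gluster volume heal myvol info

# Proceed only when every brick reports "Number of entries: 0"
```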

Re: [Gluster-users] High load on glusterfsd process

2017-04-22 Thread Pranith Kumar Karampuri
+Kotresh who seems to have worked on the bug you mentioned. On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL wrote: > > If the patch provided in that case will resolve my bug as well then please > provide the patch so that I will backport it on 3.7.6 > > On Fri, Apr

Re: [Gluster-users] GlusterFS Shard Feature: Max number of files in .shard-Folder

2017-04-22 Thread Pranith Kumar Karampuri
+Krutika for any other inputs you may need. On Sat, Apr 22, 2017 at 12:21 PM, Pranith Kumar Karampuri < pkara...@redhat.com> wrote: > Sorry for the delay. The only internal process that we know would take > more time is self-heal and we implemented a feature called granular entry > self-heal

Re: [Gluster-users] web folder on glusterfs

2017-04-22 Thread Pranith Kumar Karampuri
On Tue, Apr 11, 2017 at 8:30 PM, Umarzuki Mochlis wrote: > Hi, > > I'm planning to install glusterfs on 3 nodes of Joomla 3 with nginx & > php-fpm on Ubuntu 16.04. > > /var/www will be used as storage volume on every node. > > Each node has a secondary network interface only

Re: [Gluster-users] GlusterFS Shard Feature: Max number of files in .shard-Folder

2017-04-22 Thread Pranith Kumar Karampuri
Sorry for the delay. The only internal process that we know would take more time is self-heal and we implemented a feature called granular entry self-heal which should be enabled with sharded volumes to get the benefits. So when a brick goes down and say only 1 in those million entries is
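The granular entry self-heal feature mentioned above is controlled by a volume option; a hedged sketch, with "myvol" as a hypothetical volume name (on some 3.8.x releases it is enabled via the heal subcommand instead):

```shell
# Enable granular entry self-heal on a sharded replica volume so heals
# record individual entry names instead of rescanning the whole parent
# directory after a brick outage; "myvol" is a placeholder.
gluster volume set myvol cluster.granular-entry-heal on
```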