[Gluster-users] pnfs(glusterfs-3.7.1 + ganesha-2.2) problem for clients layout commit

2015-06-24 Thread 莊尚豪
Hi all, I am testing performance of pNFS (gluster-3.7.1 + ganesha-2.2) on Fedora 22. There are 4 glusterfs nodes with ganesha. I am following https://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/ The clients (Fedora 21) can mount fine and commit some small

[Gluster-users] Looking for Fedora Package Maintainers.

2015-06-24 Thread Humble Devassy Chirammal
Hi All, As we maintain 3 releases of GlusterFS (currently 3.5, 3.6 and 3.7) and average one release per week, we need more helping hands on this task. The responsibilities include building Fedora and EPEL RPMs using the Koji build system and deploying the RPMs to

Re: [Gluster-users] Volume ID mismatch for brick after extending LVM

2015-06-24 Thread Urgen Sherpa
Hello, for testing and scaling purposes I gave it a try with 2 Ubuntu 14.04 machines on VMware Workstation. I installed GlusterFS 3.5 on both of them. I started out by creating identical LVMs on both, formatted them with XFS, and created replicated volumes: # gluster volume create gvola replica 2
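The brick-preparation and volume-creation steps described above can be sketched as follows (the LV path, mount point, and hostnames here are illustrative assumptions, not taken from the original mail):

```shell
# On each node: format the logical volume with XFS and mount it as a brick
# (assumes an LV /dev/vg0/brick1 already exists; adjust to your layout)
mkfs.xfs -i size=512 /dev/vg0/brick1
mkdir -p /data/brick1
mount /dev/vg0/brick1 /data/brick1

# From one node: create and start a 1x2 replicated volume across both machines
gluster volume create gvola replica 2 \
    node1:/data/brick1/gvola node2:/data/brick1/gvola
gluster volume start gvola
```

These commands need a running two-node gluster cluster, so they are shown as an operational sketch rather than a runnable script.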

Re: [Gluster-users] pnfs(glusterfs-3.7.1 + ganesha-2.2) problem for clients layout commit

2015-06-24 Thread Jiffin Tony Thottan
Hi, Comments inline. On 24/06/15 14:39, 莊尚豪 wrote: Hi all, I am testing performance of pNFS (gluster-3.7.1 + ganesha-2.2) on Fedora 22. There are 4 glusterfs nodes with ganesha. I am following https://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/ Can

Re: [Gluster-users] [Gluster-devel] Looking for Fedora Package Maintainers.

2015-06-24 Thread M S Vishwanath Bhat
On 24 June 2015 at 14:56, Humble Devassy Chirammal humble.deva...@gmail.com wrote: Hi All, As we maintain 3 releases of GlusterFS (currently 3.5, 3.6 and 3.7) and average one release per week, we need more helping hands on this task. The responsibilities include building

Re: [Gluster-users] Epel7 Repo Data error

2015-06-24 Thread Humble Devassy Chirammal
Hi Frank, Can you please retry now? --Humble 2015-06-24 15:35 GMT+05:30 Frank Rothenstein f.rothenst...@bodden-kliniken.de: Hi there, trying yum update and getting an error about missing files. It seems like /pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/arch/repodata/repodata.xml has

[Gluster-users] Epel7 Repo Data error

2015-06-24 Thread Frank Rothenstein
Hi there, trying yum update and getting an error about missing files. It seems like /pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/arch/repodata/repodata.xml has wrong entries - a filename mismatch with the directory entries. yum: ovirt-3.5-glusterfs-epel/7/x86 FAILED
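When the repodata and the package filenames on a mirror disagree, a common first step on the client side is to discard the stale cached metadata and refetch it before retrying; a minimal sketch (the repo id is taken from the error line above, the rest is an assumption about a standard yum setup):

```shell
# Drop the locally cached repo metadata and rebuild the cache
yum clean metadata
yum makecache

# Retry the update against only the failing repo, to isolate it
yum update --disablerepo='*' --enablerepo='ovirt-3.5-glusterfs-epel'
```

If the refreshed metadata still points at files the mirror does not serve, the problem is on the server side (as turned out to be the case in this thread) and only the repo maintainers can fix it.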

Re: [Gluster-users] [Gluster-devel] Looking for Fedora Package Maintainers.

2015-06-24 Thread Raghavendra Talur
On 06/24/2015 03:17 PM, M S Vishwanath Bhat wrote: On 24 June 2015 at 14:56, Humble Devassy Chirammal humble.deva...@gmail.com wrote: Hi All, As we maintain 3 releases of GlusterFS (currently 3.5, 3.6 and 3.7) and average one

Re: [Gluster-users] Epel7 Repo Data error

2015-06-24 Thread Frank Rothenstein
Hi Humble, I did, but the error persists. I tried the CentOS repo and it is OK; the EPEL.repo still gets a 404 error. The filenames differ from the repodata.xml, e.g.: <data type=primary_db><checksum type=sha256>e0a86d586e6a64f58d9d08ce77e75cda4d17da839a464c3ebdb726ffd2a6c087</checksum><open-checksum>

[Gluster-users] Recovering missing files after remove-brick with brick failure

2015-06-24 Thread Andrew Roberts
Hello, this is my first post. In a test environment I have been testing likely failures and how to recover from them. I was testing the event that a disk begins to fail and we decide to remove the brick, and its counterpart replica brick, from the volume. Steps: 1. Call 'gluster volume

Re: [Gluster-users] [Gluster-devel] Looking for Fedora Package Maintainers.

2015-06-24 Thread Saravanakumar Arumugam
humble.deva...@gmail.com wrote: Hi All, As we maintain 3 releases of GlusterFS (currently 3.5, 3.6 and 3.7) and average one release per week, we need more helping hands on this task. The responsibilities include building

[Gluster-users] Minutes of todays Gluster Community Meeting

2015-06-24 Thread Venky Shankar
As we do every week, we had our Gluster Community Meeting earlier today. The agenda for next week can be found here: https://public.pad.fsfe.org/p/gluster-community-meetings Please add topics to the Open Floor / BYOT item around line 66 of the etherpad and attend the meeting next week to discuss

[Gluster-users] Brick logs filling with messages on gluster 3.7.2

2015-06-24 Thread Alessandro De Salvo
Hi, I just upgraded to gluster 3.7.2 from 3.7.1, and now the brick logs of my replicated volumes are filling very quickly with messages like this: [2015-06-24 14:08:13.989147] I [dict.c:467:dict_get] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f02d7cf7ef6] (-->
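Until the underlying cause of such INFO-level dict_get messages is fixed, their volume in the brick logs can usually be reduced by raising the brick log level; a sketch, assuming a volume named myvol (the volume name is illustrative):

```shell
# Log only WARNING and above in the brick logs (the default level is INFO)
gluster volume set myvol diagnostics.brick-log-level WARNING

# Revert to the default once the noisy messages are resolved
gluster volume reset myvol diagnostics.brick-log-level
```

This only suppresses the noise; it does not address whatever code path is emitting the messages.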

Re: [Gluster-users] Safely remove one replica

2015-06-24 Thread John Gardeniers
Hi Atin, On 25/06/15 14:34, Atin Mukherjee wrote: On 06/25/2015 10:01 AM, John Gardeniers wrote: Hi Atin, On 25/06/15 14:24, Atin Mukherjee wrote: On 06/25/2015 03:07 AM, John Gardeniers wrote: No takers on this one? On 22/06/15 14:37, John Gardeniers wrote: Until last weekend we had a

Re: [Gluster-users] Safely remove one replica

2015-06-24 Thread Atin Mukherjee
On 06/25/2015 10:01 AM, John Gardeniers wrote: Hi Atin, On 25/06/15 14:24, Atin Mukherjee wrote: On 06/25/2015 03:07 AM, John Gardeniers wrote: No takers on this one? On 22/06/15 14:37, John Gardeniers wrote: Until last weekend we had a simple 1x2 replicated volume, consisting of a

Re: [Gluster-users] Safely remove one replica

2015-06-24 Thread Atin Mukherjee
On 06/25/2015 03:07 AM, John Gardeniers wrote: No takers on this one? On 22/06/15 14:37, John Gardeniers wrote: Until last weekend we had a simple 1x2 replicated volume, consisting of a single brick on each peer. After a drive failure screwed the brick on one peer we decided to create a

Re: [Gluster-users] Safely remove one replica

2015-06-24 Thread John Gardeniers
Hi Atin, On 25/06/15 14:24, Atin Mukherjee wrote: On 06/25/2015 03:07 AM, John Gardeniers wrote: No takers on this one? On 22/06/15 14:37, John Gardeniers wrote: Until last weekend we had a simple 1x2 replicated volume, consisting of a single brick on each peer. After a drive failure

Re: [Gluster-users] Safely remove one replica

2015-06-24 Thread Ravishankar N
On 06/25/2015 03:07 AM, John Gardeniers wrote: No takers on this one? Hi John, If you either replace a brick of a replica or increase the replica count by adding another brick, you will need to run `gluster volume heal volname full` to sync the data onto the new/replaced brick. If
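The replace-and-heal sequence described above can be sketched as follows (the volume name, hostnames, and brick paths are illustrative assumptions):

```shell
# Swap the failed brick for a new one on the same peer
gluster volume replace-brick myvol \
    node2:/data/old-brick/myvol node2:/data/new-brick/myvol commit force

# Trigger a full self-heal so all data is synced onto the new brick
gluster volume heal myvol full

# Watch the heal progress until the pending-entries count reaches zero
gluster volume heal myvol info
```

These commands require a live replicated volume, so this is an operational sketch rather than a runnable script.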

[Gluster-users] split brain

2015-06-24 Thread p...@email.cz
Hello, a split-brain happened a few hours ago; can you tell which copy is the newest? # gluster volume heal 1KVM12_P3 info Brick 1kvm1:/STORAGES/g1r5p3/GFS/ /7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids - Is in split-brain Number of entries: 1 Brick 1kvm2:/STORAGES/g1r5p3/GFS/