Hi all,
I tested some performance of pNFS (gluster-3.7.1 + ganesha-2.2) on Fedora 22.
There are 4 glusterfs nodes with ganesha.
I referenced
https://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/
The clients (Fedora 21) can mount fine and commit some small
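(For readers following the same doc, a minimal pNFS mount sketch; the server name and volume below are hypothetical.)
# mount -t nfs -o vers=4.1 node1:/testvol /mnt/gluster
vers=4.1 selects the NFSv4.1 protocol that pNFS requires; mounting any one of the ganesha nodes is enough, since the client then obtains layouts and talks to the data servers directly.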
Hi All,
As we maintain 3 releases (currently 3.5, 3.6 and 3.7) of GlusterFS and
have an average of one release per week, we need more helping hands on
this task.
The responsibility includes building Fedora and EPEL RPMs using the Koji
build system and deploying the RPMs to
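(A rough sketch of that build step; the build target and srpm name below are hypothetical.)
# koji build --scratch f22 glusterfs-3.7.2-1.fc22.src.rpm
A scratch build verifies the package builds cleanly without tagging it into the target; the actual release build drops --scratch.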
Hello,
for testing and scaling purposes I gave it a try with 2 Ubuntu 14.04
machines on VMware Workstation. I installed GlusterFS 3.5 on both of them. I
started out by creating identical LVM logical volumes on both of them, formatted
them with XFS, and created replicated volumes:
#gluster volume create gvola replica 2
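(For reference, the complete form of that command also takes the brick list; the hostnames and brick paths below are hypothetical.)
# gluster volume create gvola replica 2 node1:/data/brick1/gvola node2:/data/brick1/gvola
# gluster volume start gvola
With replica 2, every write goes to both bricks, so each node holds a full copy of the data.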
Hi,
Comments inline.
On 24/06/15 14:39, 莊尚豪 wrote:
Hi all,
I tested some performance of pNFS (gluster-3.7.1 + ganesha-2.2) on
Fedora 22.
There are 4 glusterfs nodes with ganesha.
I referenced
https://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/
Can
On 24 June 2015 at 14:56, Humble Devassy Chirammal humble.deva...@gmail.com
wrote:
Hi All,
As we maintain 3 releases (currently 3.5, 3.6 and 3.7) of GlusterFS and
have an average of one release per week, we need more helping hands on
this task.
The responsibility includes building
Hi Frank,
Can you please retry now?
--Humble
2015-06-24 15:35 GMT+05:30 Frank Rothenstein
f.rothenst...@bodden-kliniken.de:
Hi there,
trying yum update and getting an error about missing files. It seems like
/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/arch/repodata/repodata.xml has
Hi there,
trying yum update and getting an error about missing files. It seems like
/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/arch/repodata/repodata.xml has
wrong entries - the filenames mismatch the directory entries.
yum:
ovirt-3.5-glusterfs-epel/7/x86 FAILED
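(A hedged stop-gap while the repo metadata is broken; the repo id is taken from the error above.)
# yum clean metadata
# yum --disablerepo=ovirt-3.5-glusterfs-epel update
Clearing the cached metadata forces a fresh download on the next run, and disabling the broken repo lets the rest of the update proceed.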
On 06/24/2015 03:17 PM, M S Vishwanath Bhat wrote:
On 24 June 2015 at 14:56, Humble Devassy Chirammal
humble.deva...@gmail.com wrote:
Hi All,
As we maintain 3 releases (currently 3.5, 3.6 and 3.7) of
GlusterFS and have an average of one
Hi Humble,
I did, but the error persists. I tried the CentOS repo, which is OK, but the
EPEL.repo still gets a 404 error. The filenames differ from
repodata.xml, e.g.:
<data type="primary_db">
  <checksum type="sha256">e0a86d586e6a64f58d9d08ce77e75cda4d17da839a464c3ebdb726ffd2a6c087</checksum>
  <open-checksum>
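(One hedged way to confirm such a mismatch by hand is to compare the digest recorded in repodata.xml with the file actually on disk; the filename below is a placeholder.)
# sha256sum repodata/<hash>-primary.sqlite.bz2
If the digest or the filename does not match the <checksum> entry above, the repository metadata is inconsistent with the directory contents.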
Hello, this is my first post.
In a test environment, I have been testing likely failures and how to recover from them.
I was testing the event that a disk begins to fail and we decide to remove the brick, and its counterpart replica brick, from the volume.
Steps:
1. Call 'gluster volume
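(The quoted steps are cut off; as a hedged sketch, removing a replica pair from a distributed-replicated volume usually follows this sequence, with hypothetical volume and brick names.)
# gluster volume remove-brick testvol node1:/data/brick2 node2:/data/brick2 start
# gluster volume remove-brick testvol node1:/data/brick2 node2:/data/brick2 status
# gluster volume remove-brick testvol node1:/data/brick2 node2:/data/brick2 commit
The start/status/commit sequence migrates the data off the departing bricks before they are detached.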
humble.deva...@gmail.com wrote:
Hi All,
As we maintain 3 releases (currently 3.5, 3.6 and 3.7) of
GlusterFS and have an average of one release per week, we need
more helping hands on this task.
The responsibility includes building
As we do every week, we had our Gluster Community Meeting earlier today. The
agenda for next week can be found here:
https://public.pad.fsfe.org/p/gluster-community-meetings
Please add topics to the Open Floor / BYOT item around line 66 of
the etherpad and attend the meeting next week to discuss
Hi,
I just upgraded to gluster 3.7.2 from 3.7.1, and now the brick logs of
my replicated volumes are filling very quickly with messages like this:
[2015-06-24 14:08:13.989147] I [dict.c:467:dict_get]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f02d7cf7ef6]
(-->
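(As a hedged stop-gap while the flood is investigated, the brick log level can be raised; the volume name below is hypothetical.)
# gluster volume set myvol diagnostics.brick-log-level WARNING
This suppresses INFO-level entries like the one above without addressing the underlying cause.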
Hi Atin,
On 25/06/15 14:34, Atin Mukherjee wrote:
On 06/25/2015 10:01 AM, John Gardeniers wrote:
Hi Atin,
On 25/06/15 14:24, Atin Mukherjee wrote:
On 06/25/2015 03:07 AM, John Gardeniers wrote:
No takers on this one?
On 22/06/15 14:37, John Gardeniers wrote:
Until last weekend we had a
On 06/25/2015 10:01 AM, John Gardeniers wrote:
Hi Atin,
On 25/06/15 14:24, Atin Mukherjee wrote:
On 06/25/2015 03:07 AM, John Gardeniers wrote:
No takers on this one?
On 22/06/15 14:37, John Gardeniers wrote:
Until last weekend we had a simple 1x2 replicated volume, consisting
of a
On 06/25/2015 03:07 AM, John Gardeniers wrote:
No takers on this one?
On 22/06/15 14:37, John Gardeniers wrote:
Until last weekend we had a simple 1x2 replicated volume, consisting
of a single brick on each peer. After a drive failure screwed the
brick on one peer we decided to create a
Hi Atin,
On 25/06/15 14:24, Atin Mukherjee wrote:
On 06/25/2015 03:07 AM, John Gardeniers wrote:
No takers on this one?
On 22/06/15 14:37, John Gardeniers wrote:
Until last weekend we had a simple 1x2 replicated volume, consisting
of a single brick on each peer. After a drive failure
On 06/25/2015 03:07 AM, John Gardeniers wrote:
No takers on this one?
Hi John,
If you either replace a brick of a replica or increase the replica count
by adding another brick, you will need to perform `gluster volume heal
volname full` to sync the data into the new/replaced brick.
If
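(A minimal sketch of that heal workflow; the volume name is hypothetical.)
# gluster volume heal myvol full
# gluster volume heal myvol info
The second command lists per-brick entries still pending heal, which is a convenient way to watch the sync progress.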
Hello,
a split brain happened a few hours ago; how would you determine which copy is
the newest?
# gluster volume heal 1KVM12_P3 info
Brick 1kvm1:/STORAGES/g1r5p3/GFS/
/7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids - Is in split-brain
Number of entries: 1
Brick 1kvm2:/STORAGES/g1r5p3/GFS/
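(A hedged way to inspect the two copies: on each brick, read the AFR changelog xattrs and the timestamps of the file named in the heal output above.)
# getfattr -d -m . -e hex /STORAGES/g1r5p3/GFS/7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids
# stat /STORAGES/g1r5p3/GFS/7e5ca629-5e97-4220-a6b2-b93242e8f314/dom_md/ids
A non-zero trusted.afr.*-client-* value on one brick means that copy saw writes the other missed; comparing mtimes gives a rough idea of which copy is newer.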