[Gluster-users] [CentOS 6] Upgrade to the glusterfs version in base or in glusterfs-epel

2013-12-09 Thread Diep Pham Van
Hi, I'm using glusterfs version 3.4.0 from gluster-epel[1]. Recently, I found out that there's a glusterfs version in the base repo (3.4.0.36rhs). So, is it recommended to use that version instead of the gluster-epel version? If yes, is there a guide to make the switch with no downtime? When I run yum update
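
One way to stay on the gluster-epel packages while keeping the base repo enabled is to exclude glusterfs from base; a minimal sketch, assuming the stock CentOS-Base.repo layout (file and section names may differ on your system):

  # /etc/yum.repos.d/CentOS-Base.repo -- add to the [base] and [updates] sections
  exclude=glusterfs*

With that in place, yum update will only consider glusterfs packages from glusterfs-epel.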

Re: [Gluster-users] [CentOS 6] Upgrade to the glusterfs version in base or in glusterfs-epel

2013-12-09 Thread Nguyen Viet Cuong
Hi Diep, There is no glusterfs-server in the base repository, just the client. Regards, Cuong On Mon, Dec 9, 2013 at 7:31 PM, Diep Pham Van i...@favadi.com wrote: Hi, I'm using glusterfs version 3.4.0 from gluster-epel[1]. Recently, I found out that there's a glusterfs version in the base repo
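
A quick way to verify what the base repository actually carries (repo id assumed to be the stock "base"):

  yum --disablerepo='*' --enablerepo=base list available 'glusterfs*'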

Re: [Gluster-users] Testing failover and recovery

2013-12-09 Thread Per Hallsmark
Hello, Interesting, there seem to be several users with issues regarding recovery, but few to no replies... ;-) I did some more testing over the weekend. Same initial workload (two glusterfs servers, one client that continuously updates a file with timestamps) and then two easy
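
The client workload described above can be reproduced with a loop as simple as the following, assuming the volume is mounted at /mnt/gluster (path illustrative):

  while true; do date '+%F %T' >> /mnt/gluster/heartbeat.log; sleep 1; done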

[Gluster-users] Gluster - replica - Unable to self-heal contents of '/' (possible split-brain)

2013-12-09 Thread Alexandru Coseru
Hello, I'm trying to build a replica volume on two servers. The servers are blade6 and blade7 (there is also a blade1 in the peer list, but with no volumes). The volume seems ok, but I cannot mount it from NFS. Here are some logs: [root@blade6 stor1]# df -h /dev/mapper/gluster_stor1
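
For a replica volume reporting possible split-brain on '/', the usual first checks are the heal listings; note also that the built-in Gluster NFS server only serves NFSv3 over TCP, so the mount options matter. A sketch with illustrative volume and mount point names:

  gluster volume heal stor1 info split-brain
  mount -t nfs -o vers=3,proto=tcp blade6:/stor1 /mnt/stor1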

[Gluster-users] Gluster infrastructure question

2013-12-09 Thread Heiko Krämer
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Heyho guys, I've been running glusterfs for years in a small environment without big problems. Now I'm going to use glusterFS for a bigger cluster, but I have some questions :) Environment: * 4 Servers * 20 x 2TB HDD, each * Raidcontroller * Raid 10 * 4x

Re: [Gluster-users] How reliable is XFS under Gluster?

2013-12-09 Thread Kal Black
Thank you all for the wonderful input. I haven't used XFS extensively so far, and my concerns primarily came from reading an article (mostly the discussion after it) by Jonathan Corbet on LWN (http://lwn.net/Articles/476263/) and another one, http://toruonu.blogspot.ca/2012/12/xfs-vs-ext4.html.

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread Nux!
On 09.12.2013 13:18, Heiko Krämer wrote: 1) I'm wondering whether I can delete the RAID 10 on each server and create a separate brick for each HDD. In that case the volume would have 80 bricks, i.e. 4 servers x 20 HDDs. Is there any experience about the write throughput in a production system with many of

[Gluster-users] Scalability - File system or Object Store

2013-12-09 Thread Randy Breunling
From any experience...which has shown to scale better...a file system or an object store? --Randy San Jose CA

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread Joe Julian
Nux! n...@li.nux.ro wrote: On 09.12.2013 13:18, Heiko Krämer wrote: 1) I'm wondering whether I can delete the RAID 10 on each server and create a separate brick for each HDD. In that case the volume would have 80 bricks, i.e. 4 servers x 20 HDDs. Is there any experience about the write throughput in a

Re: [Gluster-users] [Gluster-devel] GlusterFest Test Weekend - 3.5 Test #1

2013-12-09 Thread John Mark Walker
Incidentally, we're wrapping this up today. If you want to be included in the list of swag-receivers (t-shirt, USB car charger, and stickers), you still have a couple of hours to file a bug and have it verified by the dev team. Thanks, everyone :) -JM - Original Message - On

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread Nux!
On 09.12.2013 16:09, Joe Julian wrote: Brick disruption has been addressed in 3.4. Good to know! What exactly happens when the brick goes unresponsive? Additionally, if a brick goes bad gluster won't do anything about it; the affected volumes will just slow down or stop working altogether.
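
Whether a brick process has dropped out can be seen per brick in the status output, where a dead brick shows Online "N" (volume name illustrative):

  gluster volume status myvol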

[Gluster-users] compatibility between 3.3 and 3.4

2013-12-09 Thread samuel
Hi all, We're playing around with new versions and upgrading options. We currently have a 2x2x2 striped-distributed-replicated volume based on 3.3.0 and we're planning to upgrade to version 3.4. We've tried upgrading the clients first, and we've tried 3.4.0, 3.4.1 and 3.4.2qa2, but all of them

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread bernhard glomm
Hi Heiko, some years ago I had to deliver a reliable storage system that should be easy to grow in size over time. For that I was in close contact with PrestoPRIME, who produced a lot of interesting research results accessible to the public: http://www.prestoprime.org/project/public.en.html what was

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread Ben Turner
- Original Message - From: Heiko Krämer hkrae...@anynines.de To: gluster-users@gluster.org List gluster-users@gluster.org Sent: Monday, December 9, 2013 8:18:28 AM Subject: [Gluster-users] Gluster infrastructure question -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Heyho guys,

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread Ben Turner
- Original Message - From: Ben Turner btur...@redhat.com To: Heiko Krämer hkrae...@anynines.de Cc: gluster-users@gluster.org List gluster-users@gluster.org Sent: Monday, December 9, 2013 2:26:45 PM Subject: Re: [Gluster-users] Gluster infrastructure question - Original Message

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread Dan Mons
I went with big RAID on each node (16x 3TB SATA disks in RAID6 with a hot spare per node) rather than brick-per-disk. The simple reason being that I wanted to configure distribute+replicate at the GlusterFS level, and be 100% guaranteed that the replication happened to another node, and

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread Joe Julian
Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
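
The same command, wrapped so the replica pairs are visible:

  # replica pairs: (server1, server2) and (server3, server4)
  gluster volume create myvol replica 2 \
    server1:/data/brick1 server2:/data/brick1 \
    server3:/data/brick1 server4:/data/brick1

With many bricks per server, listing them alternately across servers (server1:/brick1 server2:/brick1 server1:/brick2 server2:/brick2 ...) keeps every replica pair on two different nodes.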

Re: [Gluster-users] Gluster infrastructure question

2013-12-09 Thread Dan Mons
On 10 December 2013 08:09, Joe Julian j...@julianfamily.org wrote: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate

Re: [Gluster-users] [CentOS 6] Upgrade to the glusterfs version in base or in glusterfs-epel

2013-12-09 Thread Diep Pham Van
On Mon, 9 Dec 2013 19:53:20 +0900 Nguyen Viet Cuong mrcuon...@gmail.com wrote: There is no glusterfs-server in the base repository, just the client. Silly me. After installing and attempting to mount with the base version of glusterfs-fuse, I realized that I have to change the 'backupvolfile-server' mount option
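
For reference, the option is passed to the FUSE client at mount time; server and volume names below are illustrative:

  mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol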

[Gluster-users] Where does the 'date' string in '/var/log/glusterfs/gl.log' come from?

2013-12-09 Thread harry mangalam
Admittedly I should search the source, but I wonder if anyone knows this offhand. Background: of our 84 ROCKS (6.1) -provisioned compute nodes, 4 have picked up an 'advanced date' in the /var/log/glusterfs/gl.log file - that date string is running about 5-6 hours ahead of the system date

[Gluster-users] FW: Self Heal Issue GlusterFS 3.3.1

2013-12-09 Thread Bobby Jacob
Hi, Can someone please advise on this issue? Urgent. Self-heal is running only every 10 minutes. Thanks Regards, Bobby Jacob From: Bobby Jacob Sent: Tuesday, December 03, 2013 8:51 AM To: gluster-users@gluster.org Subject: FW: Self Heal Issue GlusterFS 3.3.1 Just an addition: on the

Re: [Gluster-users] Where does the 'date' string in '/var/log/glusterfs/gl.log' come from?

2013-12-09 Thread Sharuzzaman Ahmat Raslan
Hi Harry, Did you set up NTP on each of the nodes and sync the time to a single source? Thanks. On Tue, Dec 10, 2013 at 12:44 PM, harry mangalam harry.manga...@uci.edu wrote: Admittedly I should search the source, but I wonder if anyone knows this offhand. Background: of our 84 ROCKS
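
A couple of quick checks per node, assuming ntpd is in use (reference server illustrative):

  ntpq -p                    # list configured peers and current offsets
  ntpdate -q pool.ntp.org    # query-only comparison against an external source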

Re: [Gluster-users] Self Heal Issue GlusterFS 3.3.1

2013-12-09 Thread Joe Julian
On Tue, 2013-12-03 at 05:47 +, Bobby Jacob wrote: Hi, I'm running glusterFS 3.3.1 on CentOS 6.4. gluster volume status: Status of volume: glustervol Gluster process   Port   Online   Pid

[Gluster-users] Pausing rebalance

2013-12-09 Thread Franco Broi
Before attempting a rebalance on my existing distributed Gluster volume, I thought I'd do some testing with my new storage. I created a volume consisting of 4 bricks on the same server and wrote some data to it. I then added a new brick from another server. I ran the fix-layout and wrote some
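
The sequence described above corresponds roughly to the following commands; volume, server, and brick names are illustrative:

  gluster volume add-brick testvol server2:/data/brick5
  gluster volume rebalance testvol fix-layout start
  gluster volume rebalance testvol status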

Re: [Gluster-users] Where does the 'date' string in '/var/log/glusterfs/gl.log' come from?

2013-12-09 Thread Vijay Bellur
On 12/10/2013 10:14 AM, harry mangalam wrote: Admittedly I should search the source, but I wonder if anyone knows this offhand. Background: of our 84 ROCKS (6.1) -provisioned compute nodes, 4 have picked up an 'advanced date' in the /var/log/glusterfs/gl.log file - that date string is running

Re: [Gluster-users] Pausing rebalance

2013-12-09 Thread shishir gowda
Hi Franco, If a file is under migration and a rebalance stop is encountered, the rebalance process exits only after that migration completes. That might be one of the reasons why you saw the 'rebalance in progress' message while trying to add the brick. Could you please share the average file
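
A stop request and the follow-up check look like this (volume name illustrative); as described above, a file already being migrated is finished before the rebalance process exits:

  gluster volume rebalance testvol stop
  gluster volume rebalance testvol status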

Re: [Gluster-users] replace-brick failing - transport.address-family not specified

2013-12-09 Thread Vijay Bellur
On 12/08/2013 05:44 PM, Alex Pearson wrote: Hi All, Just to assist anyone else having this issue, and so people can correct me if I'm wrong... It would appear that replace-brick is 'horribly broken' and should not be used in Gluster 3.4. Instead a combination of remove-brick ... count X ...
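
The workaround hinted at above (shrinking and re-growing the replica instead of using replace-brick) would look roughly like this; a sketch only, with illustrative volume name, replica counts, and brick paths, and the heal should be verified before relying on the new brick:

  # drop the failed brick, reducing the replica count
  gluster volume remove-brick myvol replica 1 server1:/data/brick1 force
  # add the replacement brick, restoring the replica count
  gluster volume add-brick myvol replica 2 server3:/data/brick1
  # repopulate the new brick
  gluster volume heal myvol full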

Re: [Gluster-users] [CentOS 6] Upgrade to the glusterfs version in base or in glusterfs-epel

2013-12-09 Thread Vijay Bellur
On 12/10/2013 09:36 AM, Diep Pham Van wrote: On Mon, 9 Dec 2013 19:53:20 +0900 Nguyen Viet Cuong mrcuon...@gmail.com wrote: There is no glusterfs-server in the base repository, just the client. Silly me. After installing and attempting to mount with the base version of glusterfs-fuse, I realized that I have

Re: [Gluster-users] Pausing rebalance

2013-12-09 Thread Franco Broi
On Tue, 2013-12-10 at 10:56 +0530, shishir gowda wrote: Hi Franco, If a file is under migration and a rebalance stop is encountered, the rebalance process exits only after the completion of the migration. That might be one of the reasons why you saw the rebalance in progress message

Re: [Gluster-users] replace-brick failing - transport.address-family not specified

2013-12-09 Thread Vijay Bellur
On 12/08/2013 07:06 PM, Nguyen Viet Cuong wrote: Thanks for sharing. Btw, I do believe that GlusterFS 3.2.x is much more stable than 3.4.x in production. This is quite contrary to what we have seen in the community. From a development perspective too, we feel much better about 3.4.1. Are

Re: [Gluster-users] Pausing rebalance

2013-12-09 Thread Kaushal M
On Tue, Dec 10, 2013 at 11:09 AM, Franco Broi franco.b...@iongeo.com wrote: On Tue, 2013-12-10 at 10:56 +0530, shishir gowda wrote: Hi Franco, If a file is under migration and a rebalance stop is encountered, the rebalance process exits only after the completion of the migration. That

Re: [Gluster-users] Pausing rebalance

2013-12-09 Thread Franco Broi
Thanks for clearing that up. I had to wait about 30 minutes for all rebalancing activity to cease, then I was able to add a new brick. What does it use to migrate the files? The copy rate was pretty slow considering both bricks were on the same server; I only saw about 200MB/sec. Each brick is a

Re: [Gluster-users] Self Heal Issue GlusterFS 3.3.1

2013-12-09 Thread Bobby Jacob
Hi, Thanks Joe, the split-brain files have been removed as you recommended. How can we deal with this situation, as there is no document which covers such issues? [root@KWTOCUATGS001 83]# gluster volume heal glustervol info Gathering Heal info on volume glustervol has been successful Brick
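
For the record, the removal referred to above has to happen on the brick itself, together with the matching gfid hard link under .glusterfs, after which a heal can be triggered; the paths and gfid below are purely illustrative:

  # on the brick holding the bad copy
  getfattr -m . -d -e hex /export/brick1/path/to/file   # note the trusted.gfid value
  rm /export/brick1/path/to/file
  rm /export/brick1/.glusterfs/ab/cd/abcd1234-...       # hard link named after the gfid
  gluster volume heal glustervol full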