Hi,
I'm using glusterfs version 3.4.0 from gluster-epel[1].
Recently, I found out that there's a glusterfs version in the base repo
(3.4.0.36rhs).
So, is it recommended to use that version instead of the gluster-epel version?
If yes, is there a guide to making the switch with no downtime?
When I run yum update
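(For what it's worth, a quick way to compare what each repository actually offers before switching; the 'base' repo id below is an assumption about the CentOS setup:)
  yum --showduplicates list glusterfs glusterfs-fuse
  yum --disablerepo='*' --enablerepo=base info glusterfs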
Hi Diep,
There is no glusterfs-server in the base repository, just client.
Regards,
Cuong
On Mon, Dec 9, 2013 at 7:31 PM, Diep Pham Van i...@favadi.com wrote:
Hi,
I'm using glusterfs version 3.4.0 from gluster-epel[1].
Recently, I found out that there's a glusterfs version in the base repo
Hello,
Interesting, there seem to be several of us with issues regarding recovery,
but there are few to no replies... ;-)
I did some more testing over the weekend. Same initial workload (two
glusterfs servers, one client that continuously
updates a file with timestamps) and then two easy
Hello,
I'm trying to build a replica volume on two servers.
The servers are blade6 and blade7 (there is another blade1 in the peer list, but with
no volumes).
The volume seems OK, but I cannot mount it via NFS.
Here are some logs:
[root@blade6 stor1]# df -h
/dev/mapper/gluster_stor1
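(In case it helps anyone reading later: Gluster's built-in NFS server only speaks NFSv3 over TCP, so a mount along these lines is the usual test; the volume name 'stor1' is only a guess from the paths above:)
  mount -t nfs -o vers=3,proto=tcp blade6:/stor1 /mnt/test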
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Heyho guys,
I've been running glusterfs for years in a small environment without big
problems.
Now I'm going to use GlusterFS for a bigger cluster, but I have some
questions :)
Environment:
* 4 Servers
* 20 x 2TB HDD, each
* Raidcontroller
* Raid 10
* 4x
Thank you all for the wonderful input,
I haven't used XFS extensively so far, and my concerns primarily came from
reading an article (mostly the discussion after it) by Jonathan Corbet
on LWN (http://lwn.net/Articles/476263/) and another one at
http://toruonu.blogspot.ca/2012/12/xfs-vs-ext4.html.
On 09.12.2013 13:18, Heiko Krämer wrote:
1)
I'm wondering whether I can delete the RAID 10 on each server and create
a separate brick for each HDD.
In that case the volume would have 80 bricks (4 servers x 20 HDDs). Is there
any experience with the write throughput in a production system with
many of
In anyone's experience, which has been shown to scale better: a file system or an
object store?
--Randy
San Jose CA
Nux! n...@li.nux.ro wrote:
On 09.12.2013 13:18, Heiko Krämer wrote:
1)
I'm wondering whether I can delete the RAID 10 on each server and create
a separate brick for each HDD.
In that case the volume would have 80 bricks (4 servers x 20 HDDs). Is there
any experience with the write throughput in a
Incidentally, we're wrapping this up today. If you want to be included in the
list of swag-receivers (t-shirt, USB car charger, and stickers), you still have
a couple of hours to file a bug and have it verified by the dev team.
Thanks, everyone :)
-JM
- Original Message -
On
On 09.12.2013 16:09, Joe Julian wrote:
Brick disruption has been addressed in 3.4.
Good to know! What exactly happens when the brick goes unresponsive?
Additionally, if a brick goes bad gluster won't do anything about it;
the affected volumes will just slow down or stop working altogether.
Hi all,
We're playing around with new versions and upgrade options. We currently
have a 2x2x2 striped-distributed-replicated volume based on 3.3.0 and
we're planning to upgrade to version 3.4.
We've tried upgrading the clients first, with 3.4.0, 3.4.1
and 3.4.2qa2, but all of them
Hi Heiko,
some years ago I had to deliver a reliable storage system that should be easy to grow
in size over time.
For that I was in close contact with
PrestoPRIME, who produced a lot of interesting research results accessible to
the public:
http://www.prestoprime.org/project/public.en.html
what was
- Original Message -
From: Heiko Krämer hkrae...@anynines.de
To: gluster-users@gluster.org List gluster-users@gluster.org
Sent: Monday, December 9, 2013 8:18:28 AM
Subject: [Gluster-users] Gluster infrastructure question
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Heyho guys,
- Original Message -
From: Ben Turner btur...@redhat.com
To: Heiko Krämer hkrae...@anynines.de
Cc: gluster-users@gluster.org List gluster-users@gluster.org
Sent: Monday, December 9, 2013 2:26:45 PM
Subject: Re: [Gluster-users] Gluster infrastructure question
- Original Message
I went with big RAID on each node (16x 3TB SATA disks in RAID6 with a
hot spare per node) rather than brick-per-disk. The simple reason
being that I wanted to configure distribute+replicate at the GlusterFS
level, and be 100% guaranteed that the replication happened across to
another node, and
Replicas are defined in the order bricks are listed in the volume create
command. So
gluster volume create myvol replica 2 server1:/data/brick1
server2:/data/brick1 server3:/data/brick1 server4:/data/brick1
will replicate between server1 and server2 and replicate between server3
and server4.
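(A quick way to sanity-check the resulting layout, using the same hypothetical volume name: bricks are listed in creation order, and each consecutive group of <replica count> bricks forms one replica set.)
  gluster volume info myvol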
On 10 December 2013 08:09, Joe Julian j...@julianfamily.org wrote:
Replicas are defined in the order bricks are listed in the volume create
command. So
gluster volume create myvol replica 2 server1:/data/brick1
server2:/data/brick1 server3:/data/brick1 server4:/data/brick1
will replicate
On Mon, 9 Dec 2013 19:53:20 +0900
Nguyen Viet Cuong mrcuon...@gmail.com wrote:
There is no glusterfs-server in the base repository, just client.
Silly me.
After installing and attempting to mount with the base version of glusterfs-fuse,
I realized that I have to change the 'backupvolfile-server' mount option
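(For illustration, a fuse mount with that option looks roughly like the line below; the server, volume and mount-point names are made up:)
  mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/gluster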
Admittedly I should search the source, but I wonder if anyone knows this
offhand.
Background: of our 84 ROCKS (6.1)-provisioned compute nodes, 4 have picked
up an 'advanced date' in the /var/log/glusterfs/gl.log file - that date
string is running about 5-6 hours ahead of the system date
Hi,
Can someone please advise on this issue? It's urgent. Self-heal is running only every
10 minutes.
Thanks Regards,
Bobby Jacob
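(One workaround, if waiting on the roughly 10-minute self-heal crawl is the problem, is to trigger a heal by hand; 'glustervol' matches the volume named later in this thread:)
  gluster volume heal glustervol
  gluster volume heal glustervol full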
From: Bobby Jacob
Sent: Tuesday, December 03, 2013 8:51 AM
To: gluster-users@gluster.org
Subject: FW: Self Heal Issue GlusterFS 3.3.1
Just an addition: on the
Hi Harry,
Did you set up NTP on each of the nodes, and sync the time to one single
source?
Thanks.
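(As a rough sketch on CentOS 6-style nodes, assuming the default public NTP pool is reachable:)
  yum install -y ntp
  ntpdate pool.ntp.org          # one-off sync before starting the daemon
  chkconfig ntpd on && service ntpd start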
On Tue, Dec 10, 2013 at 12:44 PM, harry mangalam harry.manga...@uci.eduwrote:
Admittedly I should search the source, but I wonder if anyone knows this
offhand.
Background: of our 84 ROCKS
On Tue, 2013-12-03 at 05:47 +, Bobby Jacob wrote:
Hi,
I'm running GlusterFS 3.3.1 on CentOS 6.4.
# gluster volume status
Status of volume: glustervol
Gluster process                                         Port    Online  Pid
Before attempting a rebalance on my existing distributed Gluster volume
I thought I'd do some testing with my new storage. I created a volume
consisting of 4 bricks on the same server and wrote some data to it. I
then added a new brick from another server. I ran the fix-layout and
wrote some
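(For anyone following along, the sequence described above would look roughly like this; the volume name and brick path are invented:)
  gluster volume add-brick testvol server2:/data/brick5
  gluster volume rebalance testvol fix-layout start
  gluster volume rebalance testvol status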
On 12/10/2013 10:14 AM, harry mangalam wrote:
Admittedly I should search the source, but I wonder if anyone knows this
offhand.
Background: of our 84 ROCKS (6.1)-provisioned compute nodes, 4 have
picked up an 'advanced date' in the /var/log/glusterfs/gl.log file -
that date string is running
Hi Franco,
If a file is under migration and a rebalance stop is encountered, the
rebalance process exits only after that migration completes.
That might be one of the reasons why you saw the "rebalance in progress" message
while trying to add the brick.
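(To see whether a migration is still draining after a stop request, the status command is the usual check; the volume name below is just a placeholder:)
  gluster volume rebalance myvol stop
  gluster volume rebalance myvol status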
Could you please share the average file
On 12/08/2013 05:44 PM, Alex Pearson wrote:
Hi All,
Just to assist anyone else having this issue, and so people can correct me if
I'm wrong...
It would appear that replace-brick is 'horribly broken' and should not be used in Gluster 3.4.
Instead a combination of remove-brick ... count X ...
On 12/10/2013 09:36 AM, Diep Pham Van wrote:
On Mon, 9 Dec 2013 19:53:20 +0900
Nguyen Viet Cuong mrcuon...@gmail.com wrote:
There is no glusterfs-server in the base repository, just client.
Silly me.
After installing and attempting to mount with the base version of glusterfs-fuse,
I realized that I have
On Tue, 2013-12-10 at 10:56 +0530, shishir gowda wrote:
Hi Franco,
If a file is under migration, and a rebalance stop is encountered,
the rebalance process exits only after the completion of the
migration.
That might be one of the reasons why you saw the "rebalance in progress"
message
On 12/08/2013 07:06 PM, Nguyen Viet Cuong wrote:
Thanks for sharing.
Btw, I do believe that GlusterFS 3.2.x is much more stable than 3.4.x in
production.
This is quite contrary to what we have seen in the community. From a
development perspective too, we feel much better about 3.4.1. Are
On Tue, Dec 10, 2013 at 11:09 AM, Franco Broi franco.b...@iongeo.com wrote:
On Tue, 2013-12-10 at 10:56 +0530, shishir gowda wrote:
Hi Franco,
If a file is under migration, and a rebalance stop is encountered,
the rebalance process exits only after the completion of the
migration.
That
Thanks for clearing that up. I had to wait about 30 minutes for all
rebalancing activity to cease, then I was able to add a new brick.
What does it use to migrate the files? The copy rate was pretty slow
considering both bricks were on the same server; I only saw about
200MB/sec. Each brick is a
Hi,
Thanks Joe, the split-brain files have been removed as you recommended. How can
we deal with this situation, as there is no document which covers such issues?
[root@KWTOCUATGS001 83]# gluster volume heal glustervol info
Gathering Heal info on volume glustervol has been successful
Brick
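(For what it's worth, the heal command also has split-brain-specific views, assuming the same volume as above:)
  gluster volume heal glustervol info split-brain
  gluster volume heal glustervol info heal-failed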