Hi All,
As discussed yesterday in the community meeting [1], I have updated the
3.5 schedule. The changes include:
1. Moving all dates by a week.
2. Adding a documentation hackathon day.
I have also updated the features section to reflect what is going to be
available in 3.5.0. [2]
The fi
On 11/28/2013 12:52 PM, Patrick Haley wrote:
Hi Ravi,
Thanks for the reply. If I interpret the output of gluster volume status
correctly, glusterfsd was running
[root@mseas-data ~]# gluster volume status
Status of volume: gdata
Gluster process                             Port    On
I'm no expert, but I can guess that to make the whole thing scalable,
gluster needs to go through the gluster client, then the network, then the
gluster server, then a native disk write, while a native write hits the disk only.
Again, it doesn't make sense to do such a comparison, or if single node
fit yo
1) Usage of the gluster volume heal command:
To see the list of files that require self-heal: "gluster volume heal
<volname> info"
To see files that were self-healed: "gluster volume heal
<volname> info healed"
To see files which failed to self-heal: "gluster volume heal
<volname> info heal-failed"
To see if fi
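A minimal cheat-sheet for the heal queries above, assuming a volume named myvolume (substitute your own volume name); these need a running gluster cluster, so treat them as a transcript sketch:

```shell
# List files currently queued for self-heal
gluster volume heal myvolume info

# List files that were recently self-healed
gluster volume heal myvolume info healed

# List files on which self-heal failed
gluster volume heal myvolume info heal-failed
```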
On 11/28/2013 03:12 AM, Pat Haley wrote:
Hi,
We are currently using gluster with 3 bricks. We just
rebooted one of the bricks (mseas-data, also identified
as gluster-data) which is actually the main server. After
rebooting this brick, our client machine (mseas) only sees
the files on the othe
 Hi Couilles-de-Loups!
After a few unsuccessful attempts to get answers on the gluster chat, I turn to
email.
I have Glusterfs version 3.4.0.
1) What is the correct usage of the command "gluster volume heal myvolume info
heal-failed"?
When I type this command, I get a list of files:
On 27.11.2013 15:21, Nux! wrote:
Hello,
I've caught a glimpse of the new RHEL 6.5 (well, new packages landed
in CentOS) and I notice it now provides quite a nice version of
Gluster 3.4.0 with a lot of backports.
I've got 2 problems now:
- I've got gluster installed from
http://download.gluster.
On Wed, Nov 27, 2013 at 5:14 PM, Marcus Bointon
wrote:
> On 27 Nov 2013, at 09:30, lei yang wrote:
>
> I have a machine which has 5 hard disks
>
> I want to use glusterfs to speed up my disks
>
>
> You do know that gluster is not about single-node performance? You'll get
> far better performance by us
Hi,
We are currently using gluster with 3 bricks. We just
rebooted one of the bricks (mseas-data, also identified
as gluster-data) which is actually the main server. After
rebooting this brick, our client machine (mseas) only sees
the files on the other 2 bricks. Note that if I mount
the glus
I have created a 2-node replicated cluster with GlusterFS 3.4.1 on CentOS 6.4.
Mounting the volume locally on each server using the native client works fine;
however, I am having issues with a separate client-only server from which I wish
to use NFS to mount the gluster volume.
Volume Name: glus
Hello,
I'm trying to get glusterd to shut down cleanly on a server. I'm running 3.4.1-3.
I fixed an issue in /etc/init.d/glusterd with RETVAL in the stop() function
(apparently also fixed upstream).
I now have one remaining issue: the nfs server does not shut down:
# Running system
[root@ads2 ~]
On 11/26/2013 04:29 PM, Maik Kulbe wrote:
Gluster now has the changelog translator (journaling mechanism) which
records
changes made to the filesystem (on each brick).
In which version will that be included? 3.5? And is there any
documentation on performance of journal Geo-Rep vs. old Geo-Rep?
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.5qa2/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5qa2.tar.gz
This release is made off jenkins-release-48
-- Gluster Build System
___
Gluster-users mailing list
Gluster-users@
Hi,
This is along the lines of "tools for sysadmins". I plan on using
these algorithms for puppet-gluster, but will try to maintain them
separately as a standalone tool.
The problem: Given a set of bricks and servers, if they have a logical
naming convention, can an algorithm decide the ideal ord
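A toy sketch of the brick-ordering idea in shell, under the assumptions that bricks are written as "server:/path" and that the "ideal order" means interleaving bricks round-robin across servers, so neighbouring bricks (e.g. replica pairs) never land on one host. The function name and input format are hypothetical, not part of puppet-gluster:

```shell
# Read "server:/path" brick names on stdin and emit them interleaved
# round-robin across servers: first brick of each server, then the
# second of each, and so on.
order_bricks() {
    sort |                                # group the input by server name
    awk -F: '{ print ++idx[$1], $0 }' |   # tag each brick with its per-server rank
    sort -n -k1,1 |                       # order by rank, so servers alternate
    cut -d' ' -f2-                        # drop the rank tag
}
```

For example, feeding it two bricks on each of two servers yields host-a's first brick, host-b's first brick, then the second of each.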
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.2qa1/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.2qa1.tar.gz
This release is made off jenkins-release-47
-- Gluster Build System
Today's meeting minutes available here:
http://meetbot.fedoraproject.org/gluster-meeting/2013-11-27/gluster-meeting.2013-11-27-14.01.html
-Vijay
- Original Message -
> The following is a new meeting request:
>
> Subject: Gluster Community Weekly Meeting
> Organizer: "Vijay Bellur"
>
>
Hello,
I've caught a glimpse of the new RHEL 6.5 (well, new packages landed in
CentOS) and I notice it now provides quite a nice version of Gluster
3.4.0 with a lot of backports.
I've got 2 problems now:
- I've got gluster installed from
http://download.gluster.org/pub/gluster/glusterfs/3.4/
Original Message
Subject: Re: [Gluster-devel] [Gluster-users] Gluster Community Weekly
Meeting
Date: Wed, 27 Nov 2013 07:43:51 -0500
From: Kaleb S. KEITHLEY
To: gluster-de...@nongnu.org
On 11/27/2013 01:55 AM, James wrote:
Sorry, I'm a bit confused. I'd like to parti
On 27 Nov 2013, at 09:30, lei yang wrote:
> I have a machine which has 5 hard disks
>
> I want to use glusterfs to speed up my disks
You do know that gluster is not about single-node performance? You'll get far
better performance by using RAID 0/1/5/10 on your local machine. Gluster gives
you mu
Hi experts,
I have a machine which has 5 hard disks.
I want to use glusterfs to speed up my disks.
my machine's IP is 123.224.178.67
My steps:
1) Create the volume:
gluster volume create vol1 123.224.178.67:/buildarea2
123.224.178.67:/buildarea3
123.224.178.67:/buildarea4 123.224.178.67:/buildarea5 123.
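The quoted create command is cut off; the usual follow-up steps after a create like the one above would be something like this (a sketch; the mount point /mnt/vol1 is hypothetical):

```shell
# Start the volume so clients can mount it
gluster volume start vol1

# Mount it locally with the native FUSE client
mount -t glusterfs 123.224.178.67:/vol1 /mnt/vol1
```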
To start self-healing data, execute: "gluster volume heal datastore1
full".
To monitor self-heal completion status, execute: "gluster volume heal
datastore1 info". The number of entries under each brick should be 0.
When the entries count becomes 0, the self-heal is complete.
Othe
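The monitoring step described above can be scripted as a simple polling loop, assuming the info output reports a "Number of entries:" line per brick (as it does in recent releases); volume name and interval are taken from/assumed beyond the message:

```shell
# Poll heal status every 10s until no brick reports pending entries
while gluster volume heal datastore1 info | grep -q 'Number of entries: [1-9]'; do
    sleep 10
done
echo "self-heal complete"
```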