Alright, thank you Soumya. I actually did do the cleanup every time
(gluster nfs-ganesha disable), but it didn't always finish successfully.
Sometimes it would just time out. I'll try with the second command tomorrow.
Good to know that it should work with two nodes as well.
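For anyone hitting the same timeouts, the retry cycle I'm describing is roughly
this (a sketch; the exact CLI may vary slightly between 3.7 releases):

    # tear down the ganesha HA cluster and disable NFS-Ganesha on the volume
    gluster nfs-ganesha disable
    # verify the pacemaker cluster really went away before retrying
    pcs status
    # once clean, set it up again
    gluster nfs-ganesha enable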
On 22 September 2015
Hello,
As you will already have 5 copies (one per server), adding redundancy at the
storage layer is wasteful.
I would do a 2-node distributed + replicated setup:
2 servers * 4 disks = 8 TB, replicated to 2 other servers with the same setup.
The last server can maybe be an arbiter for split-brain, or you have to get
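If it helps, a sketch of the volume create for that kind of layout (hostnames,
brick paths and the volume name are made up; arbiter support needs
GlusterFS >= 3.7):

    # two data copies on server1/server2, metadata-only arbiter on server5
    gluster volume create datavol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 server5:/bricks/arb1
    gluster volume start datavol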
On 2015-09-16 06:05, Kaleb S. KEITHLEY wrote:
The main difference is whether existing bugs are reassigned or simply
closed.
Speaking as a user (and not a developer), little is more annoying than
encountering, researching, reproducing and documenting a bug experienced
in the
On 2015-09-21 21:17, Atin Mukherjee wrote:
On 09/22/2015 02:07 AM, Gluster Admin wrote:
>Gluster users,
>
>We have a multiple node setup where each server has a single XFS brick
>(underlying storage is hardware battery backed raid6). Are there any
>issues creating multiple gluster volumes
You could perhaps use LVM on your RAID 6, and create two Logical Volumes,
one per brick?
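For example, something along these lines, assuming the RAID 6 array appears as
/dev/sdb (device, VG and LV names are just examples):

    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    # one logical volume per brick
    lvcreate -L 2T -n brick1 gluster_vg
    lvcreate -L 2T -n brick2 gluster_vg
    # 512-byte inodes are the usual recommendation for gluster bricks
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1
    mkfs.xfs -i size=512 /dev/gluster_vg/brick2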
On 22 Sep 2015 5:17 am, "Atin Mukherjee" wrote:
>
>
> On 09/22/2015 02:07 AM, Gluster Admin wrote:
> > Gluster users,
> >
> > We have a multiple node setup where each server has a single
Hi Tiemen,
I have added the steps to configure HA NFS in the doc below. Please verify
that you have all the prerequisites in place and the steps performed correctly.
https://github.com/soumyakoduri/glusterdocs/blob/ha_guide/Administrator%20Guide/Configuring%20HA%20NFS%20Server.md
Thanks,
Soumya
On
Hi,
This doc may come in handy for you to configure HA NFS -
https://github.com/soumyakoduri/glusterdocs/blob/ha_guide/Administrator%20Guide/Configuring%20HA%20NFS%20Server.md
Thanks,
Soumya
On 09/21/2015 11:24 PM, Gluster Admin wrote:
Hi,
Can someone point me to the howto/docs on setting
On 19/08/15 15:57, Niels de Vos wrote:
On Tue, Aug 18, 2015 at 04:51:58PM +0530, Jiffin Tony Thottan wrote:
Comments inline.
On 18/08/15 09:54, Niels de Vos wrote:
On Mon, Aug 17, 2015 at 06:20:50PM +0530, Anoop C S wrote:
Hi all,
As we move forward, in order to fix the limitations with
Hi,
Replies inline.
Thanks,
Saravana
On 09/21/2015 03:56 PM, ML mail wrote:
That's right, the earlier error I posted with ZFS actually only appeared
during the setup of the geo-replication and does not appear anymore. In fact,
ZFS does not have any inodes, so I guess you would need to adapt
Yes, this is what I meant, although I did not convey it in the best terms. This
is working fine in testing, but my main concern was having multiple volumes'
I/O hitting the same disks as a potential bottleneck. What I think I have
settled on is not to subpartition the LVM/physical disks but create
On 09/22/2015 05:06 PM, Tiemen Ruiten wrote:
That's correct and my original question was actually if a two node +
arbiter setup is possible. The documentation provided by Soumya only
mentions two servers in the example ganesha-ha.sh script. Perhaps that
could be updated as well then, to not
On 09/22/2015 02:35 PM, Tiemen Ruiten wrote:
I missed having passwordless SSH auth for the root user. However it did
not make a difference:
After verifying prerequisites, issued gluster nfs-ganesha enable on node
cobalt:
Sep 22 10:19:56 cobalt systemd: Starting Preprocess NFS
Our configuration is a distributed, replicated volume with 7 pairs of bricks on
2 servers. We are in the process of adding additional storage for another brick
pair. I placed the new disks in one of the servers late last week and used the
LSI storcli command to make a RAID 6 volume of the new
Update: I was able to use the TestDisk program from cgsecurity.org to find and
rewrite the partition info for the LVM partition. I was then able to mount the
disk and restart the gluster volume to bring the brick back online. To make
sure everything was OK, I then rebooted the node with the
Hi Krutika,
Thanks for the reply. However, I am afraid that it's now too late for us. I
already replaced the GlusterFS server and copied my data onto the new bricks.
It's working flawlessly again, like it was working before. However, I still
have the old server and snapshots, so I'd try to implement your
Hello.
I have a system with several VMs, two used as glusterfs servers. If I
restart the whole system, as the VMs with the glusterfs volumes are
quite slow to boot, I have a problem with other VMs that are coming up
faster than the glusterfs VMs. In the client VMs the automounter hangs
as it is
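One way around this, if the clients use a fstab fuse mount (a sketch; server
and volume names are placeholders, and older versions spell the option
backupvolfile-server):

    gluster1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backup-volfile-servers=gluster2  0 0

_netdev delays the mount until the network is up, and the backup volfile
server lets the client fetch the volume info from whichever gluster VM comes
up first.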
I missed having passwordless SSH auth for the root user. However it did not
make a difference:
After verifying prerequisites, issued gluster nfs-ganesha enable on node
cobalt:
Sep 22 10:19:56 cobalt systemd: Starting Preprocess NFS configuration...
Sep 22 10:19:56 cobalt systemd: Starting RPC
Hi,
Are there any restrictions as to when I'm allowed to make changes to the
GlusterFS
volume (for instance: start/stop volume, add/remove brick or peer)? How will it
handle such changes when one of my two replicated servers is down? How will
GlusterFS
know which set of configuration files it
On 21/09/15 21:21, Tiemen Ruiten wrote:
Whoops, replied off-list.
Additionally I noticed that the generated corosync config is not
valid, as there is no interface section:
/etc/corosync/corosync.conf
totem {
    version: 2
    secauth: off
    cluster_name: rd-ganesha-ha
    transport: udpu
}
nodelist {
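If I remember correctly, with transport udpu corosync 2.x takes the member
addresses from the nodelist rather than an interface section, so the file
would normally continue roughly like this (addresses and nodeids are made up):

    nodelist {
        node {
            ring0_addr: 10.0.0.1
            nodeid: 1
        }
        node {
            ring0_addr: 10.0.0.2
            nodeid: 2
        }
    }
    quorum {
        provider: corosync_votequorum
        two_node: 1
    }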
Hi & thanks,
Well, obviously my intention is not to create a mess, but the reason for
redundancy is probably that servers may disappear from the cluster for a
while. In case I run any glusterfs commands related to the volume
definition at that time, those changes will not reach the
Hi,
IIRC, the setup is two gluster+ganesha nodes plus the arbiter node for
gluster quorum.
Have I remembered that correctly?
The Ganesha HA in 3.7 requires a minimum of three servers running ganesha and
pacemaker. Two might work if you change the ganesha-ha.sh to not enable
pacemaker
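For reference, the file that ganesha-ha.sh reads is /etc/ganesha/ganesha-ha.conf
and looks roughly like this (a sketch; hostnames and VIPs are placeholders):

    HA_NAME="rd-ganesha-ha"
    HA_VOL_SERVER="server1"
    HA_CLUSTER_NODES="server1,server2,server3"
    VIP_server1="10.0.0.101"
    VIP_server2="10.0.0.102"
    VIP_server3="10.0.0.103"

Listing only two nodes in HA_CLUSTER_NODES is what would need the script tweak
mentioned above.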
glusterd should handle syncing any changes you make with the "gluster"
command to the peers. Obviously, if you make local changes to the volume
file on one server, you are likely to break things unless you copy/rsync the
changes to the other server.
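In other words, anything that goes through the CLI is safe to run on any one
node; for example (the volume name is hypothetical):

    # glusterd propagates this to all peers automatically
    gluster volume set myvol performance.cache-size 256MB

whereas hand-editing files under /var/lib/glusterd/vols/myvol/ on a single
server is what gets you into trouble.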
On 22 September 2015 at 04:02, Andreas Hollaus
Hi Ankur,
It looks like some of the files/directories are in gfid split-brain.
From the logs that you attached, here is the list of gfids of directories in
gfid split-brain, based on the message id for the gfid split-brain log message
(108008):
[kdhananjay@dhcp35-215 logs]$ grep -iarT
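Once you have the list, recent 3.7 releases can resolve entries from the CLI;
a sketch (volume, brick and path are placeholders):

    # see which entries are pending
    gluster volume heal myvol info split-brain
    # keep the copy on one brick as the good one
    gluster volume heal myvol split-brain source-brick server1:/bricks/b1 /path/to/entry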
You mean creating multiple top-level directories on a mounted file system
and using each directory as a brick in a different volume?
If the underlying block device is a thinly provisioned LVM2 volume and you
want to use snapshots, you cannot do this; otherwise I don't think there is
a technical
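For context, that snapshot limitation is because each brick needs to sit on
its own thin LV; roughly (VG, pool and sizes are examples):

    # create a thin pool, then one thin LV per brick
    lvcreate -L 1T -T gluster_vg/thinpool
    lvcreate -V 500G -T gluster_vg/thinpool -n brick1
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1

Two directories on the same file system cannot be snapshotted independently.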
That's correct and my original question was actually if a two node +
arbiter setup is possible. The documentation provided by Soumya only
mentions two servers in the example ganesha-ha.sh script. Perhaps that
could be updated as well then, to not give the wrong impression.
I could try to change
On 2015-09-22 08:03, Thibault Godouet wrote:
You could perhaps use LVM on your RAID 6, and create two Logical
Volumes,
one per brick?
Actually, I am testing this setup and I can confirm that it is possible: in
a testing environment, 6 machines each with 1 HD dedicated as a brick, 2
LVs in each
Hi,
http://gluster.readthedocs.org/en/latest/
Regards,
J
On 2015-09-22 10:04, Andreas Hollaus wrote:
Hi,
Are there any restrictions as to when I'm allowed to make changes to
the GlusterFS
volume (for instance: start/stop volume, add/remove brick or peer)? How
will it
handle such changes
On 2015-09-22 02:59, Krutika Dhananjay wrote:
-
FROM: hm...@t-hamel.fr
Thank you, this solved the issue (after a umount/mount). The question
now is: what's the catch? Why is this not the default?
https://partner-bugzilla.redhat.com/show_bug.cgi?id=1203122
The above
Hi Andreas,
>> Are there any restrictions as to when I'm allowed to make changes to the
>> GlusterFS
volume (for instance: start/stop volume, add/remove brick or peer)?
There are no restrictions on making changes to the GlusterFS volume; it all
depends on your needs, whether you want to start/stop
That's right, the earlier error I posted with ZFS actually only appeared
during the setup of the geo-replication and does not appear anymore. In fact,
ZFS does not have any inodes, so I guess you would need to adapt the GlusterFS
code to check whether the FS is ZFS or not.
Now regarding the
Hi there,
We are setting up 5 new servers with 4 disks of 1 TB each.
We are planning to use GlusterFS distributed-replicated so we have
protection for our data.
My question is: if I lose a disk, since we are using LVM mirroring, are we
still able to access the data normally until we replace the
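For what it's worth, a mirrored LV that survives a single-disk failure would
be created along these lines (a sketch; VG, size and names are made up):

    # raid1 LV with one extra copy; data stays readable if one leg fails
    lvcreate --type raid1 -m 1 -L 900G -n brick_lv data_vg

Note that Gluster replication already gives you a second copy on another
server, which is the point the earlier reply about wasted redundancy was
making.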
On 09/22/15 14:50, Gaurav Garg wrote:
> Hi Andreas,
>
>>> Are there any restrictions as to when I'm allowed to make changes to the
>>> GlusterFS
> volume (for instance: start/stop volume, add/remove brick or peer)?
>
> There are no restrictions on making changes to the GlusterFS volume; it all
>
- Original Message -
> From: hm...@t-hamel.fr
> To: "Krutika Dhananjay"
> Cc: gluster-users@gluster.org
> Sent: Tuesday, September 22, 2015 6:09:52 PM
> Subject: Re: [Gluster-users] "file changed as we read it" in gluster 3.7.4
> On 2015-09-22 02:59, Krutika