Thank you all for this. Unfortunately I wasn't able to try this. In my
attempt to change the brick name, I noticed I'm on Gluster 3.13 (not a
typo). So I proceeded to upgrade to 6.10 and from there to 9.4. The
upgrade to 6.10 went fine; however, from 6.10 to 9.4 everything blew up. I got a
Hey All,
Is there a way to rename a brick within an existing gluster volume?
/bricks/0/gv01 -> /bricks/0/abc-gv01
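( For what it's worth, there doesn't appear to be a direct rename command; the
approach usually suggested is replace-brick, pointing the volume at the new
path. A rough sketch, with HOST and VOLNAME as placeholders and assuming a
replicated volume so the new brick can be healed from its replica:

gluster volume replace-brick VOLNAME HOST:/bricks/0/gv01 HOST:/bricks/0/abc-gv01 commit force )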
--
Thx,
TK.
Wanted to run some stats by you guys to see if this is the sort of IO
expected off the GlusterFS w/ oVirt that I have:
[root@mdskvm-p01 master]# iperf3 -s
---
Server listening on 5201
.
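For a matching client-side run against that server, something like the below is
typical ( the target host here is just an example ):

[root@mdskvm-p02 ~]# iperf3 -c mdskvm-p01 -t 30 -P 4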
Cheers,
TK
On 9/25/2019 7:47 AM, TomK wrote:
Thanks Thorgeir. Since then I upgraded to Gluster 6, though this issue
remained the same. Is there anything in the way of new options to change
what's displayed?
Reason for the ask is that this gets inherited by oVirt when doing
discovery of existing
.
On Sun, Sep 29, 2019 at 8:53 AM TomK <tomk...@mdevsys.com> wrote:
Hello All,
I'm not able to remove the last brick and consequently, the volume.
How
do I go about deleting it?
[root@mdskvm-p01 ~]# gluster volume delete mdsgv01
Deleting volume will
Hello All,
I'm not able to remove the last brick and consequently, the volume. How
do I go about deleting it?
[root@mdskvm-p01 ~]# gluster volume delete mdsgv01
Deleting volume will erase all information about the volume. Do you want
to continue? (y/n) y
volume delete: mdsgv01: failed:
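In case it's the usual cause - the volume still being started - the sequence
that generally works is stop first, then delete ( a sketch, same volume name ):

[root@mdskvm-p01 ~]# gluster volume stop mdsgv01
[root@mdskvm-p01 ~]# gluster volume delete mdsgv01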
Best regards
*THORGEIR MARTHINUSSEN*
-----Original Message-----
*From*: TomK <tomk...@mdevsys.com>
*Reply-To*: tomk...@mdevsys.com
*To*: gluster-users@gluster.org
*Subject*: Re: [Gluster-users] Where does Gluste
Hey All,
( sending to the right group: gluster-users@gluster.org )
I'm getting the below error when trying to start a 2 node Gluster cluster.
I had the quorum enabled when I was at version 3.12. However with this
version it needed the quorum disabled. So I did so; however, I now see the
o,
It's safer to have static entries for your cluster - after all, if
DNS fails for some reason you don't want to lose your cluster. A
kind of 'Best Practice'.
Best Regards,
Strahil Nikolov
On Sep 23, 2019 15:01, TomK wrote:
Do I *really* need specific /etc/hosts entries when I shouldn't need to?
( Ref below, everything resolves fine. )
Cheers,
TK
On 9/23/2019 1:32 AM, Strahil wrote:
Check your /etc/hosts for an entry like:
192.168.0.60 mdskvm-p01.nix.mds.xyz mdskvm-p01
Best Regards,
Strahil Nikolov
On Sep 23, 2019 06:58, TomK wrote:
Hey All,
Take the two hosts below
Hey All,
Take the two hosts below as an example. One host shows NFS Server on
192.168.0.60 (FQDN is mdskvm-p01.nix.mds.xyz).
The other shows mdskvm-p02 (FQDN is mdskvm-p02.nix.mds.xyz).
Why is there no consistency or correct hostname resolution? Where does
gluster get the hostnames from?
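As far as I understand it, gluster reports whatever name was used when each
peer was probed ( the first/probing node may only be known by its address ),
so the stored names can be checked with:

[root@mdskvm-p01 ~]# gluster pool list
[root@mdskvm-p01 ~]# gluster peer status

Re-probing an already-joined peer by its FQDN ( gluster peer probe
mdskvm-p02.nix.mds.xyz ) is sometimes suggested as a way to add that name as
an alternative - treat that as a hedged suggestion rather than a given.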
0xf0
May 21 23:53:13 psql01 kernel: []
system_call_fastpath+0x1c/0x21
May 21 23:53:13 psql01 kernel: [] ?
system_call_after_swapgs+0xae/0x146
On 5/7/2018 10:28 PM, TomK wrote:
On 4/
test
--
Cheers,
Tom K.
-
Living on earth is expensive, but it includes a free trip around the sun.
timer event queue
[root@nfs02 ganesha]#
[root@nfs02 ganesha]#
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomk...@mdevsys.com> wrote:
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go dow
,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you
need to disable quorum so as to still be able to use the volume when one
of the nodes goes down.
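For reference, the options usually toggled for that are below - a sketch,
option names per recent gluster releases and the volume name is just an example:

gluster volume set mdsgv01 cluster.server-quorum-type none
gluster volume set mdsgv01 cluster.quorum-type none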
On Mon, Apr 9, 2018, 09:02 TomK <tomk...@mdevsys.com> wrote:
Hey All,
I
Hey All,
In a two node glusterfs setup, with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Any way to allow the secondary node to function, then replicate what
changed to the first (primary) when it's back online? Or should I just
go
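One thing that sometimes helps alongside the quorum settings discussed in the
replies is giving the client a backup volfile server, so it can still fetch the
volume file when the primary is down - a sketch, host and volume names are examples:

mount -t glusterfs -o backup-volfile-servers=node02 node01:/gv01 /mnt/gv01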
On 3/19/2018 10:52 AM, Rik Theys wrote:
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
Removing NFS or NFS Ganesha from the equation, not very impressed on my
own setup either. For the writes it's doing, that's a lot of CPU usage
in top. Seems bottle
On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
Removing NFS or NFS Ganesha from the equation, not very impressed on my
own setup either. For the writes it's doing, that's a lot of CPU usage
in top. Seems bottle-necked via a single execution core somewhere trying
to facilitate read / writes to
On 3/18/2018 6:13 PM, Sam McLeod wrote:
Even your NFS transfers are 12.5 or so MB per second or less.
1) Did you use fdisk and LVM under that XFS filesystem?
2) Did you benchmark the XFS with something like bonnie++? (There are
probably newer benchmark suites now.)
3) Did you benchmark your
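For a baseline on the brick filesystem itself, before gluster is layered on
top, something like bonnie++ or fio is typical - a sketch, paths and sizes are
examples:

bonnie++ -d /bricks/0/test -u root
fio --name=seqwrite --directory=/bricks/0/test --rw=write --bs=1M --size=2g --numjobs=1 --direct=1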
. Couldn't get this working without
all the work you guys do!
Cheers,
Tom
On 02/25/2018 08:29 PM, TomK wrote:
Hey Guys,
A success story instead of a question.
With your help, managed to get the HA component working with HAPROXY and
keepalived to build a fairly resilient NFS v4 VM cluster. ( Used
your work, please PM me for the
written up post or I could just post here if the lists allow it.
Cheers,
Tom
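For anyone finding this later: the keepalived half of such a setup is usually
just a VRRP instance holding the floating IP that clients and HAPROXY point at.
A bare-bones sketch - interface, router id and address are all examples:

vrrp_instance VI_NFS {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.0.80/24
    }
}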
On 2/19/2018 12:25 PM, TomK wrote:
On 2/19/2018 12:09 PM, Kaleb S. KEITHLEY wrote:
Sounds good and no problem at all. Will look out for this update in the
future. In the meantime
. Appreciated!
Cheers,
Tom
On 02/19/2018 11:37 AM, TomK wrote:
On 2/19/2018 10:55 AM, Kaleb S. KEITHLEY wrote:
Yep, I noticed a couple of pages including this for 'storhaug
configuration' off google. Adding 'mailing list' to the search didn't
help a lot:
https://sourceforge.net/p/nfs-ganesha
[root@yes01 ~]# cd /n
-bash: cd: /n: Stale file handle
[root@yes01 ~]#
Cheers,
Tom
On 02/19/2018 10:24 AM, TomK wrote:
On 2/19/2018 2:39 AM, TomK wrote:
+ gluster users as well. Just read another post on the mailing lists
about a similar ask from Nov which didn't really have a clear answer.
That's
On 2/19/2018 2:39 AM, TomK wrote:
+ gluster users as well. Just read another post on the mailing lists
about a similar ask from Nov which didn't really have a clear answer.
Perhaps there's a way to get NFSv4 to work with GlusterFS without NFS
Ganesha then?
Cheers,
Tom
Hey All,
I've setup
On 2/18/2018 1:05 AM, TomK wrote:
Never mind. Found what I needed. Ty.
Cheers,
Tom
Hey All,
Trying to get GlusterFS w/ NFS-Ganesha installed but when following the
page below, I get a few 404 errors, glusterfs-server is missing in the
repos etc.
Is there a more recent version
Hey All,
Trying to get GlusterFS w/ NFS-Ganesha installed but when following the
page below, I get a few 404 errors, glusterfs-server is missing in the
repos etc.
Is there a more recent version of the same document for CentOS7 that can
be recommended?
Current link with issues:
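One route that generally works on CentOS 7 is pulling the packages from the
Storage SIG rather than following the older quickstart pages - a sketch,
assuming current SIG repo and package names:

yum install -y centos-release-gluster
yum install -y glusterfs-server nfs-ganesha-gluster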
Hey All,
Still new and learning GlusterFS. Thanks for bearing with me.
I cannot write files from the nodes to the glusterfs and see them
replicated. I can only write from the master node opennebula01 and see
the file distributed and replicated. How do I replicate from the nodes?
Or am I just
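A common cause of exactly this symptom is writing into the brick directory on
the nodes instead of into a mount of the volume; only writes through a client
mount get replicated. A sketch, with the volume name as an example:

mount -t glusterfs opennebula01:/gv0 /mnt/gv0
( then write under /mnt/gv0, never directly under the brick path )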
:
http://www.gluster.org/community/documentation/index.php/RDMA_Transport#Changing_Transport_of_Volume
Best regards,
Chen
On 5/3/2016 9:55 AM, TomK wrote:
Hey All,
New here and first time posting. I've made a typo in configuration and
entered:
gluster volume create mdsglusterv01 transport
Hey All,
New here and first time posting. I've made a typo and entered:
gluster volume create mdsglusterv01 transport rdma
mdskvm-p01:/mnt/p01-d01/glusterv01/
but couldn't start since rdma didn't exist:
[root@mdskvm-p01 glusterfs]# ls -altri
/usr/lib64/glusterfs/3.7.11/rpc-transport/*
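Per the RDMA transport page linked elsewhere in the thread, the transport can
usually be switched to tcp without recreating the volume - a sketch, assuming
the volume exists but won't start:

gluster volume set mdsglusterv01 config.transport tcp
gluster volume start mdsglusterv01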