I have had this issue for a few days with my new setup. I will have to get
back to you on versions, but it was CentOS 7.6, patched yesterday (27/3/2019).
On Thu, 28 Mar 2019 at 12:58, Sankarshan Mukhopadhyay <
sankarshan.mukhopadh...@gmail.com> wrote:
> On Wed, Mar 27, 2019 at 9:46 PM wrote:
> >
Hi,
I am trying to get the above combo working but I can't get NFS from a
RHEL 7.6 client to work. I keep getting:
"[root@vuwunicodsktop2 ~]# showmount -e 10.180.48.186
clnt_create: RPC: Program not registered
[root@vuwunicodsktop2 ~]#"
nmap and verbose mounting show:
===
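The "RPC: Program not registered" error from showmount usually means no NFS service has registered with rpcbind on the server. A hedged check, using the server IP from the output above; the volume name gv0 is an assumption, and recent Gluster releases ship with the built-in NFS server disabled by default:

```shell
# Is any NFS/mountd program registered with rpcbind on the Gluster node?
rpcinfo -p 10.180.48.186 | grep -E 'nfs|mountd'

# On the server: check whether Gluster's built-in NFS is disabled for the
# volume (volume name is an assumption for illustration):
gluster volume get gv0 nfs.disable
# Either enable it, or serve NFS via NFS-Ganesha instead:
gluster volume set gv0 nfs.disable off
```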
I am beginning to think I am better off doing 3 x 2TB
separate volumes. Rather interesting trying to understand this stuff...!
On 12 June 2018 at 23:10, Dave Sherohman wrote:
> On Tue, Jun 12, 2018 at 03:04:14PM +1200, Thing wrote:
> > What I would like to do I think is a,
> >
&
Hi,
I would like to understand how Gluster works better than I currently do, in
particular the architecture.
So I have a test configuration of 6 desktops; each has 2 x 1TB disks in a
RAID 0 on an eSATA channel.
What I would like to do I think is a,
*Distributed-Replicated volume*
a) have 1 and
I would like to know how, if so. I tried with 4 nodes and couldn't make it
work. I think I need groups of 3, so 6 or 9 nodes? I don't know; the docs are so vague.
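For a distributed-replicated volume, the brick count must be a multiple of the replica count, which is why 4 nodes would not work for replica 3 but 6 or 9 do. A sketch for the 6-desktop case; host names and brick paths are assumptions for illustration:

```shell
# With "replica 3", each consecutive group of 3 bricks forms one replica
# set; 6 bricks therefore give 2 replica sets that files are distributed
# across (usable capacity = 2 bricks' worth).
gluster volume create gv0 replica 3 \
  node1:/bricks/brick1/gv0 node2:/bricks/brick1/gv0 node3:/bricks/brick1/gv0 \
  node4:/bricks/brick1/gv0 node5:/bricks/brick1/gv0 node6:/bricks/brick1/gv0
gluster volume start gv0
```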
On 23 May 2018 at 13:12, Brian Andrus wrote:
> All,
>
> With Gluster 4.0.2, is it possible to take an existing
/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root@glusterp3 ~]#
8><---
So none of the 3 nodes has an actual file?
On 23 May 2018 at 08:35, Thing <thing.th...@gmail.com> wrote:
> I tried looking for a file of the same size and the gfid doesn't show up,
>
> 8&g
brick1/gv0/.glusterfs/a5/27/a5279083-4d24-4dc8-be2d-4f507c5372cf
20974976 /bricks/brick1/gv0/.glusterfs/a5/27
20974976
/bricks/brick1/gv0/.glusterfs/74/75/7475bd15-05a6-45c2-b395-bc9fd3d1763f
[root@glusterp2 fb]#
8><---
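The .glusterfs paths in the output above follow a fixed layout: a file's backing object lives at `<brick>/.glusterfs/<first 2 hex chars>/<next 2 chars>/<gfid>`. A small bash sketch, using a gfid and brick path taken from the output above:

```shell
#!/bin/bash
# Compute the .glusterfs backing path for a gfid.
gfid="a5279083-4d24-4dc8-be2d-4f507c5372cf"
brick="/bricks/brick1/gv0"
path="$brick/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"
echo "$path"
# → /bricks/brick1/gv0/.glusterfs/a5/27/a5279083-4d24-4dc8-be2d-4f507c5372cf

# The gfid file is a hard link to the real file, so the user-visible name
# can be recovered with (may be slow on large bricks):
#   find "$brick" -samefile "$path" -not -path '*/.glusterfs/*'
```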
On 23 May 2018 at 08:29, Thing <thing.th...@gmail.com> wrote:
>
97c5a4693" is?
>>
>> https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
>>
>>
>> On May 21, 2018 3:22:01 PM PDT, Thing <thing.th...@gmail.com> wrote:
>> >Hi,
>> >
>> >I seem to have a split brain issue, but I cannot figur
Hi,
I seem to have a split brain issue, but I cannot figure out where this is
and what it is. Can someone help me please? I can't find what to fix here.
==
root@salt-001:~# salt gluster* cmd.run 'df -h'
glusterp2.graywitch.co.nz:
Filesystem
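For locating and resolving the split brain described above, a hedged sketch; the volume name gv0 and the brick path are assumptions based on the rest of the thread:

```shell
# List the files Gluster believes are in split-brain:
gluster volume heal gv0 info split-brain

# One way to resolve: pick the copy on one brick as the "good" source for
# a given file (brick and file path are hypothetical):
gluster volume heal gv0 split-brain source-brick \
  glusterp2:/bricks/brick1/gv0 /images/kworker01.qcow2
```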
Hi,
I have a 3-way gluster 4 setup. I had a "mishap" of some sort and I lost
node no2. There was and is 660GB spare on no1 and no3, but no2 is so full I
cannot mount it at boot, nor manually. I was making an 80GB VM, so no idea
what happened, nor how to fix it?
=0x0001
trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
[root@glusterp3 isos]#
On 10 May 2018 at 13:22, Thing <thing.th...@gmail.com> wrote:
> Whatever repair happened has now finished but I still have this,
>
> I cant find anything so far
rp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5
[root@glusterp1 gv0]#
On 10 May 2018 at 12:20, Thing <thing.th...@gmail.com> wrote:
> [root@glusterp1 gv0]# !737
> gluster v status
> Status of volume: gv0
> Gluster process
n again.
>
> Check: gluster v status
>
> Does it show the brick up?
>
> HTH,
>
> Diego
>
>
> On Wed, May 9, 2018, 20:01 Thing <thing.th...@gmail.com> wrote:
>
>> Hi,
>>
>> I have 3 Centos7.4 machines setup as a 3 way raid 1.
>>
&g
Hi,
I have 3 CentOS 7.4 machines set up as a 3-way raid 1.
Due to an oopsie on my part, /bricks/brick1/gv0 on glusterp1 didn't mount on
boot and as a result it's empty.
Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
/bricks/brick1/gv0 as expected.
Is there a way to get
isos
[root@glustep1 libvirt]#
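For the empty-brick situation described above, a hedged recovery sketch; the volume name gv0 is taken from the brick paths, the mount point is an assumption:

```shell
# Remount the brick filesystem that missed mounting at boot:
mount /bricks/brick1        # or: mount -a, after fixing /etc/fstab

# Make sure the brick process on glusterp1 is running again:
gluster volume start gv0 force

# Trigger a full self-heal so the empty brick is repopulated from the
# other two replicas (can take a long time for large volumes):
gluster volume heal gv0 full
gluster volume heal gv0 info   # watch progress
```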
Is this a version mismatch thing? Or what am I doing wrong, please?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Hi,
So is it KVM or VMware as the host(s)? I basically have the same setup, i.e.
3 x 1TB "raid1" nodes and VMs, but 1Gb networking. I did notice that with
VMware using NFS, disk was pretty slow (40% of a single disk), but this was
over 1Gb networking, which was clearly saturating. Hence I am moving to
; volume
> by selecting *y* for the warning message. It will create the replica 2
> configuration as you wanted.
>
> HTH,
> Karthik
>
> On Fri, Apr 27, 2018 at 10:56 AM, Thing <thing.th...@gmail.com> wrote:
>
>> Hi,
>>
>> I have 4 servers each with 1
Hi,
I have 4 servers, each with 1TB of storage set as /dev/sdb1. I would like to
set these up in a raid 10, which should give me 2TB usable. So mirrored and
concatenated?
The command I am running is as per the documents, but I get a warning error;
how do I get this to proceed please, as the documents
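A sketch of the raid-10-style layout described above; brick paths and the volume name are assumptions for illustration:

```shell
# 4 bricks with "replica 2" gives a distributed-replicated volume, roughly
# RAID 10: 2 mirrored pairs, distributed, so ~2TB usable out of 4TB raw.
gluster volume create gv0 replica 2 \
  server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 \
  server3:/bricks/brick1/gv0 server4:/bricks/brick1/gv0
# Recent releases warn that replica 2 is prone to split-brain; answering
# "y" proceeds anyway. Using "replica 3" or an arbiter avoids the risk.
gluster volume start gv0
```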
Hi,
I am on CentOS 7.4 with gluster 4.
I am trying to create a distributed and replicated volume on the 4 nodes.
I am getting this unexpected qualification:
[root@glustep1 brick1]# gluster volume create gv0 replica 2
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0
2% /var/lib
tmpfs                    771M   12K  771M   1% /run/user/42
tmpfs                    771M     0  771M   0% /run/user/1000
glusterp1:/gv0           932G  751G  181G  81% /gv0
On 28 March 2018 at 21:47, Niels de Vos <nde...@redhat.com> wrote:
>
xt has links to the CentOS
> Storage SIG where you can find information on installing RPMs from the
> CentOS Storage SIG.
>
>
> On 03/27/2018 08:53 PM, Thing wrote:
> > Hi,
> >
> > Thanks, any howtos/docs/notes for installing gluster4.0.x on Centos 7
> > please?
&
Hi,
Thanks, any howtos/docs/notes for installing gluster4.0.x on Centos 7
please?
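The reply below points at the CentOS Storage SIG; a hedged install sketch for CentOS 7, assuming the SIG's gluster 4.0 release package name:

```shell
# Enable the CentOS Storage SIG repo for Gluster 4.0, then install and
# start the server:
yum install -y centos-release-gluster40
yum install -y glusterfs-server
systemctl enable --now glusterd
```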
On 27 March 2018 at 01:28, Shyam Ranganathan wrote:
> The Gluster community is pleased to announce the release of Gluster
> 4.0.1 (packages available at [1]).
>
> Release notes for the
Hi,
I have a 3 node raid 1 gluster setup on centos7.2. Each node can mount the
2 gluster volumes fine.
[root@glusterp1 volume1]# df -h
Filesystem Size Used Avail Use%
Mounted on
/dev/mapper/centos-root 20G 1.9G 19G 10% /
Hi,
For some reason I cannot get gluster to run on 2 of 3 nodes;
here is my fault-finding so far, out of ideas at the moment. Googling
"polkitd[3969]: Unregistered Authentication Agent for
unix-process:7541:985551 (system bus name :1.78, object path
/org/freedesktop/PolicyKit1/Auth"
isnt
Hi,
So I was trying to make a 3-way mirror and it failed. Now I get these
messages,
On glusterp1,
=
[root@glusterp1 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster (Connected)
[root@glusterp1 ~]#
Hi,
Has anyone added 3 gluster nodes to oVirt? I don't seem to be able to find
much documentation on how to do this and hence am failing.
Hi,
Is there any performance gain (or can you even?) in bonding 2 x 1Gb?
regards
Steven
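Bonding two 1Gb NICs can be done; a hedged sketch using NetworkManager, with interface names as assumptions. Note that a single TCP stream still tops out at ~1Gb; the gain comes from multiple concurrent streams (e.g. several Gluster clients) being hashed across both links:

```shell
# LACP (802.3ad) bond of two 1Gb NICs; the switch must support LACP too.
nmcli con add type bond ifname bond0 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet ifname eth0 master bond0
nmcli con add type ethernet ifname eth1 master bond0
nmcli con up bond0
```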
sterp1 /]#
=
On 14 October 2016 at 12:40, Thing <thing.th...@gmail.com> wrote:
> So glusterp3 is in a reject state,
>
> [root@glusterp1 /]# gluster peer status
> Number of Peers: 2
>
> Hostname: glusterp2.graywitch.co.nz
> Uuid: 93eebe2c-9564-4bb0-975f-2db49f12058b
&g
glusterfs-3.8.4-1.el7.x86_64
centos-release-gluster38-1.0-1.el7.centos.noarch
glusterfs-libs-3.8.4-1.el7.x86_64
[root@glusterp3 /]#
?
On 14 October 2016 at 12:31, Thing <thing.th...@gmail.com> wrote:
> Hmm seem I have something rather inconsistent,
>
> [root@glusterp1 /]# gluster vo
--
There are no active volume tasks
[root@glusterp1 /]#
On 14 October 2016 at 12:20, Thing <thing.th...@gmail.com> wrote:
> I deleted a gluster volume gv0 as I wanted to make it thin provisioned.
>
> I have rebu
I deleted a gluster volume gv0 as I wanted to make it thin provisioned.
I have rebuilt "gv0" but I am getting a failure,
==
[root@glusterp1 /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 20G 3.9G 17G 20% /
devtmpfs
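The exact failure is cut off above; if it is the common "path or a prefix of it is already part of a volume" error after recreating a deleted volume on the same bricks, the usual fix is clearing the leftover Gluster metadata on each brick (brick path assumed, and note trusted.glusterfs.volume-id matches the xattr shown elsewhere in this thread):

```shell
# On each brick that belonged to the deleted volume:
setfattr -x trusted.glusterfs.volume-id /bricks/brick1/gv0
setfattr -x trusted.gfid /bricks/brick1/gv0
rm -rf /bricks/brick1/gv0/.glusterfs
# Then retry the "gluster volume create" command.
```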
Hi,
I have a 3 node gluster setup running with one brick exported, /gv0
Centos 7
[root@glusterp1 ganesha]# rpm -aq |grep gluster
glusterfs-client-xlators-3.8.4-1.el7.x86_64
nfs-ganesha-gluster-2.3.3-1.el7.x86_64
glusterfs-3.8.4-1.el7.x86_64
glusterfs-cli-3.8.4-1.el7.x86_64
Write disk I/O seems very poor with Gluster, so you may be disappointed
trying to run multiple VMs on the same disks all doing I/O.
On 31 March 2016 at 15:28, Pawan Devaiah wrote:
> Hi All,
>
> I am planning to build highly available Clustered NAS using GlusterFS,
In principle a very bad idea even if it works: always keep the OSes at the
same revision, together with the applications.
On 21 January 2016 at 21:20, jayakrishnan mm
wrote:
> Hi All,
>
> I am new to glusterFS. I have two machies which I want to host as
> two
I am trying to set up replication across 3 servers; for some reason I get
a failure.
/data1 is my 120GB test volume (I am waiting on the arrival of 3 x 1TB
disks).
The failure I get is,
=
[root@ipa1 salt]# salt glusterp1.graywitch.co.nz cmd.run "gluster volume
create glustervol01 replica 3
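The command above is cut off; a hypothetical completion, with brick subdirectories as assumptions:

```shell
gluster volume create glustervol01 replica 3 \
  glusterp1.graywitch.co.nz:/data1/brick \
  glusterp2.graywitch.co.nz:/data1/brick \
  glusterp3.graywitch.co.nz:/data1/brick
# Note: gluster refuses to create a brick directly on the root of a mount
# point without "force"; using a subdirectory of /data1 avoids that.
```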
For some reason glusterp2 isn't showing up correctly? DNS is correct, as is
the hosts file; what have I done wrong and how do I fix it please?
=
[root@ipa1 salt]# salt '*' cmd.run "gluster peer status"
glusterp2.graywitch.co.nz:
Number of Peers: 2
Hostname: 192.168.1.1
09-033e-48d1-809f-2079345caea2
State: Establishing Connection (Disconnected)
Other names:
glusterp1.graywitch.co.nz
Hostname: glusterp3.graywitch.co.nz
Uuid: 5d59b704-e42f-46c6-8c14-cf052c489292
State: Accepted peer request (Disconnected)
[root@ipa1 salt]#
On 14 December
Hi,
No its mounted fine.
--edit--
data1 not data01, doh
On 14 December 2015 at 13:53, Lindsay Mathieson <lindsay.mathie...@gmail.com
> wrote:
> On 14/12/15 10:34, Thing wrote:
>
> I am trying to setup a replication across 3 servers, for some reason I get
> a failure
I seem to have a peer that I cannot detach; the network seems fine. I assume I
don't have to bare-metal rebuild it to clean it up?
==
[root@ipa1 salt]# salt 'glusterp3.graywitch.co.nz' cmd.run "gluster peer
status"
glusterp3.graywitch.co.nz:
Number of Peers: 1
Hostname:
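A bare-metal rebuild shouldn't be needed; a hedged sketch for clearing a stuck peer, with the hostname taken from the output above and the peer-store path being standard glusterd layout:

```shell
# First try a forced detach:
gluster peer detach glusterp3.graywitch.co.nz force

# If the peer state is still inconsistent, stop glusterd and remove the
# stale peer file (named after the peer's UUID), then restart:
systemctl stop glusterd
rm /var/lib/glusterd/peers/<peer-uuid>
systemctl start glusterd
```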
I am attempting to create a gluster volume but get this error,
[root@glusterp1 data1]# gluster volume create volume1 replica 3 transport
tcp glusterp1.graywitch.co.nz:/data1 glusterp2.graywitch.co.nz:/data1
volume create: volume1: failed: Staging failed on glusterp3.graywitch.co.nz.
Hi,
I am just testing a KVM frontend with VMs, using a backend 2-node gluster
setup mounted with glusterfs. This runs fine, but my understanding
was that if the gluster node the frontend is attached to goes down, the
client swaps to the other node and carries on?
So I switched gluster node 1
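That understanding is broadly right for the FUSE client: the server named in the mount command is only used to fetch the volume layout, after which the client talks to all bricks directly. For mount-time redundancy there is a backup-volfile-servers option; a sketch with hostnames and paths as assumptions:

```shell
# If gluster1 is down at mount time, the client falls back to gluster2 to
# fetch the volume file; at runtime, replica failover is automatic.
mount -t glusterfs \
  -o backup-volfile-servers=gluster2.example.com \
  gluster1.example.com:/gv0 /mnt/gv0
```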
ode 2 as 1 is
off" sort of thing the Pi might cope. If its like that at all of course,
the Pi cant do any bandwidth and i/o to speak of even with an added sata
port.
On 10 November 2015 at 12:32, Lindsay Mathieson <lindsay.mathie...@gmail.com
> wrote:
> You probably have quorum prob
I am in the exact same boat. Well, the front end is RHEL 7.1 and the backend
Ubuntu 14.04 LTS.
What I have done, however, is mount using glusterfs.
No idea which is faster; currently I am testing local disk vs NFS vs gluster
to get a feel for the performance penalty of gluster.
NFS using UDP might
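A crude sequential-write comparison for such a test might look like the following; mount points are assumptions, and dd says little about VM-style random I/O, so treat the numbers as a rough feel only:

```shell
# Same size and block size on each mount; conv=fsync forces the data to
# disk so the cache doesn't flatter the result.
dd if=/dev/zero of=/mnt/local/testfile   bs=1M count=2048 conv=fsync
dd if=/dev/zero of=/mnt/nfs/testfile     bs=1M count=2048 conv=fsync
dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=2048 conv=fsync
```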
015 at 06:37, Thing <thing.th...@gmail.com> wrote:
>
>>
>> Looking at running a 2 node gluster setup to feed a small Virtualisation
>> setup. I need it to be low energy use, low purchase cost and small form
>> factor so I am looking at 2 mini-itx motherboards.
>
Hi All,
Looking at running a 2 node gluster setup to feed a small Virtualisation
setup. I need it to be low energy use, low purchase cost and small form
factor so I am looking at 2 mini-itx motherboards.
Does anyone know what sort of throughput I can expect? ie I am looking for
as good as a
wrote:
>
> On 4 November 2015 at 08:39, Thing <thing.th...@gmail.com> wrote:
>
>> Thanks, but your solution doesn't protect against a single PC hardware failure
>> like a PSU blowing, i.e. giving me real-time replication to the 2nd site so I
>> can be back up in minutes.
&g
For RHEL6.5 what else do I need to install to allow mount to work?
===8
Installed:
glusterfs.x86_64
0:3.4.0.57rhs-1.el6_5
Complete!
[root@8kxl72s ~]# mount -t glusterfs rhel7rc-004.ods.vuw.ac.nz:gv0
/mnt/gluster1-gv0
mount: unknown filesystem type 'glusterfs'
==
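The "unknown filesystem type 'glusterfs'" error above usually means the FUSE client package is missing; the base glusterfs RPM alone is not enough. A sketch, reusing the mount command from the output:

```shell
# Install the FUSE client, then retry the mount:
yum install -y glusterfs-fuse
mount -t glusterfs rhel7rc-004.ods.vuw.ac.nz:gv0 /mnt/gluster1-gv0
```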
On 6
ctstate NEW
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:111    ctstate NEW
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:24007  ctstate NEW
===
On 6 May 2014 13:18, Thing thing.th...@gmail.com wrote:
For RHEL6.5 what else do I need to install
Using iptraf and dd to create a 2GB file, it looks like data is being
transferred from port 970 to port 49152, yet the docs say 34865?
On 6 May 2014 14:20, Thing thing.th...@gmail.com wrote:
Seems iptables is blocking sync, so what have I missed please?
Chain IN_public_allow (1
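The 49152 observation above matches newer releases, where brick processes listen from 49152 upward (one port per brick) rather than the 34865-range ports in older docs. A hedged firewalld sketch; the brick port range is an assumption sized for a handful of bricks:

```shell
# Management daemon and (pre-4.1) a companion port:
firewall-cmd --permanent --add-port=24007-24008/tcp
# Brick processes, one port each starting at 49152:
firewall-cmd --permanent --add-port=49152-49160/tcp
# Portmapper (port 111), needed for NFS access:
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
```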
Hi,
I am trying to install gluster on rhel7rc. I seem to have done so OK;
status says gluster is running. I have opened up iptables for 24007 and
24009~24012, and portmapper, and 34865 : 7
I am unable to probe peer 2. telnet to 24007 is refused. Gluster looks
to be running. I have set