I already know why peering fails.
GlusterFS needs to use FQDNs, resolved through DNS or the hosts file, for peering to work.
When I deployed the first few machines I used IPs and saw no error; only when I
continued to deploy the other machines did the error appear.
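For anyone hitting the same thing, the usual fix is to give every node consistent name resolution before probing. A minimal sketch, assuming hypothetical node names and addresses:

```shell
# Hypothetical names/IPs -- adjust to your environment.
# Put identical entries in /etc/hosts on every node (or create DNS A records):
cat >> /etc/hosts <<'EOF'
192.168.1.11  gluster1.example.com gluster1
192.168.1.12  gluster2.example.com gluster2
192.168.1.13  gluster3.example.com gluster3
EOF

# Probe peers by FQDN rather than by IP:
gluster peer probe gluster2.example.com
gluster peer probe gluster3.example.com

# Every peer should report "Peer in Cluster (Connected)":
gluster peer status
```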
___
You should give an example, as it's not clear enough.
Best Regards,
Strahil Nikolov
On Fri, Aug 19, 2022 at 13:53, Facundo Badaracco wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy S
Thx Dave!
Then my problem is the OS. When I activate the 2nd NIC, the first one becomes
inaccessible. I should do something with routing, right?
On Fri., Aug 19, 2022, 01:13, Dave Lennox <
david.len...@frontlinedigital.com.au> wrote:
> Facundo,
>
> That should be fine, the requirement for
Facundo,
That should be fine; the requirement for different networks is driven by
bandwidth and performance, I think, more than anything else. As long as you have
two different interfaces and they have valid DNS records, oVirt wouldn't
actually know how the networking switching and routing is c
Yes. Keep in mind that the bricks (hostname/IP + mountpoint combo) should be part
of the Gluster network. The IP you use while mounting is used only to retrieve
the volume info (and the bricks), while the brick FQDN/hostname/IP is used for
the actual data transfer. If you use LACP, you can combine al
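A minimal mount sketch of that split (server and volume names are made up): the server named in the mount only hands out the volume layout, after which the client talks to each brick directly, so extra servers can be listed purely as volfile fallbacks.

```shell
# "gluster1..3.example.com" and volume "data" are hypothetical.
# gluster1 only serves the volume info here; reads/writes then go
# straight to whichever hosts the volfile lists as bricks.
mount -t glusterfs \
  -o backup-volfile-servers=gluster2.example.com:gluster3.example.com \
  gluster1.example.com:/data /mnt/data
```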
Under your Datacenter or Cluster (one of those), there is a tick box to enable
the Gluster service; make sure that's ticked.
What about in the CLI, do the gluster bricks show there?
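To answer that last question from the CLI, something along these lines (the volume name "data" is a placeholder):

```shell
gluster volume list        # volumes known to this gluster cluster
gluster volume info data   # brick list, replica layout, options
gluster volume status data # per-brick online state, ports, PIDs
```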
Keep in mind that if you have multiple 1Gbit links, you can utilize them all.
For example, you can use LACP with layer3+4 hashing, and because each
brick uses a different port, the connectivity will be done on a separate link.
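A sketch of that bond with NetworkManager, assuming hypothetical interface names eth0/eth1 (the switch ports must also be configured for LACP):

```shell
# 802.3ad = LACP; layer3+4 hashes on IP address + TCP/UDP port, so
# connections to different brick ports can land on different member links.
nmcli con add type bond ifname bond0 con-name bond0 \
  bond.options "mode=802.3ad,xmit_hash_policy=layer3+4,miimon=100"
nmcli con add type ethernet ifname eth0 master bond0
nmcli con add type ethernet ifname eth1 master bond0
nmcli con up bond0
```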
Best Regards,
Strahil Nikolov
On Mon, Mar 7, 2022 at 10:35,
Hi all,
just for info: the problem wasn't glusterfs or any oVirt-related
configuration, just a bandwidth cap and throttling. Doing the maths, the
performance is 'OK'.
Regards,
Francesco
On 03/03/2022 12:09, francesco--- via Users wrote:
Hi all,
I'm running a glusterFS setup v 8.6 with tw
Please be careful. We used a Prometheus plugin that requested
the gluster status every minute, and we have 50 VMs running over three servers.
When the system is under load, the answer takes more than one minute.
After three weeks the glusterfs was overloaded and all VMs were qui
I use the nagios check_rhv plugin, it has support for monitoring GlusterFS
as well: https://github.com/rk-it-at/check_rhv
On Tue, Sep 7, 2021 at 8:39 AM Jiří Sléžka wrote:
> Hi,
>
> On 9/7/21 1:05 PM, si...@justconnect.ie wrote:
> > Hi All,
> >
> > Does anyone have recommendations for GlusterFS
Hi,
On 9/7/21 1:05 PM, si...@justconnect.ie wrote:
Hi All,
Does anyone have recommendations for GlusterFS monitoring/alerting software and
or plugins.
I am using Zabbix and this simple plugin
https://github.com/Lelik13a/Zabbix-GluserFS
there are probably more sophisticated solutions but th
I can't call it "resolved", but it's up to you.
I would look at the gluster logs for clues.
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
It's resolved, but I had to rebuild the cluster and lost some data.
Hi,
first check that the brick is mounted. Then you can force start the volume, which
will force the brick to be started.
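That sequence, sketched with a placeholder volume name and brick path:

```shell
# 1. Confirm the brick filesystem is actually mounted
#    (path is a placeholder -- use your real brick mountpoint):
grep /gluster_bricks/data /proc/mounts

# 2. Find bricks whose "Online" column shows N:
gluster volume status data

# 3. Force start restarts the offline brick processes without
#    disturbing bricks that are already running:
gluster volume start data force
```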
Best Regards,
Strahil Nikolov
Sent from Yahoo Mail on Android
On Fri, Aug 20, 2021 at 22:13,
eev...@digitaldatatechs.com wrote: I have
oVirt 4.3 on 3 CentOS 7 with Glu
Hi Jayme,
On 7/8/21 12:54 PM, Jayme wrote:
I have observed this behaviour recently and in the past on 4.3 and 4.4,
and in my case it's almost always following an oVirt upgrade. After
upgrade (especially upgrades involving glusterfs) I'd have bricks
randomly go down like you're describing for abo
I have observed this behaviour recently and in the past on 4.3 and 4.4, and
in my case it's almost always following an oVirt upgrade. After upgrade
(especially upgrades involving glusterfs) I'd have bricks randomly go down
like you're describing for about a week or so after upgrade and I'd have to
ma
I would recommend also asking on the glusterfs users mailing list about this.
On Wed, Jul 7, 2021 at 21:09 Jiří Sléžka wrote:
> Hello,
>
> I have 3 node HCI cluster with oVirt 4.4.6 and CentOS8.
>
> From time to time a (I believe) random brick on a random host goes down
> because health-ch
Sent from my iPad
> On May 7, 2021, at 3:06 PM, eev...@digitaldatatechs.com wrote:
>
> This helps RHEL and CentOS machines utilize glusterfs and actually speeds the
> VM up.
> I hope this will help someone. If you want the URL for the article, just ask.
I (and others) would appreciate the UR
oVirt 4.4.X is using Gluster v7
Best Regards,
Strahil Nikolov
On August 17, 2020 at 19:15:54 GMT+03:00, supo...@logicworks.pt wrote:
>Hello,
>
>What is the compatibility gluster version to work with oVirt 4.4.1 ?
>
>Thanks
>Digital Data Services LLC.
>304.660.9080
>
>
>-Original Message-
>From: Darrell Budic
>Sent: Friday, February 14, 2020 4:58 PM
>To: eev...@digitaldatatechs.com
>Cc: users
>Subject: [ovirt-users] Re: glusterfs
>
>Hi Eric-
>
>Glad you got through that part.
italdatatechs.com
Cc: users
Subject: [ovirt-users] Re: glusterfs
Hi Eric-
Glad you got through that part. I don't use iSCSI-backed volumes for my gluster
storage, so I don't have much advice for you there. I've cc'd the oVirt users list
back in; someone there may be able to help you further. It's goo
> Digital Data Services LLC.
> 304.660.9080
>
>
> -Original Message-
> From: Darrell Budic
> Sent: Friday, February 14, 2020 2:58 PM
> To: eev...@digitaldatatechs.com
> Subject: Re: [ovirt-users] Re: glusterfs
>
> You don’t even need to clean everythi
ssage-
From: Darrell Budic
Sent: Friday, February 14, 2020 11:54 AM
To: eev...@digitaldatatechs.com
Cc: users@ovirt.org
Subject: [ovirt-users] Re: glusterfs
You can add it to a running oVirt cluster, it just isn't as automatic. First
you need to enable Gluster in the cluster settings
On February 14, 2020 6:53:47 PM GMT+02:00, Darrell Budic
wrote:
>You can add it to a running oVirt cluster, it just isn't as
>automatic. First you need to enable Gluster at the cluster settings
>level for a new or existing cluster. Then either install/reinstall your
>nodes, or install glust
You can add it to a running oVirt cluster, it just isn't as automatic. First
you need to enable Gluster at the cluster settings level for a new or
existing cluster. Then either install/reinstall your nodes, or install gluster
manually and add the vdsm-gluster packages. You can create a stand a
I'm not an expert, but based on my experience I can recommend you to:
1. Check time sync (NTP/chrony)
2. Check your volumes are configured as described here: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/configuring_red_hat_virtualization_with_red_hat_gluster_s
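Rough CLI equivalents of those two checks (volume name "data" is a placeholder):

```shell
# 1. Clock sync -- run on every node; offsets should be tiny:
chronyc tracking

# 2. Compare the volume's options with the recommended virtualization
#    settings; "group virt" applies the whole option set shipped in
#    /var/lib/glusterd/groups/virt:
gluster volume info data
gluster volume set data group virt
```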
Hi,
thanks for helping.
Here gluster vol info:
Volume Name: GlusterVol
Type: Distributed-Replicate
Volume ID: 22ea3d7d-4435-423a-a06c-504fa9b36ada
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 4 = 8
Transport-type: tcp
Bricks:
Brick1: 172.31.100.150:/home/admin/ovirt_3/data
Brick2: 17
On Thu, Dec 27, 2018 at 4:46 PM Sahina Bose wrote:
>
> On Mon, Dec 17, 2018 at 2:01 PM wrote:
> >
> > Hello everyone,
> >
> > I've installed oVirt on 8 nodes of a MacroServer (SuperMicro Microcloud): 7
> > Nodes with oVirt Node installed and 1 Node with Centos 7 and oVirt
> > installed. The las
On Mon, Dec 17, 2018 at 2:01 PM wrote:
>
> Hello everyone,
>
> I've installed oVirt on 8 nodes of a MacroServer (SuperMicro Microcloud): 7
> Nodes with oVirt Node installed and 1 Node with Centos 7 and oVirt installed.
> The last one works like hypervisor and node.
>
> I would use all the storag
Could someone help me?
On 18.11.2018 8:16, Shawn Weeks wrote:
Currently when LibgfApiSupported is enabled it looks like the startup
command for the VM has the Gluster hostname always set to the same host.
How does that work if that host is down? In my case GlusterFS has 3x
replication and distribution enabled but if the
On Mon, Nov 5, 2018 at 8:09 PM Sandro Bonazzola wrote:
>
>
>
> On Sun, Nov 4, 2018 at 16:24 Jarosław Prokopowski wrote:
>>
>> Hi Guys,
>>
>> I would like to use GlusterFS distributed-replicated with arbiter volume on
>> 4 nodes for oVirt.
>> Can you tell me what tuning paramet
On Sun, Nov 4, 2018 at 16:24 Jarosław Prokopowski <
jprokopow...@gmail.com> wrote:
> Hi Guys,
>
> I would like to use GlusterFS distributed-replicated with arbiter volume
> on 4 nodes for oVirt.
> Can you tell me what tuning parameters should I set for such volume for
> best VM per
2018-07-04 12:23 GMT+02:00 Chris Boot :
> All,
>
> Now that GlusterFS 4.1 LTS has been released, and is the "default"
> version of GlusterFS in CentOS (you get this from
> "centos-release-gluster" now), what's the status with regards to oVirt?
>
> How badly is oVirt 4.2.4 likely to break if one we
Hi,
Your glusterfs version is also needed to check whether it's the same memory leak.
You mentioned that you used 3.12.x; we need to know which exact version it is.
The above bug was fixed in 3.12.2; if you used 3.12.1, then the fix
above will work. If you have been using a version higher th
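For reference, the installed version can be checked with standard commands (shown here as a sketch):

```shell
gluster --version | head -1   # client/CLI version
rpm -q glusterfs-server       # exact package release on RPM-based hosts
```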
On Mon, Jun 18, 2018 at 11:57 AM, Edward Clay
wrote:
> It looks like we are experiencing a bug in the version of glusterfs
> included with ovirt 4.2.3. It looks like glusterfs 3.12.x has an issue
> where it consumes large amounts of RAM, which has caused our HVs to report
> storage errors and pause V
Adding Krutika to look at gluster logs
On Mon, Jun 18, 2018 at 10:39 AM, wrote:
> from glusterfs/rhev_datacenter
> [2018-06-18 12:32:50.854668] W [socket.c:593:__socket_rwv] 0-glusterfs:
> readv on 172.16.224.10:24007 failed (No data available)
> [2018-06-18 12:33:38.194322] C
> [rpc-clnt-ping.
agent.log is here
https://pastebin.com/tGyeBNr3
interesting part of agent.log is here
https://pastebin.com/tGyeBNr3
from glusterfs/rhev_datacenter
[2018-06-18 12:32:50.854668] W [socket.c:593:__socket_rwv] 0-glusterfs: readv
on 172.16.224.10:24007 failed (No data available)
[2018-06-18 12:33:38.194322] C
[rpc-clnt-ping.c:166:rpc_clnt_ping_timer_expired] 0-engine-client-0: server
172.16.224.10:49152 has not re
Can you provide the engine mount logs under
/var/log/glusterfs/rhev_data-center*engine.log and also the
ovirt-ha/agent.log?
On Mon, Jun 18, 2018 at 8:42 AM, wrote:
> it seems that the redundancy of glusterfs is working. It doesn't show in the mount
> options but it is there in the processes. This must be
it seems that the redundancy of glusterfs is working. It doesn't show in the mount
options but it is there in the processes. This must be something else that
caused the engine to pause. So ignore this. Is there a way to debug why the
hosted engine paused?