[ovirt-users] Re: Move Hosted Engine VM to a different storage domain

2020-04-28 Thread Anton Louw
Hi,

Thank you for the reply. I am building another environment today, so I will run 
through the backup and restore again. I do recall one issue I had with the 
restore was that it gave an error that one of the storage domains was already 
in use. This was a new storage domain I had added, with no VMs on it.

I just want to make sure, I probably cannot deploy the HE on a storage domain 
that is in maintenance, correct? I will try and create a new storage domain 
again, but remove it from the data center, and perhaps see if the HE will 
deploy to it.

The main goal was to have no downtime, but if it is going to be too complex, I 
think my next option will be to create a new HE, and attach the current storage 
domains. Obviously it is going to take a bit of planning, as I will need to 
configure all the networks etc. from scratch again.
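For anyone following this thread, the backup-and-restore route being discussed is driven by the engine-backup tool. The commands below are only a hedged sketch of the oVirt 4.x flow (file names are placeholders, and exact options may differ in your version); check the documentation for your release before running them:

```shell
# On the current hosted engine VM: take a full backup.
# (File names here are placeholders, not values from this thread.)
engine-backup --mode=backup --scope=all --file=engine.bak --log=engine-backup.log

# On a host that can reach the *new* storage domain: redeploy the hosted
# engine, restoring the backup into the new deployment. The deploy script
# asks which storage domain to use, which is how the HE moves storage.
hosted-engine --deploy --restore-from-file=engine.bak
```

The restore-based deploy is also where "storage domain already in use" errors tend to surface, so it is worth testing on an isolated setup first, as suggested below.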

Thanks

From: Strahil Nikolov 
Sent: 26 April 2020 08:55
To: Yedidyah Bar David ; Anton Louw 

Cc: users@ovirt.org
Subject: Re: [ovirt-users] Re: Move Hosted Engine VM to a different storage 
domain

On April 26, 2020 9:39:07 AM GMT+03:00, Yedidyah Bar David
<d...@redhat.com> wrote:
>On Fri, Apr 24, 2020 at 1:04 PM Anton Louw
><anton.l...@voxtelecom.co.za>
>wrote:
>
>>
>>
>> Hi All,
>>
>>
>>
>> I know this question has been asked before, by myself included. I was
>> hoping that someone has run through the exercise of moving the hosted
>> engine VM to a different storage domain. I have tried many routes,
>but the
>> backup and restore does not work for me.
>>
>
>The "standard answer" is backup and restore. Why does it not work?
>
>
>>
>>
>> Is there anybody that can perhaps give me some guidelines or a
>process I
>> can follow?
>>
>
>I didn't try that myself.
>
>The best guidelines I can give you are: Try first on a test system.
>Do
>the backup on the real machine, create some isolated VM (isolated so
>that
>it does not interfere with your hosts/storage) somewhere to be used as
>a
>test host (or a physical machine if you have one), some storage
>somewhere,
>and restore on it. Make it work. Document what you needed to do. Ask
>here
>with specific questions if/when you have them. Then do on the
>production
>setup.
>
>Also clarify your needs. Do you need no-downtime for the VMs? If so,
>that's
>more complex. If you don't, it might be enough/simpler to deploy a new
>setup and just import the existing storage. Do you have HA VMs? etc.
>
>
>>
>>
>> The reason I need to move the HE VM is because we are decommissioning
>the
>> current storage array where the HE VM is located.
>>
>
>Good luck!
>
>Best regards,
>
>
>>
>>
>> Thank you very much
>>
>> *Anton Louw*
>> *Cloud Engineer: Storage and Virtualization* at *Vox*

[ovirt-users] User Groups

2020-04-28 Thread Anton Louw
Hi Everybody,

Is there a way to see My Groups through the Web UI? I have tried looking 
around, but cannot see anything. Can this only be seen via the backend in the 
HE?

Thanks


Anton Louw
Cloud Engineer: Storage and Virtualization
__
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.l...@voxtelecom.co.za

www.vox.co.za






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B2275QLUWDTKMYR2SEKIENBRHFOKGFUS/


[ovirt-users] Re: Gluster deployment fails with missing UUID

2020-04-28 Thread Jayme
Has the drive been used before? It might have an existing partition/filesystem
on it. If you are sure it's fine to overwrite, try running wipefs -a
/dev/sdb on all hosts. Also make sure there aren't any filters set up in
lvm.conf (there shouldn't be on a fresh install, but it's worth checking).
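A minimal sketch of the checks suggested above, assuming /dev/sdb really is the disposable disk (note that `wipefs -a` is destructive):

```shell
DEV=/dev/sdb   # the disk intended for the gluster brick -- double-check this!

# Non-destructive: list any existing filesystem/partition/RAID signatures.
wipefs "$DEV"

# Destructive: clear all signatures so LVM/gluster can use the disk.
# Only run this once you are sure nothing on the disk is needed.
wipefs -a "$DEV"

# Look for LVM filters that could explain "Device /dev/sdb excluded by a filter":
grep -nE '^[[:space:]]*(filter|global_filter)' /etc/lvm/lvm.conf
```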

On Tue, Apr 28, 2020 at 8:22 PM Shareef Jalloq  wrote:

> Hi,
>
> I'm running the gluster deployment flow and am trying to use a second
> drive as the gluster volume.  It's /dev/sdb on each node and I'm using the
> JBOD mode.
>
> I'm seeing the following gluster ansible task fail and a google search
> doesn't bring up much.
>
> TASK [gluster.infra/roles/backend_setup : Create volume groups]
> 
>
> [volume group creation failure output snipped]


[ovirt-users] Gluster deployment fails with missing UUID

2020-04-28 Thread Shareef Jalloq
Hi,

I'm running the gluster deployment flow and am trying to use a second drive
as the gluster volume.  It's /dev/sdb on each node and I'm using the JBOD
mode.

I'm seeing the following gluster ansible task fail and a google search
doesn't bring up much.

TASK [gluster.infra/roles/backend_setup : Create volume groups]


failed: [ovirt-gluster-01.jalloq.co.uk] (item={u'vgname':
u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item",
"changed": false, "err": "  Couldn't find device with uuid
Y8FVs8-LP6w-R6CR-Yosh-c40j-17XP-ttP3Np.\n  Couldn't find device with uuid
tA4lpO-hM9f-S8ci-BdPh-lTve-0Rh1-3Bcsfy.\n  Couldn't find device with uuid
RG3w6j-yrxn-2iMw-ngd0-HgMS-i5dP-CGjaRk.\n  Couldn't find device with uuid
lQV02e-TUZE-PXCd-GWEd-eGqe-c2xC-pauHG7.\n  Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Shareef Jalloq
OK, that's got it, thanks.  I really need to put some effort into sharpening
my networking knowledge.

On Tue, Apr 28, 2020 at 11:20 PM Jayme  wrote:

> Oh and also gluster interface should not be set as default route either.
>
> On Tue, Apr 28, 2020 at 7:19 PM Jayme  wrote:
>
>> On gluster interface try setting gateway to 10.0.1.1
>>
>> If that doesn’t work let us know where the process is failing currently
>> and with what errors etc.
>>
>> On Tue, Apr 28, 2020 at 6:54 PM Shareef Jalloq 
>> wrote:
>>
>>> [earlier messages and config dumps snipped]

[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
Oh, and also the gluster interface should not be set as the default route either.

On Tue, Apr 28, 2020 at 7:19 PM Jayme  wrote:

> On gluster interface try setting gateway to 10.0.1.1
>
> If that doesn’t work let us know where the process is failing currently
> and with what errors etc.
>
> On Tue, Apr 28, 2020 at 6:54 PM Shareef Jalloq 
> wrote:
>
>> [earlier messages and config dumps snipped]

[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
On the gluster interface, try setting the gateway to 10.0.1.1.

If that doesn't work, let us know where the process is currently failing and
with what errors, etc.
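Concretely, a hypothetical corrected ifcfg-p1p1 for the first node would put the gateway on the gluster subnet and stop that interface from claiming the default route. Addresses are the example values posted earlier in this thread; only GATEWAY and DEFROUTE differ from the files shown:

```
# /etc/sysconfig/network-scripts/ifcfg-p1p1 -- sketch of the suggested fix
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=no          # let the management interface (em1) keep the default route
NAME=p1p1
DEVICE=p1p1
ONBOOT=yes
IPADDR=10.0.1.31
PREFIX=24
GATEWAY=10.0.1.1     # gateway on the gluster subnet, not 10.0.0.1
DNS1=10.0.0.1
```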

On Tue, Apr 28, 2020 at 6:54 PM Shareef Jalloq  wrote:

> [earlier messages and config dumps snipped]

[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Strahil Nikolov
Hey Shareef,

if you have a slow ssh login, check the A and PTR records of both the ssh
client and the server. By default sshd looks up the PTR record of the client,
and this can take some time to time out (or to reach a working DNS server).

Also, you could have issues on 7.3 systems, as systemd does not properly clean
up orphaned session/scope files. There is a Red Hat solution for that which
almost works :)
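A quick way to check the reverse-DNS lookup described above (the IP is an example value from this thread; `UseDNS no` is the usual sshd-side workaround):

```shell
CLIENT_IP=10.0.0.31   # example value from this thread; use your ssh client's IP

# Does a PTR (reverse) record exist for the client? Empty output or a long
# wait here is exactly what shows up as a slow ssh login prompt.
dig +short -x "$CLIENT_IP"

# Server-side workaround: tell sshd not to do the reverse lookup.
# (Add to /etc/ssh/sshd_config, then restart sshd.)
#   UseDNS no
```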

Best Regards,
Strahil Nikolov






On Wednesday, April 29, 2020 at 00:50:14 GMT+3, Jayme wrote: 





 You should use host names for gluster like gluster1.hostname.com that resolve 
to the ip chosen for gluster. 

For my env I have something like this:

Server0:
Host0.example.com 10.10.0.100
Gluster0.example.com 10.0.1.100

Same thing for the other two servers, except for hostnames and IPs of course. 

Use the gluster hostnames for the first step, then the server hostnames for the 
others. 

I made sure I could ssh to and from both hostX and glusterX on each server. 

On Tue, Apr 28, 2020 at 6:34 PM Shareef Jalloq  wrote:
> Perhaps it's me, but these two documents seem to disagree on what hostnames 
> to use when setting up.  Can someone clarify.
> 
> The main documentation here: 
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>  talks about copying the SSH keys to the gluster host address but the old 
> blog post with an outdated interface here: 
> https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>  uses the node address.
> 
> In the first step of the hyperconverged Gluster wizard, when it asks for 
> "Gluster network address", is this wanting the host IP or the IP of the 
> Gluster interface?
> 
> On Tue, Apr 28, 2020 at 10:24 PM Shareef Jalloq  wrote:
>> OK, thanks both, that seems to have fixed that issue.
>> 
>> Is there any other config I need to do because the next step in the 
>> deployment guide of copying SSH keys seems to take over a minute just to 
>> prompt for a password.  Something smells here.
>> 
>> On Tue, Apr 28, 2020 at 7:32 PM Jayme  wrote:
>>> You should be using a different subnet for each. I.e. 10.0.0.30 and 
>>> 10.0.1.30 for example
>>> 
>>> On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq  wrote:
 Hi,
 
 I'm in the process of trying to set up an HCI 3 node cluster in my homelab 
 to better understand the Gluster setup and have failed at the first 
 hurdle. I've set up the node interfaces on the built in NIC and am using a 
 PCI NIC for the Gluster traffic - at the moment this is 1Gb until I can 
 upgrade - and I've assigned a static IP to both interfaces and also have 
 both entries in my DNS.
 
 From any of the three nodes, I can ping the gateway, the other nodes, any 
 external IP but I can't ping any of the Gluster NICs.  What have I 
 forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is the 
 motherboard NIC and p1p1 is port 1 of an Intel NIC.  The 
 /etc/sysconfig/network-scripts/ifcfg- scripts are identical aside from 
 IPADDR, NAME, DEVICE and UUID fields.
 
 Thanks, Shareef.
 
  
 [root@ovirt-node-00 ~]# ip addr show
 
 
 
 2: p1p1:  mtu 1500 qdisc mq state UP 
 group default qlen 1000
 
     link/ether a0:36:9f:1f:f9:78 brd ff:ff:ff:ff:ff:ff
 
     inet 10.0.0.34/24 brd 10.0.0.255 scope global noprefixroute p1p1
 
        valid_lft forever preferred_lft forever
 
     inet6 fd4d:e9e3:6f5:1:a236:9fff:fe1f:f978/64 scope global mngtmpaddr 
 dynamic 
 
        valid_lft 7054sec preferred_lft 7054sec
 
     inet6 fe80::a236:9fff:fe1f:f978/64 scope link 
 
        valid_lft forever preferred_lft forever
 
 
 
 4: em1:  mtu 1500 qdisc pfifo_fast state 
 UP group default qlen 1000
 
     link/ether 98:90:96:a1:16:ad brd ff:ff:ff:ff:ff:ff
 
     inet 10.0.0.31/24 brd 10.0.0.255 scope global noprefixroute em1
 
        valid_lft forever preferred_lft forever
 
     inet6 fd4d:e9e3:6f5:1:9a90:96ff:fea1:16ad/64 scope global mngtmpaddr 
 dynamic 
 
        valid_lft 7054sec preferred_lft 7054sec
 
     inet6 fe80::9a90:96ff:fea1:16ad/64 scope link 
 
        valid_lft forever preferred_lft forever
 
 
 
>>> 
>> 
> 
[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Shareef Jalloq
Thanks.  I have the DNS but must have my interface config wrong.  On my
first node I have two interfaces in use, em1 for the management interface
and p1p1 for the Gluster interface.

[root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=no

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=em1

UUID=724cddb2-8ce9-43ea-8c0e-e1aff19e72cc

DEVICE=em1

ONBOOT=yes

IPADDR=10.0.0.31

PREFIX=24

GATEWAY=10.0.0.1

DNS1=10.0.0.1


[root@ovirt-node-00 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p1p1

TYPE=Ethernet

PROXY_METHOD=none

BROWSER_ONLY=no

BOOTPROTO=none

DEFROUTE=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=no

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_FAILURE_FATAL=no

IPV6_ADDR_GEN_MODE=stable-privacy

NAME=p1p1

UUID=1adb45d3-4dac-4bac-bb19-257fb9c7016b

DEVICE=p1p1

ONBOOT=yes

IPADDR=10.0.1.31

PREFIX=24

GATEWAY=10.0.0.1

DNS1=10.0.0.1

On Tue, Apr 28, 2020 at 10:47 PM Jayme  wrote:

> [earlier quoted messages snipped]

[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
You should use hostnames for gluster, like gluster1.hostname.com, that
resolve to the IP chosen for gluster.

For my env I have something like this:

Server0:
Host0.example.com 10.10.0.100
Gluster0.example.com 10.0.1.100

Same thing for the other two servers, except for the hostnames and IPs of course.

Use the gluster hostnames for the first step, then the server hostnames for
the others.

I made sure I could ssh to and from both hostX and glusterX on each server.
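The layout above can be pinned down in /etc/hosts (or DNS). A hypothetical sketch for a three-node lab, with a quick check that the gluster names sit on the storage subnet (all names and addresses are examples, not a real setup):

```shell
# Example name layout: management names on one subnet, gluster names on
# another. Written to a demo file rather than the real /etc/hosts.
cat <<'EOF' > hosts.example
10.10.0.100  host0.example.com
10.10.0.101  host1.example.com
10.10.0.102  host2.example.com
10.0.1.100   gluster0.example.com
10.0.1.101   gluster1.example.com
10.0.1.102   gluster2.example.com
EOF

# Sanity check: every glusterN name must resolve to a storage-subnet address.
grep -c '^10\.0\.1\..*gluster' hosts.example   # expect 3
```

With a layout like this, the gluster wizard gets the glusterN names and everything else uses the hostN names.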

On Tue, Apr 28, 2020 at 6:34 PM Shareef Jalloq  wrote:

> Perhaps it's me, but these two documents seem to disagree on what
> hostnames to use when setting up.  Can someone clarify.
>
> The main documentation here:
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>  talks
> about copying the SSH keys to the gluster host address but the old blog
> post with an outdated interface here:
> https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
>  uses
> the node address.
>
> In the first step of the hyperconverged Gluster wizard, when it asks for
> "Gluster network address", is this wanting the host IP or the IP of the
> Gluster interface?
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVMZSD7YPPSCFO6RKTRKA2BAVJGAFDRE/


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Shareef Jalloq
Perhaps it's me, but these two documents seem to disagree on what hostnames
to use when setting up.  Can someone clarify?

The main documentation here:
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
talks
about copying the SSH keys to the gluster host address but the old blog
post with an outdated interface here:
https://blogs.ovirt.org/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
uses
the node address.

In the first step of the hyperconverged Gluster wizard, when it asks for
"Gluster network address", is this wanting the host IP or the IP of the
Gluster interface?

On Tue, Apr 28, 2020 at 10:24 PM Shareef Jalloq 
wrote:

> OK, thanks both, that seems to have fixed that issue.
>
> Is there any other config I need to do because the next step in the
> deployment guide of copying SSH keys seems to take over a minute just to
> prompt for a password.  Something smells here.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WDDWF6RH6KNXO72XVCG77I4UKAIR6GAR/


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Shareef Jalloq
OK, thanks both, that seems to have fixed that issue.

Is there any other config I need to do because the next step in the
deployment guide of copying SSH keys seems to take over a minute just to
prompt for a password.  Something smells here.

On Tue, Apr 28, 2020 at 7:32 PM Jayme  wrote:

> You should be using a different subnet for each. I.e. 10.0.0.30 and
> 10.0.1.30 for example
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QIKDGOQRLB6NTJYVC2ZBPQPVJAEDFX2G/


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Jayme
You should be using a different subnet for each. I.e. 10.0.0.30 and
10.0.1.30 for example

On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq  wrote:

> Hi,
>
> I'm in the process of trying to set up an HCI 3 node cluster in my homelab
> to better understand the Gluster setup and have failed at the first hurdle.
> I've set up the node interfaces on the built in NIC and am using a PCI NIC
> for the Gluster traffic - at the moment this is 1Gb until I can upgrade -
> and I've assigned a static IP to both interfaces and also have both entries
> in my DNS.
>
> From any of the three nodes, I can ping the gateway, the other nodes, any
> external IP but I can't ping any of the Gluster NICs.  What have I
> forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is the
> motherboard NIC and p1p1 is port 1 of an Intel NIC.  The
> /etc/sysconfig/network-scripts/ifcfg- scripts are identical aside from
> IPADDR, NAME, DEVICE and UUID fields.
>
> Thanks, Shareef.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I2UVP4THQIODVBRN46IHDYYDIWBFLG4E/


[ovirt-users] Re: Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Strahil Nikolov
On April 28, 2020 8:46:39 PM GMT+03:00, Shareef Jalloq  
wrote:
>Hi,
>
>I'm in the process of trying to set up an HCI 3 node cluster in my
>homelab
>to better understand the Gluster setup and have failed at the first
>hurdle.
>I've set up the node interfaces on the built in NIC and am using a PCI
>NIC
>for the Gluster traffic - at the moment this is 1Gb until I can upgrade
>-
>and I've assigned a static IP to both interfaces and also have both
>entries
>in my DNS.
>
>From any of the three nodes, I can ping the gateway, the other nodes,
>any
>external IP but I can't ping any of the Gluster NICs.  What have I
>forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is
>the
>motherboard NIC and p1p1 is port 1 of an Intel NIC.  The
>/etc/sysconfig/network-scripts/ifcfg- scripts are identical aside
>from
>IPADDR, NAME, DEVICE and UUID fields.
>
>Thanks, Shareef.
>
>[root@ovirt-node-00 ~]# ip addr show
>
>
>2: p1p1:  mtu 1500 qdisc mq state UP
>group
>default qlen 1000
>
>link/ether a0:36:9f:1f:f9:78 brd ff:ff:ff:ff:ff:ff
>
>inet 10.0.0.34/24 brd 10.0.0.255 scope global noprefixroute p1p1
>
>   valid_lft forever preferred_lft forever
>
>   inet6 fd4d:e9e3:6f5:1:a236:9fff:fe1f:f978/64 scope global mngtmpaddr
>dynamic
>
>   valid_lft 7054sec preferred_lft 7054sec
>
>inet6 fe80::a236:9fff:fe1f:f978/64 scope link
>
>   valid_lft forever preferred_lft forever
>
>
>4: em1:  mtu 1500 qdisc pfifo_fast
>state
>UP group default qlen 1000
>
>link/ether 98:90:96:a1:16:ad brd ff:ff:ff:ff:ff:ff
>
>inet 10.0.0.31/24 brd 10.0.0.255 scope global noprefixroute em1
>
>   valid_lft forever preferred_lft forever
>
>   inet6 fd4d:e9e3:6f5:1:9a90:96ff:fea1:16ad/64 scope global mngtmpaddr
>dynamic
>
>   valid_lft 7054sec preferred_lft 7054sec
>
>inet6 fe80::9a90:96ff:fea1:16ad/64 scope link
>
>   valid_lft forever preferred_lft forever

Use separate subnets, or change the netmask.
Most probably everything for  10.0.0.0/24 is going through the default gateway.
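The overlap Strahil describes can be sanity-checked even offline: two addresses whose first three octets match fall in the same /24, so both NICs install overlapping connected routes and replies may leave via the "wrong" interface. A rough sketch using the addresses from the `ip addr` output (the helper function is hypothetical):

```shell
# em1=10.0.0.31/24 and p1p1=10.0.0.34/24 are in the same /24, which is the
# conflict; moving the gluster NIC to e.g. 10.0.1.0/24 resolves it.
same_net24() {
  # Compare the /24 network of two dotted-quad addresses by dropping
  # the last octet of each and comparing the remainder.
  [ "${1%.*}" = "${2%.*}" ]
}

same_net24 10.0.0.31 10.0.0.34 && echo "conflict: both NICs share 10.0.0.0/24"
same_net24 10.0.0.31 10.0.1.34 || echo "ok: gluster NIC on a separate /24"
```

On a live host, `ip route` would show the two overlapping `10.0.0.0/24 dev ...` entries directly.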

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GRWPEJM5GLWEOWWT6GLQRXHL2XAHWYF6/


[ovirt-users] Can't ping gluster interfaces for HCI setup

2020-04-28 Thread Shareef Jalloq
Hi,

I'm in the process of trying to set up an HCI 3 node cluster in my homelab
to better understand the Gluster setup and have failed at the first hurdle.
I've set up the node interfaces on the built in NIC and am using a PCI NIC
for the Gluster traffic - at the moment this is 1Gb until I can upgrade -
and I've assigned a static IP to both interfaces and also have both entries
in my DNS.

From any of the three nodes, I can ping the gateway, the other nodes, any
external IP but I can't ping any of the Gluster NICs.  What have I
forgotten to do? Here's the relevant output of 'ip addr show'.  em1 is the
motherboard NIC and p1p1 is port 1 of an Intel NIC.  The
/etc/sysconfig/network-scripts/ifcfg- scripts are identical aside from
IPADDR, NAME, DEVICE and UUID fields.

Thanks, Shareef.

[root@ovirt-node-00 ~]# ip addr show


2: p1p1:  mtu 1500 qdisc mq state UP group
default qlen 1000

link/ether a0:36:9f:1f:f9:78 brd ff:ff:ff:ff:ff:ff

inet 10.0.0.34/24 brd 10.0.0.255 scope global noprefixroute p1p1

   valid_lft forever preferred_lft forever

inet6 fd4d:e9e3:6f5:1:a236:9fff:fe1f:f978/64 scope global mngtmpaddr
dynamic

   valid_lft 7054sec preferred_lft 7054sec

inet6 fe80::a236:9fff:fe1f:f978/64 scope link

   valid_lft forever preferred_lft forever


4: em1:  mtu 1500 qdisc pfifo_fast state
UP group default qlen 1000

link/ether 98:90:96:a1:16:ad brd ff:ff:ff:ff:ff:ff

inet 10.0.0.31/24 brd 10.0.0.255 scope global noprefixroute em1

   valid_lft forever preferred_lft forever

inet6 fd4d:e9e3:6f5:1:9a90:96ff:fea1:16ad/64 scope global mngtmpaddr
dynamic

   valid_lft 7054sec preferred_lft 7054sec

inet6 fe80::9a90:96ff:fea1:16ad/64 scope link

   valid_lft forever preferred_lft forever
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7UESGZ6MJXPVKN2UZJTO4OZYGOQIWHE/


[ovirt-users] Re: Can't deploy engine vm with ovirt-hosted-engine-setup

2020-04-28 Thread Sandro Bonazzola
On Mon, Apr 6, 2020 at 9:55 PM Gabriel Bueno  wrote:

> Hi,
> I'm trying deploy an engine vm with ovirt-hosted-engine-setup.
> After the storage domain configuration, the setup exits with the error:
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_vms":
> [{"affinity_labels": [], "applications": [], "bios": {"boot_menu":
> {"enabled": false}, "type": "i440fx_sea_bios"}, "cdroms": [], "cluster":
> {"href": "/ovirt-engine/api/clusters/8ef534f4-7820-11ea-82a4-00163e2c790d",
> "id": "8ef534f4-7820-11ea-82a4-00163e2c790d"}, "comment": "", "cpu":
> {"architecture": "x86_64", "topology": {"cores": 1, "sockets": 4,
> "threads": 1}}, "cpu_profile": {"href":
> "/ovirt-engine/api/cpuprofiles/58ca604e-01a7-003f-01de-0250", "id":
> "58ca604e-01a7-003f-01de-0250"}, "cpu_shares": 0, "creation_time":
> "2020-04-06 16:12:45.95+00:00", "delete_protected": false,
> "description": "", "disk_attachments": [], "display": {"address":
> "127.0.0.1", "allow_override": false, "certificate": {"content":
> "-BEGIN CERTIFICATE-\n\n-END CERTIFICATE-\n",
> "organization": "xxx", "subject": "O=xxx,CN=xxx"}, "copy_paste_enabled":
> true, "disconnect_action": "LOCK_SCREEN", "file_transfer_enabled": true, "monitors": 1, "port": 5900, "single_qxl_pci": false,
> "smartcard_enabled": false, "type": "vnc"}, "fqdn": "xxx",
> "graphics_consoles": [], "guest_operating_system": {"architecture":
> "x86_64", "codename": "", "distribution": "CentOS Linux", "family":
> "Linux", "kernel": {"version": {"build": 0, "full_version":
> "3.10.0-1062.18.1.el7.x86_64", "major": 3, "minor": 10, "revision": 1062}},
> "version": {"full_version": "7", "major": 7}}, "guest_time_zone": {"name":
> "UTC", "utc_offset": "+00:00"}, "high_availability": {"enabled": false,
> "priority": 0}, "host": {"href":
> "/ovirt-engine/api/hosts/80856993-d4d1-4609-9ad9-7c4bdd2903b6", "id":
> "80856993-d4d1-4609-9ad9-7c4bdd2903b6"}, "host_devices": [], "href":
> "/ovirt-engine/api/vms/e1b55569-5ae4-48c5-a2bf-f109fb4e", "id":
> "e1b55569-5ae4-48c5-a2bf-f109fb4e", "io": {"threads": 1},
> "katello_errata": [], "large_icon": {"href":
> "/ovirt-engine/api/icons/290ffd7e-4bf3-4283-babc-86b675d7a35e", "id":
> "290ffd7e-4bf3-4283-babc-86b675d7a35e"}, "memory": 17179869184, "memory_policy": {"guaranteed":
> 17179869184, "max": 17179869184}, "migration": {"auto_converge": "inherit",
> "compressed": "inherit"}, "migration_downtime": -1, "multi_queues_enabled":
> true, "name": "external-HostedEngineLocal",
> "next_run_configuration_exists": false, "nics": [], "numa_nodes": [],
> "numa_tune_mode": "interleave", "origin": "external", "original_template":
> {"href":
> "/ovirt-engine/api/templates/----", "id":
> "----"}, "os": {"boot": {"devices":
> ["hd"]}, "type": "other"}, "permissions": [], "placement_policy":
> {"affinity": "migratable"}, "quota": {"id":
> "a80b336c-7820-11ea-b435-00163e2c790d"}, "reported_devices": [],
> "run_once": false, "sessions": [], "small_icon": {"href":
> "/ovirt-engine/api/icons/3d06eacd-f9d1-4c14-a228-b16ed454f7c8", "id":
> "3d06eacd-f9d1-4c14-a228-b16ed454f7c8"}, "snapshots": [], "sso":
> {"methods": [{"id": "guest_agent"}]}, "start_paused": false, "stateless": false, "statistics": [], "status": "unknown",
> "storage_error_resume_behaviour": "auto_resume", "tags": [], "template":
> {"href":
> "/ovirt-engine/api/templates/----", "id":
> "----"}, "time_zone": {"name": "Etc/GMT"},
> "type": "desktop", "usb": {"enabled": false}, "watchdogs": []}]},
> "attempts": 24, "changed": false, "deprecations": [{"msg": "The
> 'ovirt_vm_facts' module has been renamed to 'ovirt_vm_info', and the
> renamed one no longer returns ansible_facts", "version": "2.13"}]}
>
> Certificate and sensible information has been deleted...
>
> Someone knows what could be wrong with my environment??
>

The log snippet above is not enough to diagnose the issue.
The deprecation warning can safely be ignored.
Can you share a sos report from the host where you were trying to deploy
the hosted engine?
Be sure to cross-check the content of the report; I'm not sure that sos
filters out the certificate data.
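One hedged way to do that cross-check on an unpacked report is a recursive grep for certificate blocks. A small sketch against a stand-in directory (the directory layout and demo file are illustrative, not a real sos report):

```shell
# Build a tiny stand-in for an unpacked sos report, then list any files
# that still contain PEM certificate blocks and should be scrubbed.
mkdir -p sosreport-demo/etc/pki
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' \
  > sosreport-demo/etc/pki/demo.cer

# Any path printed here carries certificate material.
grep -rl 'BEGIN CERTIFICATE' sosreport-demo
```

The same one-liner can be pointed at the real extracted report directory before it is shared.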




>
> Kind Regards


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.

[ovirt-users] Re: Upgrade scenario to 4.4 for HCI Gluster

2020-04-28 Thread Sandro Bonazzola
Adding +Lev Veyde, +Steve Goodman, +Evgeny Slutsky and +Gobinda Das for
awareness.

On Sat, Apr 18, 2020 at 1:03 PM Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:

> Hello,
> I saw from some mails in the list that the approach for upgrading from 4.3
> to 4.4 will be similar to the 3.6 => 4.0 one.
> Reading the RHV 4.0 Self Hosted Engine Guide it seems it was necessary to
> have a second host, going through this:
> 5.4. Upgrading a RHEV-H-Based Self-Hosted Engine Environment
>
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html/self-hosted_engine_guide/upgrading_a_rhev-h-based_self-hosted_engine_environment
> and also
>
> https://www.ovirt.org/develop/release-management/features/sla/hosted-engine-migration-to-4-0.html
>
> upgrade process involved:
>
> - set global ha maintenance
> - install a new host (and set it as on hosted engine one)
> - migrate engine VM to this host and set the host as the SPM
> - create a backup with engine-backup, verify its contents and copy to the
> host in some directory
> - run the upgrade utility on the host to update the engine VM
> hosted-engine --upgrade-appliance
> this will create a backup floating disk of the engine vm disk for rollback
> purposes and override the existing engine disk with a new one where to
> deploy the new engine version applying also the restore from the
> engine-backup
> --> what are requirements for 4.4 engine VM and so free storage to have on
> engine storage domain?
> The engine-setup will be automatically executed on the new version engine
> VM
> - exit from global maintenance
> - update remaining hosts
>
> Is the above procedure the correct one that I can test on a 4.3.9 lab?
>
> The 3.6 -> 4.0 flow implied that a 4.0 host could run a 3.6 engine VM,
> because the backup disk operation is done by the currently running engine
> itself.
> So it should also hold for 4.3 -> 4.4, and a 4.4 host should be able to
> run a 4.3 engine, correct?
>
> In case of single 4.3.9 ovirt-node-ng host HCI with Gluster, can I apply
> the upgrade in a similar way?
> Something like:
> - create engine-backup and copy over to the host
> - put host into global maintenace
> - shutdown all VMS, engine included
> - enable 4.4. repo
> - install new image-base 4.4
> is this correct, and can I move between the 4.3 and 4.4 image-base
> versions, or do I necessarily have to start sort-of from scratch? Because
> the 4.3 host is based on el7 while 4.4 is on el8... but perhaps on an
> image-based system I have some flexibility?
> What version of Gluster will be applied? Are there incompatibilities with
> the current 4.3.9 one (version 6.8-1.el7)?
> - reboot host
> - exit global maintenance and wait for engine vm to come up correctly
> - enter global maintenance again
> - run the upgrade utility on the host
>
> Or do I need at least temporarily a new server to use as the 4.4. host, or
> what will be the expected path?
>
> Thanks,
> Gianluca
>
>
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZXXDOJB6Q3AQJ7GLLPDXVKG4EODA355Y/


[ovirt-users] Re: [OT] windows 10 qemu-kvm latest stable drivers

2020-04-28 Thread Sandro Bonazzola
On Wed, Apr 22, 2020 at 6:48 PM Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:

> Hello,
> I see that this link below, that should be the correct one for stable
> virtio drivers:
>
>
> https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
>
> points to virtio-win-0.1.171-1 released on May 2019.
> The same pointed by the repo
> https://fedorapeople.org/groups/virt/virtio-win/virtio-win.repo
> Under
>
> https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/
> there are many newer ones... Are they all, then, to be considered unstable?
>

adding +Gal Zaidman  on this.
Personally, I would recommend using latest version there.
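When comparing the archived builds, a version-aware sort picks the newest name correctly even across multi-digit components. A small sketch (the version numbers are illustrative, not a real directory listing):

```shell
# Version-sort a few build names and keep the newest one.
# 'sort -V' compares numeric components, so 0.1.185 > 0.1.173 > 0.1.171.
printf '%s\n' virtio-win-0.1.171 virtio-win-0.1.185 virtio-win-0.1.173 \
  | sort -V | tail -n1
# prints: virtio-win-0.1.185
```

A plain lexical sort would not be safe here once a component passes single digits.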



>
> Thanks,
>
> Gianluca
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P2P67OJ22DHT4E4HVM6A5J5M46TMCITG/


[ovirt-users] Re: Wrong CPU type recognized on a new Pentium G4560 Kaby Lake

2020-04-28 Thread Sandro Bonazzola
On Tue, Apr 28, 2020 at 10:09 AM  wrote:

> Hi,
> I installed an ovirt node based on latest 4.3.9 iso on this hardware:
>
> CPU Intel Pentium G4560
> Motherboard Asus P10S-i C232 Chipset
>
> Overall setup has no problem but I noticed that the CPU type is recognized
> as:
>
> Intel Westmere IBRS SSBD MDS Family
>

+Ryan Barry  can you please have a look at this?
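For reference, the CPU model the host exposes can be read from `virsh capabilities` on the node. An offline sketch that just shows the parsing against a captured snippet (the XML snippet below is hypothetical):

```shell
# On a real node: virsh capabilities | grep -A1 '<model'
# Offline: extract the model name from a captured capabilities fragment.
cap='<cpu><arch>x86_64</arch><model fallback="forbid">Westmere-IBRS</model></cpu>'
expr "$cap" : '.*<model[^>]*>\([^<]*\)</model>.*'
# prints: Westmere-IBRS
```

Comparing that value across the node and the notebooks would show which model the cluster is actually negotiating.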



>
> The overall system info are:
>
> OS Version:
> RHEL - 7 - 7.1908.0.el7.centos
> OS Description:
> oVirt Node 4.3.9
> Kernel Version:
> 3.10.0 - 1062.18.1.el7.x86_64
> KVM Version:
> 2.12.0 - 33.1.el7_7.4
> LIBVIRT Version:
> libvirt-4.5.0-23.el7_7.6
> VDSM Version:
> vdsm-4.30.43-1.el7
> SPICE Version:
> 0.14.0 - 7.el7
> GlusterFS Version:
> glusterfs-6.8-1.el7
> CEPH Version:
> librbd1-10.2.5-4.el7
> Open vSwitch Version:
> openvswitch-2.11.0-4.el7
> Kernel Features:
> PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
> VNC Encryption:
> Disabled
>
> That gives me some problems on the cluster, since I'm trying to join some
> notebooks to it (it's only a lab) and they are recognized as Haswell CPUs,
> making them too new to be joined.
>
> Any help on this?
>
> Thanks


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WV7UPT6TBOJKPO4FWRXQVQOO2R7BQU5N/


[ovirt-users] [ANN] oVirt 4.3.10 Second Release Candidate is now available for testing

2020-04-28 Thread Lev Veyde
The oVirt Project is pleased to announce the availability of oVirt 4.3.10
Second Release Candidate for testing as of April 28th, 2020.



This update is the tenth in a series of stabilization updates to the 4.3
series.



This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.7 or later (but < 8)

* CentOS Linux (or similar) 7.7 or later (but < 8)



This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 7.7 or later (but < 8)

* CentOS Linux (or similar) 7.7 or later (but < 8)

* oVirt Node 4.3 (available for x86_64 only)



See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.



Notes:

- oVirt Appliance is already available

- oVirt Node is already available[2]


Additional Resources:

* Read more about the oVirt 4.3.10 release highlights:
http://www.ovirt.org/release/4.3.10/

* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/



[1] http://www.ovirt.org/release/4.3.10/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/

-- 

Lev Veyde

Senior Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4A6CT52MMBAYUJ7M3VY73E5UWK5BTEJY/


[ovirt-users] Re: Info about openstack staging-ovirt driver connection not released

2020-04-28 Thread Gianluca Cecchi
On Tue, Apr 28, 2020 at 11:40 AM Gianluca Cecchi 
wrote:

> On Tue, Apr 28, 2020 at 11:18 AM Luca 'remix_tj' Lorenzetto <
> lorenzetto.l...@gmail.com> wrote:
>
>> Hello Gianluca,
>>
>> did you try contacting the RDO team, which is responsible for that
>> package? If it is an easy fix, they can commit (or help you commit)
>> directly in OpenStack.
>>
>> Luca
>>
>>
> I was thinking about it, but I am not clear on whether I have to use
> https://bugs.launchpad.net/tripleo
> or
>
> https://bugzilla.redhat.com/buglist.cgi?quicksearch=openstack-ironic-staging-drivers
>
> yum list of the package gives:
>
> Installed Packages
> openstack-ironic-staging-drivers.noarch
> 0.9.2-0.20190420093856.546ceca.el7  @delorean-queens
>
> and the repo contains:
>
> [delorean-queens]
> name=delorean-python-dracclient-f49840cfe014040134c1f9b6749acdc7e47d1c24
> baseurl=
> https://trunk.rdoproject.org/centos7-queens/f4/98/f49840cfe014040134c1f9b6749acdc7e47d1c24_424c8702
> enabled=1
> gpgcheck=0
> priority=1
>
> So I should use the Bugzilla one, I think, correct?
>
> Gianluca
>

In the meantime I posted here:
https://bugzilla.redhat.com/show_bug.cgi?id=1828757

as I see that the Stein stable package also has the problem:
openstack-ironic-staging-drivers-0.11.1-0.20191023075755.ab5bb1d.el7.noarch.rpm

Let's see if that is the correct place.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RBMGMKAA7Y6W5DP5OWIKUB3N2YY22CAR/


[ovirt-users] Re: Info about openstack staging-ovirt driver connection not released

2020-04-28 Thread Gianluca Cecchi
On Tue, Apr 28, 2020 at 11:18 AM Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> Hello Gianluca,
>
> did you try contacting the RDO team, which is responsible for that
> package? If it is an easy fix, they can commit (or help you commit)
> directly in OpenStack.
>
> Luca
>
>
I was thinking about it, but I am not clear on whether I have to use
https://bugs.launchpad.net/tripleo
or
https://bugzilla.redhat.com/buglist.cgi?quicksearch=openstack-ironic-staging-drivers

yum list of the package gives:

Installed Packages
openstack-ironic-staging-drivers.noarch
0.9.2-0.20190420093856.546ceca.el7  @delorean-queens

and the repo contains:

[delorean-queens]
name=delorean-python-dracclient-f49840cfe014040134c1f9b6749acdc7e47d1c24
baseurl=
https://trunk.rdoproject.org/centos7-queens/f4/98/f49840cfe014040134c1f9b6749acdc7e47d1c24_424c8702
enabled=1
gpgcheck=0
priority=1

So I should use the Bugzilla one, I think, correct?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KNY4QJS7NZ5RYNEVGPEVTVHGG6NOXD5L/


[ovirt-users] Re: Info about openstack staging-ovirt driver connection not released

2020-04-28 Thread Luca 'remix_tj' Lorenzetto
Hello Gianluca,

did you try contacting the RDO team, which is responsible for that
package? If it is an easy fix, they can commit (or help you commit)
directly in OpenStack.

Luca

On Mon, Apr 27, 2020 at 10:33 PM Gianluca Cecchi
 wrote:
>
> On Sun, Apr 26, 2020 at 1:19 PM Gianluca Cecchi  
> wrote:
>>
>> Hello,
>> I'm setting up an OpenStack Queens lab (to best match OSP 13) using oVirt
>> VMs as nodes.
>> At this time only the undercloud is configured, and 8 OpenStack nodes (VMs)
>> are set as available for provisioning.
>> I'm using the staging-ovirt driver on the director node in a similar way to
>> the vbmc one.
>> I see from the oVirt active user sessions page that every minute I have one
>> connection per node (in my case 8) for the designated user (in my case
>> ostackpm).
>> But it seems they are never released.
>> How can I check the problem?
>>
>> The director is a CentOS 7 server and the staging-ovirt driver is provided
>> by the package:
>>
>> [root@director ~]# rpm -q python-ovirt-engine-sdk4
>> python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64
>> [root@director ~]#
>>
>> I didn't configure the oVirt repo, but only installed the latest stable
>> version available for 4.3.9:
>>
>> wget 
>> https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/x86_64/python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64.rpm
>>  sudo yum localinstall python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64.rpm
>>
>> Anyone with experience on this?
>>
>> In the meantime, is there any way to use a command via the API to kill the
>> stale (I think) sessions?
>> Thanks,
>> Gianluca
>
>
> Anyone?
> It seems that the script involved in power management is 
> /usr/lib/python2.7/site-packages/ironic_staging_drivers/ovirt/ovirt.py, of 
> which you can find a copy here:
> https://drive.google.com/file/d/1pC1TXuuc0Vks2UBwlmCHGP4oULVzK1_s/view?usp=sharing
>
> It is part of package 
> openstack-ironic-staging-drivers-0.9.2-0.20190420093856.546ceca.el7.noarch 
> and it is missing the "connection.close()" part.
> Can anyone more experienced in Python tell me where it is best to put the
> connection close statement (and whether more than one is needed)?
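A minimal sketch of the pattern such a fix usually takes: open, use, and always close in a finally block. The names below are illustrative, not the driver's real helpers; the actual ovirt.py would close its ovirtsdk4.Connection the same way (the SDK is not imported here so the sketch stays self-contained).

```python
def run_with_connection(open_conn, action):
    """Open a connection, run `action` against it, and always close it.

    `open_conn`: zero-argument callable returning an object with a
    .close() method (in the real driver, an ovirtsdk4.Connection).
    """
    conn = open_conn()
    try:
        return action(conn)
    finally:
        # The missing piece in the driver: without this, every
        # power-management poll leaves a session open on the engine.
        conn.close()
```

In the driver this would wrap each power-state query, so the session is released even when the engine call raises.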
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZIM2ZGJZ5VWIH4QQCBPTAQ6GAHYNQLZQ/



-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OMFE3WBXYCZQMME2OUU6S5SDKY4L2CPU/


[ovirt-users] Wrong CPU type recognized on a new Pentium G4560 Kaby Lake

2020-04-28 Thread mleone87
Hi, 
I installed an oVirt Node based on the latest 4.3.9 ISO on this hardware:

CPU Intel Pentium G4560
Motherboard Asus P10S-i C232 Chipset

The overall setup has no problems, but I noticed that the CPU type is recognized as:

Intel Westmere IBRS SSBD MDS Family

The overall system info is:

OS Version:
RHEL - 7 - 7.1908.0.el7.centos
OS Description:
oVirt Node 4.3.9
Kernel Version:
3.10.0 - 1062.18.1.el7.x86_64
KVM Version:
2.12.0 - 33.1.el7_7.4
LIBVIRT Version:
libvirt-4.5.0-23.el7_7.6
VDSM Version:
vdsm-4.30.43-1.el7
SPICE Version:
0.14.0 - 7.el7
GlusterFS Version:
glusterfs-6.8-1.el7
CEPH Version:
librbd1-10.2.5-4.el7
Open vSwitch Version:
openvswitch-2.11.0-4.el7
Kernel Features:
PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
VNC Encryption:
Disabled

That gives me some problems on the cluster, since I'm trying to join some
notebooks to it (it's only a lab) and they are recognized as Haswell CPUs,
making them too new to be joined.

Any help on this?

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XC6UVUFBOYL2A5PR6KPR2YXYIIB3CHVX/