[ovirt-users] move 'ovirtmgmt' bridge to a bonded NIC team

2019-01-08 Thread Shawn Southern
We've recently added additional NICs to our oVirt nodes, and want to move the 
ovirtmgmt interface to one of the bonded interfaces, away from the single 
ethernet port currently used.  This is to provide redundant connectivity to the 
nodes.

I've not had any luck finding documentation on how to do this.  If we change it 
manually by editing files in /etc/sysconfig/network-scripts, VDSM simply 
changes everything back.

I'm just looking to be pointed in the right direction here.
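For the archives: VDSM owns the host network configuration, which is why the
ifcfg edits keep getting reverted; the change has to go through the engine
instead. It can be done in the Administration Portal (Hosts > Network
Interfaces > Setup Host Networks: create the bond and drag ovirtmgmt onto it),
or scripted against the API. A minimal, hedged sketch with the Python SDK
(ovirtsdk4) -- the engine URL, credentials, host and NIC names are all
illustrative placeholders:

# Hedged sketch: attach ovirtmgmt to a new mode-4 bond via the oVirt API.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=node1.example.com')[0]
host_service = hosts_service.host_service(host.id)

host_service.setup_networks(
    modified_bonds=[
        types.HostNic(
            name='bond0',
            bonding=types.Bonding(
                options=[types.Option(name='mode', value='4')],
                slaves=[types.HostNic(name='eth0'), types.HostNic(name='eth1')],
            ),
        ),
    ],
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(name='ovirtmgmt'),
            host_nic=types.HostNic(name='bond0'),
        ),
    ],
)
host_service.commit_net_config()  # persist so the change survives a reboot
connection.close()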

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SIWBA35BTSOZFTHZR4Z3HCCEWYXXLL44/


[ovirt-users] Shutdown VMs when Nobreak is on Battery

2019-01-08 Thread Vinícius Ferrão
Hello,

I would like to know whether oVirt supports shutting down VMs when a battery 
threshold on the UPS (nobreak) device is reached.

There are some fencing agents for APC devices, so I’m hoping this is supported.

If not, how are you guys doing this kind of thing? A separate device or VM in 
the datacenter to issue the shutdown commands?
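
For the archives: oVirt has no built-in battery-threshold integration that I'm
aware of; the APC fencing agents are for power-cycling hosts, not for reacting
to UPS state. The usual pattern is the second one you describe: a UPS monitor
(apcupsd or NUT) whose on-battery hook shuts VMs down through the engine API.
A minimal, hedged sketch with the Python SDK (ovirtsdk4) -- engine URL and
credentials are placeholders:

# Hedged sketch: called from a UPS "on battery" hook, e.g. apcupsd's
# /etc/apcupsd/onbattery script. Engine URL/credentials are illustrative.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
for vm in vms_service.list(search='status=up'):
    # Request a clean guest shutdown for every running VM.
    vms_service.vm_service(vm.id).shutdown()
connection.close()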

Thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U3UZVJHRTQ5VZV2HBZMA5QDBJZNVJJ6K/


[ovirt-users] Ovirt VDI Solution network issue

2019-01-08 Thread mdcarr
Hello, first time poster looking for help on setting up the network for a VDI 
deployment. A little background: I have the oVirt engine running on a VMware 
virtual machine and have added a physical host to serve up virtual machines 
running Windows 7 to our developers.  The host has a static IP address, and when 
I create a virtual machine I can see the default network is attached and up, but 
it does not receive an IP address. It does have a MAC, so I'm wondering if I need 
to have our network team assign an IP for that MAC? I will be creating around 
20 VMs that would get wiped after use, so assigning static IPs might cause 
issues. Am I missing something, or is there a better way to set up the LAN for 
this host? Thanks for any help!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YZTZ3IZVXMBWFAH44HVQXB4GWKPVOF7H/


[ovirt-users] Re: oVirt Storage questions

2019-01-08 Thread michael
Initially I used the Gluster wizard.  As my knowledge and understanding 
increased, I tried to "grow" the initial volumes, which caused new LVM thinpool 
issues.  So I went through manually and created the storage for oVirt usage.

I originally did just that with the separate addresses.  I want to say that 
worked, until I "reset" some of the Gluster bricks in the interface; when I did 
that, it changed the addresses from the InfiniBand IPs to the IPv4 IPs, which 
have 1/10th the bandwidth.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5IBZSPG6IHFEAQ4SVLWCSCJ7STI4GFYG/


[ovirt-users] Re: Unable to add storage domain

2019-01-08 Thread michael
When I created the VDO volume in Cockpit, all options were unchecked: enable 
compression, deduplication, and 512e emulation all defaulted to unchecked.
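
For reference, those Cockpit defaults should correspond roughly to a vdo
invocation like this (hedged sketch; the device name is illustrative):

vdo create --name=vdo1 --device=/dev/sdb \
    --compression=disabled --deduplication=disabled --emulate512=disabled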
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ESWI2TIGL7EWNHVO2NRJUTRWFPKGB6MV/


[ovirt-users] Re: Hosted Engine VM and Storage not showing up

2019-01-08 Thread Simone Tiraboschi
On Tue, Jan 8, 2019 at 5:31 PM Vinícius Ferrão 
wrote:

> Hello,
>
> On 8 Jan 2019, at 11:20, Simone Tiraboschi  wrote:
>
>
>
> On Mon, Jan 7, 2019 at 10:43 PM Vinícius Ferrão 
> wrote:
>
>> Simone,
>>
>> I have additional findings: Ansible was failing because I had defined the
>> option without-password on SSH root access. So it fails with an
>> authentication failure error during the deployment.
>>
>> After allowing root access over SSH the hosted engine deployment with
>> Ansible worked.
>>
>> Now I will check if everything else is working fine.
>>
>> Maybe I need to open a bug on Bugzilla on this issue?
>>
>
> Ok, from the logs I see that you set without-password and you correctly
> entered a public ssh key when requested.
> But then ansible failed to authenticate to the engine VM, as root, with
> that password.
> So, if you are sure that the corresponding private key was available in
> the right place and with the right permissions, please open a bug.
>
>
> Hello Simone, just to be sure. The private key was always on my personal
> computer. It was never on the oVirt Node.
>
> For years I’ve deployed oVirt this way and it worked as expected.
>
> So if the new behaviour demands a private key on the hypervisor this makes
> the deployment different.
>
> The purpose of the key and enabling root ssh without-password is to
> enforce the security of the hosted engine, right? Not the security between
> the hypervisor and hosted engine during the deployment phase. So the
> setting without-password should be set at the end of hosted engine
> deployment.
>
> If these assumptions are correct I will proceed with the ticket on Bugzilla.
>

Now the whole flow, including engine-setup on the engine VM to create the
DB and so on, is executed with Ansible, and this requires Ansible, executed
on the first host, to be able to authenticate to the engine VM over SSH.
Currently the setup configures the root password and/or the root SSH public
key on first boot with cloud-init, so this implicitly requires the user to
enable password authentication or to configure the host to be able to
access the engine VM with an SSH key.

What you are proposing requires the setup to inject a temporary key
generated on the fly and remove it at the end, or to configure
without-password only after the deployment.
It makes sense to me, but in my opinion it's more an RFE than a real bug.
Feel free to file it.
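
(A possible workaround until then, sketched and untested: deploy with root
login enabled and tighten sshd on the engine VM once the deployment has
finished.)

# On the engine VM, after deployment completes (hedged sketch):
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin without-password/' /etc/ssh/sshd_config
systemctl restart sshd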



>
> Thanks,
>
>
>
>
>>
>> Thanks,
>>
>> On 7 Jan 2019, at 15:22, Vinícius Ferrão 
>> wrote:
>>
>> Hello,
>>
>> On 7 Jan 2019, at 12:52, Simone Tiraboschi  wrote:
>>
>>
>>
>> On Mon, Jan 7, 2019 at 2:03 PM Vinícius Ferrão 
>> wrote:
>>
>>> Hello Simone,
>>>
>>> Sent from my iPhone
>>>
>>> On 7 Jan 2019, at 07:11, Simone Tiraboschi  wrote:
>>>
>>>
>>>
>>> On Sun, Jan 6, 2019 at 5:31 PM  wrote:
>>>
 Hello,

 I’ve a new oVirt installation using oVirt 4.2.7.1 Node and after
 deploying the hosted engine it does not show up on the interface even after
 adding the first storage.

 The Datacenter is up but the engine VM and the engine storage does not
 appear.

 I have the following message repeated constantly on /var/log/messages:

 Jan  4 20:17:30 ovirt1 journal: ovirt-ha-agent
 ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
 Please ensure you already added your first data domain for regular VMs

 What’s wrong? Am I doing something different?

>>>
>>> The import of external VM is broken in 4.2.7 as for
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1649615
>>> It will be fixed with 4.2.8.
>>>
>>> In the meantime I strongly suggest using the regular flow for
>>> hosted-engine deployment (simply skip the --noansible option) since only the
>>> vintage deprecated flow is affected by this issue.
>>>
>>>
>>>
>>> Thanks for pointing out the issue. I was unable to find this on Bugzilla by
>>> myself. The title isn’t helping either.
>>>
>>> But on the other hand, I only used the legacy mode because ansible mode
>>> fails.
>>>
>>
>> Can you please attach a log of the issue?
>>
>>
>> For sure, logs on the link:
>> http://www.if.ufrj.br/~ferrao/ovirt/issues/ansible-storage-bypass
>>
>> What happens is that Ansible just bypasses the storage configuration
>> questions:
>>
>> [ INFO  ] Stage: Environment packages setup
>> [ INFO  ] Stage: Programs detection
>> [ INFO  ] Stage: Environment setup
>> [ INFO  ] Stage: Environment customization
>>
>>   --== STORAGE CONFIGURATION ==--
>>
>>
>>   --== HOST NETWORK CONFIGURATION ==--
>>
>>   Please indicate a pingable gateway IP address [10.20.0.1]:
>> [ INFO  ] TASK [Gathering Facts]
>> [ INFO  ] ok: [localhost]
>> [ INFO  ] TASK [Detecting interface on existing management bridge]
>> [ INFO  ] skipping: [localhost]
>> [ INFO  ] TASK [Get all active network interfaces]
>> [ INFO  ] TASK [Filter bonds with bad naming]
>> [ INFO  ] 

[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Nir Soffer
On Tue, Jan 8, 2019 at 5:49 PM Ralf Schenk  wrote:
...

> multipaths {
>
> multipath {
> wwid 36001405a26254e2bfd34b179d6e98ba4
> alias    mpath-myhostname-disk1
> }
> }
>
...

> So you suggest to add
>
> "no_path_retry queue"
>
>
> to above config according to your statements in
> https://bugzilla.redhat.com/show_bug.cgi?id=1435335 ?
>

Yes
...

And yes, the disk gets listed and is also shown in the GUI. How can I filter
> this out? I think https://gerrit.ovirt.org/c/93301/ shows a way to do
> this. Will this be in 4.3?
>

Yes, it is available, but I'm not sure using 4.3 at this point is a good
idea. It would be safer to apply this small patch to 4.2.
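
For reference, backporting a Gerrit change onto a vdsm source checkout looks
roughly like this (hedged; the patchset suffix /1 is a guess, and on a
package-built host you would rebuild the RPM with the patch instead):

git clone https://gerrit.ovirt.org/vdsm && cd vdsm
git fetch https://gerrit.ovirt.org/vdsm refs/changes/01/93301/1
git cherry-pick FETCH_HEAD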
Nir

>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HYFKWQ4G35NWCSZT52NVPAHGRY2ULI4T/


[ovirt-users] Re: Hosted Engine VM and Storage not showing up

2019-01-08 Thread Vinícius Ferrão
Hello,

> On 8 Jan 2019, at 11:20, Simone Tiraboschi  wrote:
> 
> 
> 
> On Mon, Jan 7, 2019 at 10:43 PM Vinícius Ferrão wrote:
> Simone,
> 
> I have additional findings: Ansible was failing because I had defined the 
> option without-password on SSH root access. So it fails with an 
> authentication failure error during the deployment.
> 
> After allowing root access over SSH the hosted engine deployment with 
> Ansible worked.
> 
> Now I will check if everything else is working fine.
> 
> Maybe I need to open a bug on Bugzilla on this issue?
> 
> Ok, from the logs I see that you set without-password and you correctly 
> entered a public ssh key when requested.
> But then ansible failed to authenticate to the engine VM, as root, with that 
> password.
> So, if you are sure that the corresponding private key was available in the 
> right place and with the right permissions, please open a bug.

Hello Simone, just to be sure. The private key was always on my personal 
computer. It was never on the oVirt Node.

For years I’ve deployed oVirt this way and it worked as expected.

So if the new behaviour demands a private key on the hypervisor this makes the 
deployment different. 

The purpose of the key and enabling root ssh without-password is to enforce the 
security of the hosted engine, right? Not the security between the hypervisor 
and hosted engine during the deployment phase. So the setting without-password 
should be set at the end of hosted engine deployment.

If these assumptions are correct I will proceed with the ticket on Bugzilla.

Thanks, 

> 
>  
> 
> Thanks,
> 
>> On 7 Jan 2019, at 15:22, Vinícius Ferrão wrote:
>> 
>> Hello,
>> 
>>> On 7 Jan 2019, at 12:52, Simone Tiraboschi wrote:
>>> 
>>> 
>>> 
>>> On Mon, Jan 7, 2019 at 2:03 PM Vinícius Ferrão wrote:
>>> Hello Simone,
>>> 
>>> Sent from my iPhone
>>> 
>>> On 7 Jan 2019, at 07:11, Simone Tiraboschi wrote:
>>> 
 
 
 On Sun, Jan 6, 2019 at 5:31 PM  wrote:
 Hello,
 
 I’ve a new oVirt installation using oVirt 4.2.7.1 Node and after deploying 
 the hosted engine it does not show up on the interface even after adding 
 the first storage.
 
 The Datacenter is up but the engine VM and the engine storage does not 
 appear.
 
 I have the following message repeated constantly on /var/log/messages:
 
 Jan  4 20:17:30 ovirt1 journal: ovirt-ha-agent 
 ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR 
 Unable to identify the OVF_STORE volume, falling back to initial vm.conf. 
 Please ensure you already added your first data domain for regular VMs
 
 What’s wrong? Am I doing something different?
 
 The import of external VM is broken in 4.2.7 as for 
 https://bugzilla.redhat.com/show_bug.cgi?id=1649615 
 
 It will be fixed with 4.2.8.
 
 In the meantime I strongly suggest using the regular flow for 
 hosted-engine deployment (simply skip the --noansible option) since only the 
 vintage deprecated flow is affected by this issue.
  
>>> 
>>> Thanks for pointing out the issue. I was unable to find this on Bugzilla by 
>>> myself. The title isn’t helping either.
>>> 
>>> But on the other hand, I only used the legacy mode because ansible mode fails.
>>> 
>>> Can you please attach a log of the issue?
>> 
>> For sure, logs on the link:
>> http://www.if.ufrj.br/~ferrao/ovirt/issues/ansible-storage-bypass 
>> 
>> 
>> What happens is that Ansible just bypasses the storage configuration 
>> questions:
>> 
>> [ INFO  ] Stage: Environment packages setup
>> [ INFO  ] Stage: Programs detection
>> [ INFO  ] Stage: Environment setup
>> [ INFO  ] Stage: Environment customization
>>  
>>   --== STORAGE CONFIGURATION ==--
>>  
>>  
>>   --== HOST NETWORK CONFIGURATION ==--
>>  
>>   Please indicate a pingable gateway IP address [10.20.0.1]: 
>> [ INFO  ] TASK [Gathering Facts]
>> [ INFO  ] ok: [localhost]
>> [ INFO  ] TASK [Detecting interface on existing management bridge]
>> [ INFO  ] skipping: [localhost]
>> [ INFO  ] TASK [Get all active network interfaces]
>> [ INFO  ] TASK [Filter bonds with bad naming]
>> [ INFO  ] TASK [Generate output list]
>> 
>> 
>>>  
>>> 
>>> I’m not sure why it fails. I can try it again, but I can ask in advance: 
>>> the management network is bonded, is this an issue? I think I’ve read 
>>> something about this on this list but I’m unsure.
>>> 
>>> No, but you should set bond mode 1, 2, 3, or 4.
>>> Teaming is not supported.
>> 
>> Thanks, since I’m using 802.3ad (LACP) - mode 4, I think I’m good.
>> 
>>>  
>>> 
>>> Thanks,
>>> 

[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Nir Soffer
On Tue, Jan 8, 2019 at 6:11 PM Ralf Schenk  wrote:

> Hello,
>
> i see i have already
>
> devices {
> device {
> vendor "LIO-ORG"
> hardware_handler   "1 alua"
> features   "1 queue_if_no_path"
> path_grouping_policy   "failover"
> path_selector  "queue-length 0"
> failback   immediate
> path_checker   directio
> #path_checker   tur
> prio   alua
> prio_args  exclusive_pref_bit
> #fast_io_fail_tmo   25
> *no_path_retry  queue*
> }
> }
>
> Which should result in the same behaviour, correct?
>

Yes, but this is very bad for vdsm if vdsm tries to manage such devices.

It is better to put this configuration only for the boot device used by
the host, and not for any LIO-ORG device.


> On 08.01.2019 at 16:48, Ralf Schenk wrote:
>
> Hello,
>
> I manually renamed them via multipath.conf
>
> multipaths {
> multipath {
> wwid 36001405a26254e2bfd34b179d6e98ba4
> alias    mpath-myhostname-disk1
>
>
So here is a better place for

no_path_retry queue


> }
> }
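
Combined, the entry would look roughly like this (hedged sketch, merging the
alias block above with the queueing policy):

multipaths {
    multipath {
        wwid           36001405a26254e2bfd34b179d6e98ba4
        alias          mpath-myhostname-disk1
        no_path_retry  queue
    }
}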
>
> Before I configured multipath (I installed without it at first, before I knew
> about the "mpath" parameter in setup!) I had the problem of a read-only root a
> few times, but the hints and settings (for iscsid.conf) I found didn't help.
> That's why I had to tweak the dracut ramdisk to set up multipath and understand
> LVM activation and so on of ovirt-ng the hard way.
>
> So you suggest adding
>
> "no_path_retry queue"
>
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> --
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XRW2BRV3XKWWL2SSQGG6BZLSNT5BMWLO/


[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Ralf Schenk
Hello,

I see I already have

devices {
    device {
    vendor "LIO-ORG"
    hardware_handler   "1 alua"
    features   "1 queue_if_no_path"
    path_grouping_policy   "failover"
    path_selector  "queue-length 0"
    failback   immediate
    path_checker   directio
    #path_checker   tur
    prio   alua
    prio_args  exclusive_pref_bit
    #fast_io_fail_tmo   25
    *no_path_retry  queue*
    }
}

Which should result in the same behaviour, correct?

On 08.01.2019 at 16:48, Ralf Schenk wrote:
>
> Hello,
>
> I manually renamed them via multipath.conf
>
> multipaths {
>     multipath {
>     wwid 36001405a26254e2bfd34b179d6e98ba4
>     alias    mpath-myhostname-disk1
>     }
> }
>
> Before I configured multipath (I installed without it at first, before I knew
> about the "mpath" parameter in setup!) I had the problem of a read-only root a
> few times, but the hints and settings (for iscsid.conf) I found didn't
> help. That's why I had to tweak the dracut ramdisk to set up multipath and
> understand LVM activation and so on of ovirt-ng the hard way.
>
> So you suggest adding
>
> "no_path_retry queue"
>
-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EZIJNG5MHG7F7TGSSVBVMBMWUJCA6RAJ/


[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Ralf Schenk
Hello,

I manually renamed them via multipath.conf

multipaths {
    multipath {
    wwid 36001405a26254e2bfd34b179d6e98ba4
    alias    mpath-myhostname-disk1
    }
}

Before I configured multipath (I installed without it at first, before I knew
about the "mpath" parameter in setup!) I had the problem of a read-only root a
few times, but the hints and settings (for iscsid.conf) I found didn't
help. That's why I had to tweak the dracut ramdisk to set up multipath and
understand LVM activation and so on of ovirt-ng the hard way.

So you suggest adding

"no_path_retry queue"

to the above config according to your statements in
https://bugzilla.redhat.com/show_bug.cgi?id=1435335 ?

I cannot access https://bugzilla.redhat.com/show_bug.cgi?id=1436415

And yes, the disk gets listed and is also shown in the GUI. How can I filter
this out? I think https://gerrit.ovirt.org/c/93301/ shows a way to do
this. Will this be in 4.3?

So far thanks for your good hints.

[
    {
    "status": "used",
    "vendorID": "LIO-ORG",
    "GUID": "mpath-myhostname-disk1",
    "capacity": "53687091200",
    "fwrev": "4.0",
    "discard_zeroes_data": 0,
    "vgUUID": "",
    "pathlist": [
    {
    "initiatorname": "default",
    "connection": "172.16.1.3",
    "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
    "portal": "1",
    "user": "myhostname",
    "password": "l3tm31scs1-2018",
    "port": "3260"
    },
    {
    "initiatorname": "ovirtmgmt",
    "connection": "192.168.1.3",
    "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
    "portal": "1",
    "user": "myhostname",
    "password": "l3tm31scs1-2018",
    "port": "3260"
    },
    {
    "initiatorname": "ovirtmgmt",
    "connection": "192.168.1.3",
    "iqn": "iqn.2018-01.com.fqdn:storage01.myhostname-disk1",
    "portal": "1",
    "user": "myhostname",
    "password": "l3tm31scs1-2018",
    "port": "3260"
    }
    ],
    "pvsize": "",
    "discard_max_bytes": 4194304,
    "pathstatus": [
    {
    "capacity": "53687091200",
    "physdev": "sda",
    "type": "iSCSI",
    "state": "active",
    "lun": "0"
    },
    {
    "capacity": "53687091200",
    "physdev": "sdb",
    "type": "iSCSI",
    "state": "active",
    "lun": "0"
    },
    {
    "capacity": "53687091200",
    "physdev": "sdc",
    "type": "iSCSI",
    "state": "active",
    "lun": "0"
    }
    ],
    "devtype": "iSCSI",
    "physicalblocksize": "512",
    "pvUUID": "",
    "serial":
"SLIO-ORG_myhostname-disk_a26254e2-bfd3-4b17-9d6e-98ba4ab45902",
    "logicalblocksize": "512",
    "productID": "myhostname-disk"
    }
]

On 08.01.2019 at 16:19, Nir Soffer wrote:
> On Tue, Jan 8, 2019 at 4:57 PM Ralf Schenk wrote:
>
> Hello,
>
> I cannot tell if this is expected on a node-ng system. I worked
> hard to get it up and running like this. 2 Diskless Hosts (EPYC
> 2x16 Core, 256GB RAM) boot via ISCSI ibft from Gigabit Onboard,
> Initial-Ramdisk establishes multipathing and that's what I get (and
> want). So I've redundant connections (1x1GB, 2x10GB Ethernet as
> bond) to my storage (Ubuntu Box 1xEPYC 16 Core, 128Gig RAM
> currently 8 disks, 2xNvME SSD with ZFS exporting NFS 4.2 and
> targetcli-fb ISCSI targets). All disk images on NFS 4.2 and via
> ISCSI are thin-provisioned and the sparse-files grow and shrink
> when discarding in VM's/Host via fstrim.
>
> My exercises doing this also as UEFI boot were stopped by node-ng
> installer partitioning which refused to set up a UEFI FAT Boot
> partition on the already accessible ISCSI targets.. So systems do
> legacy BIOS boot now.
>
> This works happily even if I unplug one of the Ethernet-Cables.
>
> [root@myhostname ~]# multipath -ll
> mpath-myhostname-disk1 (36001405a26254e2bfd34b179d6e98ba4) dm-0
> LIO-ORG ,myhostname-disk
> size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> |-+- policy='queue-length 0' prio=50 status=active
> | `- 3:0:0:0 sdb 8:16 active ready running
> |-+- policy='queue-length 0' prio=50 status=enabled
> | `- 4:0:0:0 sdc 8:32 active ready running
> `-+- policy='queue-length 0' prio=50 status=enabled
>   `- 0:0:0:0 sda 8:0  active ready running
>
> See attached lsblk.
>
>
> Looks like the lvm filter suggested by vdsm-tool is correct.
>
> Try to configure it and see if your hosts boot correctly. If not 

[ovirt-users] Re: VM Portal is currently having some issues

2019-01-08 Thread Arnaud DEBEC
Update firefox to 52.7.0, problem solved.
Thank you!

--
Ninroot
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T5Z2PBBBSHK5J2XANKHQEDOGIQ243XQR/


[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Nir Soffer
On Tue, Jan 8, 2019 at 4:57 PM Ralf Schenk  wrote:

> Hello,
>
> I cannot tell if this is expected on a node-ng system. I worked hard to
> get it up and running like this. 2 Diskless Hosts (EPYC 2x16 Core, 256GB
> RAM) boot via ISCSI ibft from Gigabit Onboard, Initial-Ramdisk establishes
> multipathing and that's what I get (and want). So I've redundant connections
> (1x1GB, 2x10GB Ethernet as bond) to my storage (Ubuntu Box 1xEPYC 16 Core,
> 128Gig RAM currently 8 disks, 2xNvME SSD with ZFS exporting NFS 4.2 and
> targetcli-fb ISCSI targets). All disk images on NFS 4.2 and via ISCSI are
> thin-provisioned and the sparse-files grow and shrink when discarding in
> VM's/Host via fstrim.
>
> My exercises doing this also as UEFI boot were stopped by node-ng
> installer partitioning which refused to set up a UEFI FAT Boot partition on
> the already accessible ISCSI targets.. So systems do legacy BIOS boot now.
>
> This works happily even if I unplug one of the Ethernet-Cables.
>
> [root@myhostname ~]# multipath -ll
> mpath-myhostname-disk1 (36001405a26254e2bfd34b179d6e98ba4) dm-0 LIO-ORG
> ,myhostname-disk
> size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> |-+- policy='queue-length 0' prio=50 status=active
> | `- 3:0:0:0 sdb 8:16 active ready running
> |-+- policy='queue-length 0' prio=50 status=enabled
> | `- 4:0:0:0 sdc 8:32 active ready running
> `-+- policy='queue-length 0' prio=50 status=enabled
>   `- 0:0:0:0 sda 8:0  active ready running
>
> See attached lsblk.
>

Looks like the lvm filter suggested by vdsm-tool is correct.

Try to configure it and see if your hosts boot correctly. If not you will
have to change the filter to match your setup.
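
Answering "yes" should amount to writing the suggested line into the devices
section of /etc/lvm/lvm.conf, roughly:

# /etc/lvm/lvm.conf (hedged sketch; device path taken from the output above)
devices {
    filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]
}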

But a multipath device named "mpath-myhostname" is alarming. I would expect
to see the device as 36001405a26254e2bfd34b179d6e98ba4 in lsblk. Maybe this
is ok with the way your hosts are configured.

Can you also share your /etc/multipath.conf, and any files under
/etc/multipath/conf.d/?

Also check that vdsm does not report mpath-myhostname or
/dev/mapper/36001405a26254e2bfd34b179d6e98ba4 in the output of

vdsm-client Host getDeviceList

Finally, the devices used for booting the host should have a special, more
robust multipath configuration that will queue I/O forever instead of
failing. Otherwise your host root file system can become read-only if you
lose all paths to storage at the same time. The only way to recover from
this is to reboot the host.

See these bugs for more info on the needed setup:
https://bugzilla.redhat.com/show_bug.cgi?id=1436415
https://bugzilla.redhat.com/show_bug.cgi?id=1435335

To use a proper setup for your boot multipath, you need this patch, which is
not yet available in 4.2:
https://gerrit.ovirt.org/c/93301/

Nir



> On 08.01.2019 at 14:46, Nir Soffer wrote:
>
> On Tue, Jan 8, 2019 at 1:29 PM Ralf Schenk  wrote:
>
>> Hello,
>>
>> I'm running my Ovirt-Node-NG based Hosts off ISCSI root. That is what
>> "vdsm-tool config-lvm-filter" suggests. Is this correct ?
>>
>> [root@myhostname ~]# vdsm-tool config-lvm-filter
>> Analyzing host...
>> Found these mounted logical volumes on this host:
>>
>>   logical volume:  /dev/mapper/onn_myhostname--iscsi-home
>>   mountpoint:  /home
>>   devices: /dev/mapper/mpath-myhostname-disk1p2
>>
>>   logical volume:
>> /dev/mapper/onn_myhostname--iscsi-ovirt--node--ng--4.2.7.1--0.20181209.0+1
>>   mountpoint:  /
>>   devices: /dev/mapper/mpath-myhostname-disk1p2
>>
>>   logical volume:  /dev/mapper/onn_myhostname--iscsi-swap
>>   mountpoint:  [SWAP]
>>   devices: /dev/mapper/mpath-myhostname-disk1p2
>>
>>   logical volume:  /dev/mapper/onn_myhostname--iscsi-tmp
>>   mountpoint:  /tmp
>>   devices: /dev/mapper/mpath-myhostname-disk1p2
>>
>>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var
>>   mountpoint:  /var
>>   devices: /dev/mapper/mpath-myhostname-disk1p2
>>
>>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_crash
>>   mountpoint:  /var/crash
>>   devices: /dev/mapper/mpath-myhostname-disk1p2
>>
>>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log
>>   mountpoint:  /var/log
>>   devices: /dev/mapper/mpath-myhostname-disk1p2
>>
>>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log_audit
>>   mountpoint:  /var/log/audit
>>   devices: /dev/mapper/mpath-myhostname-disk1p2
>>
>> This is the recommended LVM filter for this host:
>>
>>   filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]
>>
>> This filter allows LVM to access the local devices used by the
>> hypervisor, but not shared storage owned by Vdsm. If you add a new
>> device to the volume group, you will need to edit the filter manually.
>>
>> Configure LVM filter? [yes,NO]
>>
>
Yuval, is /dev/mapper/mpath-myhostname-disk1p2 expected on a node system?
>
> Ralf, can you share the output of:
> lsblk
> multipath -ll
> multipathd show paths format "%d %P"
>
> Nir
>
>>
>>
>>
>> Am 05.01.2019 

[ovirt-users] Re: VM Portal is currently having some issues

2019-01-08 Thread Bohdan Iakymets
Ok, I see. So there are two ways to fix this: update your browser to at
least version 47, or try the latest master.

BI

On Tue, Jan 8, 2019 at 4:05 PM Arnaud DEBEC  wrote:

> Hi,
>
> Firefox 45.4.0
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IDOQPGF5UNPDUZZXRFDYGKH2N4CAEPUS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SWGXQISE2FPFROMQU274YPWNJRQ5UHVA/


[ovirt-users] Re: VM Portal is currently having some issues

2019-01-08 Thread Arnaud DEBEC
Hi,

Firefox 45.4.0
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IDOQPGF5UNPDUZZXRFDYGKH2N4CAEPUS/


[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Ralf Schenk
Hello,

I cannot tell if this is expected on a node-ng system. I worked hard to
get it up and running like this. 2 Diskless Hosts (EPYC 2x16 Core, 256GB
RAM) boot via ISCSI ibft from Gigabit Onboard, the Initial-Ramdisk
establishes multipathing and that's what I get (and want). So I've
redundant connections (1x1GB, 2x10GB Ethernet as bond) to my storage
(Ubuntu Box 1xEPYC 16 Core, 128Gig RAM currently 8 disks, 2xNvME SSD
with ZFS exporting NFS 4.2 and targetcli-fb ISCSI targets). All disk
images on NFS 4.2 and via ISCSI are thin-provisioned and the
sparse-files grow and shrink when discarding in VM's/Host via fstrim.

My exercises doing this also as UEFI boot were stopped by node-ng
installer partitioning which refused to set up a UEFI FAT Boot partition
on the already accessible ISCSI targets. So systems do legacy BIOS boot
now.

This works happily even if I unplug one of the Ethernet-Cables.

[root@myhostname ~]# multipath -ll
mpath-myhostname-disk1 (36001405a26254e2bfd34b179d6e98ba4) dm-0 LIO-ORG
,myhostname-disk
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 3:0:0:0 sdb 8:16 active ready running
|-+- policy='queue-length 0' prio=50 status=enabled
| `- 4:0:0:0 sdc 8:32 active ready running
`-+- policy='queue-length 0' prio=50 status=enabled
  `- 0:0:0:0 sda 8:0  active ready running

See attached lsblk.


On 08.01.2019 at 14:46, Nir Soffer wrote:
> On Tue, Jan 8, 2019 at 1:29 PM Ralf Schenk wrote:
>
> Hello,
>
> I'm running my Ovirt-Node-NG based Hosts off ISCSI root. That is
> what "vdsm-tool config-lvm-filter" suggests. Is this correct ?
>
> [root@myhostname ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-home
>   mountpoint:  /home
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume: 
> /dev/mapper/onn_myhostname--iscsi-ovirt--node--ng--4.2.7.1--0.20181209.0+1
>   mountpoint:  /
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-swap
>   mountpoint:  [SWAP]
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-tmp
>   mountpoint:  /tmp
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var
>   mountpoint:  /var
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_crash
>   mountpoint:  /var/crash
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log
>   mountpoint:  /var/log
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log_audit
>   mountpoint:  /var/log/audit
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
> This is the recommended LVM filter for this host:
>
>   filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> Configure LVM filter? [yes,NO]
>
>
> Yuval, is /dev/mapper/mpath-myhostname-disk1p2 expected on a node system?
>
> Ralf, can you share the output of:
> lsblk
> multipath -ll
> multipathd show paths format "%d %P"
>
> Nir
>
>
>
>
> On 05.01.2019 at 19:34, teh...@take3.ro wrote:
>> Hello Greg,
>>
>> this is what i was looking for.
>>
>> After running "vdsm-tool config-lvm-filter" on all hosts (and rebooting 
>> them) all PVs, VGs and LVs from ISCSI domain were not visible anymore to 
>> local LVM on the ovirt hosts.
>>
>> Additionally i made following tests:
>> - Cloning + running a VM on the ISCSI domain
>> - Detaching + (re-)attaching of the ISCSI domain
>> - Detaching, removing + (re-)import of the ISCSI domain 
>> - Creating new ISCSI domain (well, i needed to use "force operation" 
>> because creating on same ISCSI target)
>>
>> All tests were successful.
>>
>> As you wished I filed a bug: 
>>  
>> 
>> Thank you.
>>
>> Best regards,
>> Robert
>> ___
>> Users mailing list -- users@ovirt.org 
>> To unsubscribe send an email to users-le...@ovirt.org 
>> 
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> 

[ovirt-users] Removing SSL 3 from port 16514

2019-01-08 Thread John Nguyen
Hello,

I was hoping to remove SSL 3 from 16514/tcp.  I’m running oVirt
4.2.0.2-1el7.centos and that port accepts both TLS 1.2 and SSL 3.

Would editing libvirtd.conf and restarting libvirt on the hosts be the best
way?  I worry that it may be overwritten the next time vdsm gets updated.

#vi /etc/libvirt/libvirtd.conf
tls_priority="NORMAL:!VERS-SSL3.0"
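
(vdsm does manage libvirtd.conf -- it is rewritten by "vdsm-tool configure
--module libvirt" -- so re-checking the setting after updates seems prudent.
One way to verify the change from another machine is openssl s_client; note
that the -ssl3 option only exists if the local openssl build still includes
SSLv3 support. The host name below is a placeholder:)

openssl s_client -connect ovirt-host.example.com:16514 -ssl3 < /dev/null    # should now fail
openssl s_client -connect ovirt-host.example.com:16514 -tls1_2 < /dev/null  # should still work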


Any help would be appreciated,

Thanks

John
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JOAQ5Z5TT3MACPSX7DDKUFRJGE4SOS7D/


[ovirt-users] Re: Disk full

2019-01-08 Thread Sahina Bose
On Tue, Jan 8, 2019 at 6:37 PM  wrote:
>
> Hi Sahina,
>
> Still have the disk full; the idea is to delete the gluster volume and create 
> a new one.
> In the engine, when I try to put the gluster volume into maintenance it stays 
> locked and does not go to maintenance. Even when I try to destroy it, it is not 
> allowed because the operation is in progress.
> I did a gluster volume stop but I don't know if I can do a gluster volume 
> delete

You can delete the volume, if you do not need the data.
The other option is to delete the disks from the gluster volume mount point.
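
If you go the mount-point route, it would look roughly like this (hedged
sketch; paths and UUIDs are placeholders -- double-check the image UUID of
the orphaned disk before removing anything):

mount -t glusterfs gfs1.growtrade.pt:/gv0 /mnt/gv0
ls /mnt/gv0/*/images/          # locate the orphaned image directory
rm -rf /mnt/gv0/<sd_uuid>/images/<orphaned_image_uuid>
umount /mnt/gv0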

>
> Any help?
>
> Thanks
>
> José Ferradeira
>
> 
> From: supo...@logicworks.pt
> To: "Sahina Bose" 
> Cc: "users" 
> Sent: Thursday, December 20, 2018 12:25:08 PM
> Subject: [ovirt-users] Re: Disk full
>
> We moved the VM disk to the second gluster. On the ovirt-engine I cannot see 
> the old disk, only the disk attached to the VM on the second gluster.
> We keep having the errors concerning the disk full.
> Using CLI I can see the image on the first gluster volume. So ovirt-engine 
> was able to move the disk to the second volume but did not delete it from the 
> first volume.
>
> # gluster volume info gv0
>
> Volume Name: gv0
> Type: Distribute
> Volume ID: 4aaffd24-553b-4a85-8c9b-386b02b30b6f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: gfs1.growtrade.pt:/home/brick1
> Options Reconfigured:
> features.shard-block-size: 512MB
> network.ping-timeout: 30
> storage.owner-gid: 36
> storage.owner-uid: 36
> user.cifs: off
> features.shard: off
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: enable
> performance.low-prio-threads: 32
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
>
>
> Thanks
>
> 
> From: "Sahina Bose" 
> To: supo...@logicworks.pt, "Krutika Dhananjay" 
> Cc: "users" 
> Sent: Thursday, December 20, 2018 11:53:39 AM
> Subject: Re: [ovirt-users] Disk full
>
> Is it possible for you to delete the old disks from storage domain
> (you can use the ovirt-engine UI). Do you continue to see space used
> despite doing that?
> I see that you are on a much older version of gluster. Have you
> considered updating to 3.12?
>
> Please also provide output of "gluster volume info "
>
> On Thu, Dec 20, 2018 at 3:56 PM  wrote:
> >
> > Yes, I can see the image on the volume.
> > Gluster version:
> > glusterfs-client-xlators-3.8.12-1.el7.x86_64
> > glusterfs-cli-3.8.12-1.el7.x86_64
> > glusterfs-api-3.8.12-1.el7.x86_64
> > glusterfs-fuse-3.8.12-1.el7.x86_64
> > glusterfs-server-3.8.12-1.el7.x86_64
> > glusterfs-libs-3.8.12-1.el7.x86_64
> > glusterfs-3.8.12-1.el7.x86_64
> >
> >
> > Thanks
> >
> > José
> >
> > 
> > From: "Sahina Bose" 
> > To: supo...@logicworks.pt
> > Cc: "users" 
> > Sent: Wednesday, December 19, 2018 4:13:16 PM
> > Subject: Re: [ovirt-users] Disk full
> >
> > Do you see the image on the gluster volume mount? Can you provide the 
> > gluster volume options and version of gluster?
> >
> > On Wed, 19 Dec 2018 at 4:04 PM,  wrote:
> >>
> >> Hi,
> >>
> >> I have an all-in-one installation with 2 gluster volumes.
> >> The disk of one VM filled up the brick, which is a partition. That 
> >> partition has 0% free disk space.
> >> I moved the disk of that VM to the other gluster volume, the VM is working 
> >> with the disk on the other gluster volume.
> >> When I move the disk, it didn't delete it from the brick, the engine keeps 
> >> complaining that there is no more disk space on that volume.
> >> What can I do?
> >> Is there a way to prevent this in the future?
> >>
> >> Many thanks
> >>
> >> José
> >>
> >>
> >>
> >> --
> >> 
> >> Jose Ferradeira
> >> http://www.logicworks.pt
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct: 
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives: 
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XE37KCG4PHD3LBQG3NCPTE45ASF3IEMX/
> >
> >
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> 

[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Nir Soffer
On Tue, Jan 8, 2019 at 1:29 PM Ralf Schenk  wrote:

> Hello,
>
> I'm running my Ovirt-Node-NG based Hosts off ISCSI root. That is what
> "vdsm-tool config-lvm-filter" suggests. Is this correct ?
>
> [root@myhostname ~]# vdsm-tool config-lvm-filter
> Analyzing host...
> Found these mounted logical volumes on this host:
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-home
>   mountpoint:  /home
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:
> /dev/mapper/onn_myhostname--iscsi-ovirt--node--ng--4.2.7.1--0.20181209.0+1
>   mountpoint:  /
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-swap
>   mountpoint:  [SWAP]
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-tmp
>   mountpoint:  /tmp
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var
>   mountpoint:  /var
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_crash
>   mountpoint:  /var/crash
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log
>   mountpoint:  /var/log
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
>   logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log_audit
>   mountpoint:  /var/log/audit
>   devices: /dev/mapper/mpath-myhostname-disk1p2
>
> This is the recommended LVM filter for this host:
>
>   filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]
>
> This filter allows LVM to access the local devices used by the
> hypervisor, but not shared storage owned by Vdsm. If you add a new
> device to the volume group, you will need to edit the filter manually.
>
> Configure LVM filter? [yes,NO]
>

Yuval, is /dev/mapper/mpath-myhostname-disk1p2 expected on a node system?

Ralf, can you share the output of:
lsblk
multipath -ll
multipathd show paths format "%d %P"

Nir

>
>
>
> On 05.01.2019 at 19:34, teh...@take3.ro wrote:
>
> Hello Greg,
>
> this is what i was looking for.
>
> After running "vdsm-tool config-lvm-filter" on all hosts (and rebooting them) 
> all PVs, VGs and LVs from ISCSI domain were not visible anymore to local LVM 
> on the ovirt hosts.
>
> Additionally i made following tests:
> - Cloning + running a VM on the ISCSI domain
> - Detaching + (re-)attaching of the ISCSI domain
> - Detaching, removing + (re-)import of the ISCSI domain
> - Creating new ISCSI domain (well, i needed to use "force operation" because 
> creating on same ISCSI target)
>
> All tests were successful.
>
> As you wished I filed a bug: 
>  
> 
> Thank you.
>
> Best regards,
> Robert
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VD6WCL343YGB2JVBWPZG54IMSQZ5VZ7M/
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> --
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX3LO46HW6GERQXDRCLRYUUN2NLURQTK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4FVIIXS3D7OJORIRP32J2FFLFHQF5EER/


[ovirt-users] Re: Hosted Engine VM and Storage not showing up

2019-01-08 Thread Simone Tiraboschi
On Mon, Jan 7, 2019 at 10:43 PM Vinícius Ferrão 
wrote:

> Simone,
>
> I have additional findings: Ansible was failing because I had defined the
> option without-password on SSH root access. So it fails with an
> authentication failure error during the deployment.
>
> After allowing root access over SSH the hosted engine deployment with
> Ansible worked.
>
> Now I will check if everything else is working fine.
>
> Maybe I need to open a bug on Bugzilla on this issue?
>

Ok, from the logs I see that you set without-password and you correctly
entered a public ssh key when requested.
But then ansible failed to authenticate to the engine VM, as root, with
that password.
So, if you are sure that the corresponding private key was available in the
right place and with the right permissions, please open a bug.



>
> Thanks,
>
> On 7 Jan 2019, at 15:22, Vinícius Ferrão 
> wrote:
>
> Hello,
>
> On 7 Jan 2019, at 12:52, Simone Tiraboschi  wrote:
>
>
>
> On Mon, Jan 7, 2019 at 2:03 PM Vinícius Ferrão 
> wrote:
>
>> Hello Simone,
>>
>> Sent from my iPhone
>>
>> On 7 Jan 2019, at 07:11, Simone Tiraboschi  wrote:
>>
>>
>>
>> On Sun, Jan 6, 2019 at 5:31 PM  wrote:
>>
>>> Hello,
>>>
>>> I’ve a new oVirt installation using oVirt 4.2.7.1 Node and after
>>> deploying the hosted engine it does not show up on the interface even after
>>> adding the first storage.
>>>
>>> The Datacenter is up but the engine VM and the engine storage does not
>>> appear.
>>>
>>> I have the following message repeated constantly on /var/log/messages:
>>>
>>> Jan  4 20:17:30 ovirt1 journal: ovirt-ha-agent
>>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
>>> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
>>> Please ensure you already added your first data domain for regular VMs
>>>
>>> What’s wrong? Am I doing something different?
>>>
>>
>> The import of external VM is broken in 4.2.7 as for
>> https://bugzilla.redhat.com/show_bug.cgi?id=1649615
>> It will be fixed with 4.2.8.
>>
>> In the meantime I strongly suggest using the regular flow for
>> hosted-engine deployment (simply skip the --noansible option) since only the
>> vintage deprecated flow is affected by this issue.
>>
>>
>>
>> Thanks for pointing out the issue. I was unable to find this on Bugzilla by
>> myself. The title isn’t helping either.
>>
>> But on the other hand, I only used the legacy mode because ansible mode fails.
>>
>
> Can you please attach a log of the issue?
>
>
> For sure, logs on the link:
> http://www.if.ufrj.br/~ferrao/ovirt/issues/ansible-storage-bypass
>
> What happens is that Ansible just bypasses the storage configuration
> questions:
>
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
>
>   --== STORAGE CONFIGURATION ==--
>
>
>   --== HOST NETWORK CONFIGURATION ==--
>
>   Please indicate a pingable gateway IP address [10.20.0.1]:
> [ INFO  ] TASK [Gathering Facts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [Detecting interface on existing management bridge]
> [ INFO  ] skipping: [localhost]
> [ INFO  ] TASK [Get all active network interfaces]
> [ INFO  ] TASK [Filter bonds with bad naming]
> [ INFO  ] TASK [Generate output list]
>
>
>
>
>>
>> I’m not sure why it fails. I can try it again, but I can ask in advance:
>> the management network is bonded, is this an issue? I think I’ve read
>> something about this on this list but I’m unsure.
>>
>
> No, but you should set bond mode 1, 2, 3, or 4.
> Teaming is not supported.
>
>
> Thanks, since I’m using 802.3ad (LACP) - mode 4, I think I’m good.
>
>
>
>>
>> Thanks,
>>
>>
>>>
>>> Additional infos:
>>>
>>> [root@ovirt1 ~]# vdsm-tool list-nets
>>> ovirtmgmt (default route)
>>> storage
>>>
>>> [root@ovirt1 ~]# ip a | grep "inet "
>>>inet 127.0.0.1/8 scope host lo
>>>inet 10.20.0.101/24 brd 10.20.0.255 scope global dynamic ovirtmgmt
>>>inet 192.168.10.1/29 brd 192.168.10.7 scope global storage
>>>
>>> [root@ovirt1 ~]# mount | grep -i nfs
>>> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>>> 10.20.0.200:/mnt/pool0/ovirt/he on /rhev/data-center/mnt/10.20.0.
>>> 200:_mnt_pool0_ovirt_he type nfs4
>>> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.20.0.101,local_lock=none,addr=10.20.0.200)
>>>
>>> [root@ovirt1 ~]# hosted-engine --check-deployed
>>> Returns nothing!
>>>
>>> [root@ovirt1 ~]# hosted-engine --check-liveliness
>>> Hosted Engine is up!
>>>
>>> [root@ovirt1 ~]# hosted-engine --vm-status
>>>
>>> --== Host 1 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date  : True
>>> Hostname   : ovirt1.local.versatushpc.com.br
>>> Host ID: 1
>>> Engine status  : {"health": "good", "vm": "up",
>>> 

[ovirt-users] Re: Disk full

2019-01-08 Thread suporte
Hi Sahina, 

Still have the disk full; the idea is to delete the gluster volume and create a 
new one. 
In the engine, when I try to put the gluster volume into maintenance it stays 
locked and does not go to maintenance. Even when I try to destroy it, it is not 
allowed because the operation is in progress. 
I did a gluster volume stop but I don't know if I can do a gluster volume 
delete 

Any help? 

Thanks 

José Ferradeira 


From: supo...@logicworks.pt 
To: "Sahina Bose"  
Cc: "users"  
Sent: Thursday, December 20, 2018 12:25:08 PM 
Subject: [ovirt-users] Re: Disk full 

We moved the VM disk to the second gluster. On the ovirt-engine I cannot see 
the old disk, only the disk attached to the VM on the second gluster. 
We keep having the errors concerning the disk full. 
Using CLI I can see the image on the first gluster volume. So ovirt-engine was 
able to move the disk to the second volume but did not delete it from the first 
volume. 

# gluster volume info gv0 

Volume Name: gv0 
Type: Distribute 
Volume ID: 4aaffd24-553b-4a85-8c9b-386b02b30b6f 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: gfs1.growtrade.pt:/home/brick1 
Options Reconfigured: 
features.shard-block-size: 512MB 
network.ping-timeout: 30 
storage.owner-gid: 36 
storage.owner-uid: 36 
user.cifs: off 
features.shard: off 
cluster.shd-wait-qlength: 1 
cluster.shd-max-threads: 8 
cluster.locking-scheme: granular 
cluster.data-self-heal-algorithm: full 
cluster.server-quorum-type: server 
cluster.quorum-type: auto 
cluster.eager-lock: enable 
network.remote-dio: enable 
performance.low-prio-threads: 32 
performance.stat-prefetch: off 
performance.io-cache: off 
performance.read-ahead: off 
performance.quick-read: off 
transport.address-family: inet 
performance.readdir-ahead: on 
nfs.disable: on 


Thanks 


From: "Sahina Bose"  
To: supo...@logicworks.pt, "Krutika Dhananjay"  
Cc: "users"  
Sent: Thursday, December 20, 2018 11:53:39 AM 
Subject: Re: [ovirt-users] Disk full 

Is it possible for you to delete the old disks from storage domain 
(you can use the ovirt-engine UI). Do you continue to see space used 
despite doing that? 
I see that you are on a much older version of gluster. Have you 
considered updating to 3.12? 

Please also provide output of "gluster volume info " 

On Thu, Dec 20, 2018 at 3:56 PM  wrote: 
> 
> Yes, I can see the image on the volume. 
> Gluster version: 
> glusterfs-client-xlators-3.8.12-1.el7.x86_64 
> glusterfs-cli-3.8.12-1.el7.x86_64 
> glusterfs-api-3.8.12-1.el7.x86_64 
> glusterfs-fuse-3.8.12-1.el7.x86_64 
> glusterfs-server-3.8.12-1.el7.x86_64 
> glusterfs-libs-3.8.12-1.el7.x86_64 
> glusterfs-3.8.12-1.el7.x86_64 
> 
> 
> Thanks 
> 
> José 
> 
>  
> From: "Sahina Bose"  
> To: supo...@logicworks.pt 
> Cc: "users"  
> Sent: Wednesday, December 19, 2018 4:13:16 PM 
> Subject: Re: [ovirt-users] Disk full 
> 
> Do you see the image on the gluster volume mount? Can you provide the gluster 
> volume options and version of gluster? 
> 
> On Wed, 19 Dec 2018 at 4:04 PM,  wrote: 
>> 
>> Hi, 
>> 
>> I have an all-in-one installation with 2 gluster volumes. 
>> The disk of one VM filled up the brick, which is a partition. That partition 
>> has 0% free disk space. 
>> I moved the disk of that VM to the other gluster volume, the VM is working 
>> with the disk on the other gluster volume. 
>> When I move the disk, it didn't delete it from the brick, the engine keeps 
>> complaining that there is no more disk space on that volume. 
>> What can I do? 
>> Is there a way to prevent this in the future? 
>> 
>> Many thanks 
>> 
>> José 
>> 
>> 
>> 
>> -- 
>>  
>> Jose Ferradeira 
>> http://www.logicworks.pt 
>> ___ 
>> Users mailing list -- users@ovirt.org 
>> To unsubscribe send an email to users-le...@ovirt.org 
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/ 
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XE37KCG4PHD3LBQG3NCPTE45ASF3IEMX/
>>  
> 
> 

___ 
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org 
Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/ 
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XS6UCO7LD6XIXXFTVK2KJM7FD6X4TNT5/
 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: ISCSI Domain & LVM

2019-01-08 Thread Ralf Schenk
Hello,

I'm running my oVirt-Node-NG based hosts off ISCSI root. That is what
"vdsm-tool config-lvm-filter" suggests. Is this correct?

[root@myhostname ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/onn_myhostname--iscsi-home
  mountpoint:  /home
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume: 
/dev/mapper/onn_myhostname--iscsi-ovirt--node--ng--4.2.7.1--0.20181209.0+1
  mountpoint:  /
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-swap
  mountpoint:  [SWAP]
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-tmp
  mountpoint:  /tmp
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var
  mountpoint:  /var
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_crash
  mountpoint:  /var/crash
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log
  mountpoint:  /var/log
  devices: /dev/mapper/mpath-myhostname-disk1p2

  logical volume:  /dev/mapper/onn_myhostname--iscsi-var_log_audit
  mountpoint:  /var/log/audit
  devices: /dev/mapper/mpath-myhostname-disk1p2

This is the recommended LVM filter for this host:

  filter = [ "a|^/dev/mapper/mpath-myhostname-disk1p2$|", "r|.*|" ]

This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.

Configure LVM filter? [yes,NO]


On 05.01.2019 at 19:34, teh...@take3.ro wrote:
> Hello Greg,
>
> this is what i was looking for.
>
> After running "vdsm-tool config-lvm-filter" on all hosts (and rebooting them) 
> all PVs, VGs and LVs from ISCSI domain were not visible anymore to local LVM 
> on the ovirt hosts.
>
> Additionally i made following tests:
> - Cloning + running a VM on the ISCSI domain
> - Detaching + (re-)attaching of the ISCSI domain
> - Detaching, removing + (re-)import of the ISCSI domain 
> - Creating a new ISCSI domain (well, I needed to use "force operation" because 
> creating on the same ISCSI target)
>
> All tests were successful.
>
> As you wished, I filed a bug: 
> Thank you.
>
> Best regards,
> Robert
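
(A quick way to repeat that check on any host is to compare what LVM sees
with and without the filter; a sketch, where the permissive filter below is
only passed on the command line for inspection and is never written to disk:)

  # with the configured filter, only local PVs should be listed
  pvs
  # bypass the filter for one command to confirm the shared ISCSI LUNs
  # are still present underneath
  pvs --config 'devices { filter = [ "a|.*|" ] }'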
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VD6WCL343YGB2JVBWPZG54IMSQZ5VZ7M/
-- 


Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail r...@databay.de
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de

Registered office/District Court Aachen • HRB: 8437 • VAT ID: DE 210844202
Management Board: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Chairman of the Supervisory Board: Wilhelm Dohmen


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HX3LO46HW6GERQXDRCLRYUUN2NLURQTK/


[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-01-08 Thread Victor Toso
Hi,

On Tue, Jan 08, 2019 at 12:08:31PM +0200, Leo David wrote:
> Thank you very much, and sorry for being so lazy to search
> for that rpm by myself. Somehow, Fedora rpms slipped my
> mind.  Oh boy, it requires a lot of packages. Do you think
> it would be a good idea to temporarily install the Fedora repos, do
> the yum installation to get the dependencies too, and then
> disable the repo? I am trying not to break the oVirt Node
> installation.

The easiest path is to get the source from your current rpm,
apply the patch mentioned in the previous email, build, install and
test.

If that does not work, you can roll back. If it works, you can rethink
what is best.
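
For reference, a rough sketch of that path on an EL7 host; the version
strings are the ones from this thread and may differ on your node,
yumdownloader/yum-builddep come from yum-utils, and the patch file name
is made up here:

  yumdownloader --source spice-server
  rpm -ivh spice-server-0.14.0-2.el7_5.3.src.rpm
  yum-builddep -y ~/rpmbuild/SPECS/spice-server.spec
  cd ~/rpmbuild
  # fetch the upstream fix (the commit referenced later in this thread)
  curl -Lo SOURCES/streaming-fix.patch \
    https://gitlab.freedesktop.org/spice/spice/commit/ca4984570f425e87e92abe5f62f9687bb55c1e14.patch
  # add "Patch100: streaming-fix.patch" plus a matching "%patch100 -p1"
  # to SPECS/spice-server.spec, bump the Release tag, then:
  rpmbuild -ba SPECS/spice-server.spec
  yum localinstall RPMS/x86_64/spice-server-0.14.0-*.rpm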

Cheers,

>  yum localinstall spice-server-0.14.1-1.fc30.x86_64.rpm
> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
> package_upload, product-id, search-disabled-repos, subscription-manager,
> vdsmupgrade
> This system is not registered with an entitlement server. You can use
> subscription-manager to register.
> Examining spice-server-0.14.1-1.fc30.x86_64.rpm:
> spice-server-0.14.1-1.fc30.x86_64
> Marking spice-server-0.14.1-1.fc30.x86_64.rpm as an update to
> spice-server-0.14.0-2.el7_5.3.x86_64
> Resolving Dependencies
> --> Running transaction check
> ---> Package spice-server.x86_64 0:0.14.0-2.el7_5.3 will be updated
> ---> Package spice-server.x86_64 0:0.14.1-1.fc30 will be an update
> --> Processing Dependency: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit) for
> package: spice-server-0.14.1-1.fc30.x86_64
> Loading mirror speeds from cached hostfile
>  * epel: ftp.nluug.nl
>  * ovirt-4.2-epel: ftp.nluug.nl
> --> Processing Dependency: libssl.so.1.1(OPENSSL_1_1_0)(64bit) for package:
> spice-server-0.14.1-1.fc30.x86_64
> --> Processing Dependency: libcrypto.so.1.1()(64bit) for package:
> spice-server-0.14.1-1.fc30.x86_64
> --> Processing Dependency: libgstapp-1.0.so.0()(64bit) for package:
> spice-server-0.14.1-1.fc30.x86_64
> --> Processing Dependency: libgstbase-1.0.so.0()(64bit) for package:
> spice-server-0.14.1-1.fc30.x86_64
> --> Processing Dependency: libgstreamer-1.0.so.0()(64bit) for package:
> spice-server-0.14.1-1.fc30.x86_64
> --> Processing Dependency: libgstvideo-1.0.so.0()(64bit) for package:
> spice-server-0.14.1-1.fc30.x86_64
> --> Processing Dependency: liborc-0.4.so.0()(64bit) for package:
> spice-server-0.14.1-1.fc30.x86_64
> --> Processing Dependency: libssl.so.1.1()(64bit) for package:
> spice-server-0.14.1-1.fc30.x86_64
> --> Finished Dependency Resolution
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: libgstvideo-1.0.so.0()(64bit)
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: libgstbase-1.0.so.0()(64bit)
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: libgstreamer-1.0.so.0()(64bit)
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit)
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: liborc-0.4.so.0()(64bit)
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: libcrypto.so.1.1()(64bit)
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: libssl.so.1.1(OPENSSL_1_1_0)(64bit)
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: libgstapp-1.0.so.0()(64bit)
> Error: Package: spice-server-0.14.1-1.fc30.x86_64
> (/spice-server-0.14.1-1.fc30.x86_64)
>Requires: libssl.so.1.1()(64bit)
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
> Uploading Enabled Repositories Report
> Loaded plugins: fastestmirror, product-id, subscription-manager
> This system is not registered with an entitlement server. You can use
> subscription-manager to register.
> Cannot upload enabled repos report, is this client registered?
> 
> Thank you  !
> 
> Leo
> 
> On Tue, Jan 8, 2019 at 10:19 AM Victor Toso  wrote:
> 
> > Hi,
> >
> > On Mon, Jan 07, 2019 at 07:29:13PM +0200, Leo David wrote:
> > > Thank you very much Victor,
> > > Is there an rpm that I can easily install on all the nodes, or will I need
> > > to build that version from sources?
> >
> > You can fetch the release of 0.14.1 version at
> > https://gitlab.freedesktop.org/spice/spice/tags
> >
> > You can fetch an rpm from Fedora here
> > https://koji.fedoraproject.org/koji/buildinfo?buildID=1138474
> >
> > Cheers,
> >
> > > On Mon, Jan 7, 2019, 19:14 Victor Toso wrote:
> > > > Hi,
> > > >
> > > > On Mon, Jan 07, 2019 at 07:00:04PM +0200, Leo David wrote:
> > > > > Thank you very much !
> > > > >
> > > > > I have modified
> > 

[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-01-08 Thread Gianluca Cecchi
On Tue, Jan 8, 2019 at 11:11 AM Leo David  wrote:

> Thank you very much, and sorry for being so lazy to search for that rpm
> by myself. Somehow, Fedora rpms slipped my mind.
> Oh boy, it requires a lot of packages. Do you think it would be a good
> idea to temporarily install the Fedora repos, do the yum installation to get
> the dependencies too, and then disable the repo? I am trying not to break
> the oVirt Node installation.
>
>  yum localinstall spice-server-0.14.1-1.fc30.x86_64.rpm
>

Fedora 30 corresponds to rawhide at this moment... and the beta for Fedora
30 is expected in April...
So I wouldn't go with that kind of package right now.
The real question, in my opinion, is why the just-released (30 October,
actually) Fedora 29 has 0.14.0 and does not include 0.14.1 of spice-server,
which was released 7 months ago and is considered the stable one by the
Spice project.
So I would open an RFE against Fedora 29 and RHEL 7 (and so inherited by
CentOS too) to push 0.14.1 into their stable repos.

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7B7R46NKI6QVGBBQ7MZQAZ7BIELZKK23/


[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-01-08 Thread Leo David
Thank you very much, and sorry for being so lazy to search for that rpm
by myself. Somehow, Fedora rpms slipped my mind.
Oh boy, it requires a lot of packages. Do you think it would be a good idea
to temporarily install the Fedora repos, do the yum installation to get the
dependencies too, and then disable the repo? I am trying not to break the
oVirt Node installation.

 yum localinstall spice-server-0.14.1-1.fc30.x86_64.rpm
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
package_upload, product-id, search-disabled-repos, subscription-manager,
vdsmupgrade
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Examining spice-server-0.14.1-1.fc30.x86_64.rpm:
spice-server-0.14.1-1.fc30.x86_64
Marking spice-server-0.14.1-1.fc30.x86_64.rpm as an update to
spice-server-0.14.0-2.el7_5.3.x86_64
Resolving Dependencies
--> Running transaction check
---> Package spice-server.x86_64 0:0.14.0-2.el7_5.3 will be updated
---> Package spice-server.x86_64 0:0.14.1-1.fc30 will be an update
--> Processing Dependency: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit) for
package: spice-server-0.14.1-1.fc30.x86_64
Loading mirror speeds from cached hostfile
 * epel: ftp.nluug.nl
 * ovirt-4.2-epel: ftp.nluug.nl
--> Processing Dependency: libssl.so.1.1(OPENSSL_1_1_0)(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libcrypto.so.1.1()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libgstapp-1.0.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libgstbase-1.0.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libgstreamer-1.0.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libgstvideo-1.0.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: liborc-0.4.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libssl.so.1.1()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Finished Dependency Resolution
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libgstvideo-1.0.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libgstbase-1.0.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libgstreamer-1.0.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: liborc-0.4.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libcrypto.so.1.1()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libssl.so.1.1(OPENSSL_1_1_0)(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libgstapp-1.0.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libssl.so.1.1()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Cannot upload enabled repos report, is this client registered?

Thank you  !

Leo

On Tue, Jan 8, 2019 at 10:19 AM Victor Toso  wrote:

> Hi,
>
> On Mon, Jan 07, 2019 at 07:29:13PM +0200, Leo David wrote:
> > Thank you very much Victor,
> > Is there an rpm that I can easily install on all the nodes, or will I need
> > to build that version from sources?
>
> You can fetch the release of 0.14.1 version at
> https://gitlab.freedesktop.org/spice/spice/tags
>
> You can fetch an rpm from Fedora here
> https://koji.fedoraproject.org/koji/buildinfo?buildID=1138474
>
> Cheers,
>
> > On Mon, Jan 7, 2019, 19:14 Victor Toso wrote:
> > > Hi,
> > >
> > > On Mon, Jan 07, 2019 at 07:00:04PM +0200, Leo David wrote:
> > > > Thank you very much !
> > > >
> > > > I have modified
> > > > /etc/ovirt-engine/osinfo.conf.d/00-defaults.properties file
> > > > and added the last line
> > > >
> > > > # Windows10x64
> > > > os.windows_10x64.id.value = 27
> > > > os.windows_10x64.name.value = Windows 10 x64
> > > > os.windows_10x64.derivedFrom.value = windows_8x64
> > > > os.windows_10x64.productKey.value =
> > > > os.windows_10x64.resources.maximum.ram.value = 2097152
> > > > os.windows_10x64.cpu.unsupported.value = conroe, opteron_g1
> > > > os.windows_10x64.sysprepPath.value =
> > > > 

[ovirt-users] Re: migrating to Hosted Engine failed

2019-01-08 Thread Simone Tiraboschi
On Mon, Jan 7, 2019 at 9:27 AM maoz zadok  wrote:

> Hello,
> *oVirt Node: 4.2.7.1*
>
> I'm trying to migrate to a hosted engine on an existing cluster.
> I am following:
>
> https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment.html
> or:
>
> https://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-engine.html
>
>
> 
> In both cases, it waits for the guest to be up until failing with an error.
> I can see that it added some records to /etc/hosts and the guest is up. I
> was able to connect to the guest with ssh from the host that runs the
> hosted-engine deployment.
> After many retries, the host removes the guest and throws the error.
>
> Also, the host can no longer be activated, and it throws the following error
> when trying to activate it:
>
>> VDSM kvm7 command Get Host Capabilities failed: General SSLEngine problem
>
>
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts":
> [{"address": "kvm7.replaceddomain.com", "affinity_labels": [],
> "auto_numa_status": "unknown", "certificate": {"organization": "
> replaceddomain.com", "subject": "O=replaceddomain.com,CN=
> kvm7.replaceddomain.com"}, "cluster": {"href":
> "/ovirt-engine/api/clusters/49af9e16-1250-11e9-ba8c-00163e4e185e", "id":
> "49af9e16-1250-11e9-ba8c-00163e4e185e"}, "comment": "", "cpu": {"speed":
> 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices":
> [], "external_network_provider_configurations": [], "external_status":
> "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [],
> "href": "/ovirt-engine/api/hosts/18517a0d-f264-4fd7-a191-c4a1f035da94",
> "id": "18517a0d-f264-4fd7-a191-c4a1f035da94", "katello_errata": [],
> "kdump_status": "unknown", "ksm": {"enabled": false},
> "max_scheduling_memory": 0, "memory": 0, "name": "kvm7.replaceddomain.com",
> "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported":
> false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port":
> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false,
> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
> "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh":
> {"fingerprint": "SHA256:FL4Drh9DQxHFwyQQ9LXEyNjex62LBZ6c4+4O09LoTWg",
> "port": 22}, "statistics": [], "status": "install_failed",
> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [],
> "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node",
> "unmanaged_networks": [], "update_available": false}]}, "attempts": 120,
> "changed": false, "deprecations": [{"msg": "The 'ovirt_hosts_facts' module
> is being renamed 'ovirt_host_facts'", "version": 2.8}]}
>

The relevant piece is:

"status": "install_failed",

which means that something went wrong at host-deploy time.

[ INFO ] TASK [Fetch logs from the engine VM]
>

You should check host-deploy logs (collected by this task) to understand
why the engine failed to deploy the host.
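
(For reference, those logs normally end up on the engine VM, one file per
deployment attempt; a sketch, the timestamps in the file names will differ:)

  # on the engine VM
  ls -t /var/log/ovirt-engine/host-deploy/
  # open the newest bundle and look for the first failure
  grep -i error $(ls -t /var/log/ovirt-engine/host-deploy/* | head -1) | head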




>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FOVTUTUERECEMC6MQWKKUJ2N2O7IUD2B/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IODQXAKD3SI7MEKAQ7KC2DG2LTS7UPZP/


[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-01-08 Thread Victor Toso
Hi,

On Mon, Jan 07, 2019 at 07:29:13PM +0200, Leo David wrote:
> Thank you very much Victor,
> Is there an rpm that I can easily install on all the nodes, or will I need
> to build that version from sources?

You can fetch the release of 0.14.1 version at
https://gitlab.freedesktop.org/spice/spice/tags

You can fetch an rpm from Fedora here
https://koji.fedoraproject.org/koji/buildinfo?buildID=1138474

Cheers,

> On Mon, Jan 7, 2019, 19:14 Victor Toso wrote:
> > Hi,
> >
> > On Mon, Jan 07, 2019 at 07:00:04PM +0200, Leo David wrote:
> > > Thank you very much !
> > >
> > > I have modified /etc/ovirt-engine/osinfo.conf.d/00-defaults.properties
> > > file
> > > and added the last line
> > >
> > > # Windows10x64
> > > os.windows_10x64.id.value = 27
> > > os.windows_10x64.name.value = Windows 10 x64
> > > os.windows_10x64.derivedFrom.value = windows_8x64
> > > os.windows_10x64.productKey.value =
> > > os.windows_10x64.resources.maximum.ram.value = 2097152
> > > os.windows_10x64.cpu.unsupported.value = conroe, opteron_g1
> > > os.windows_10x64.sysprepPath.value =
> > > ${ENGINE_USR}/conf/sysprep/sysprep.w10x64
> > > os.windows_10x64.devices.display.vramMultiplier.value = 2
> > >
> > > The vm has the "Windows10x64" profile configured
> > >
> > > Restarted the ovirt-engine VM, powered on the Windows 10 VM. No
> > > difference... The console is very slow, almost unusable.
> > >
> > > Also, I tried to upgrade spice-server on the node, but it seems that there
> > > are no updates available. At the moment, the installed version is:
> > >  spice-server.x86_64   0.14.0-2.el7_5.3
> >
> > Just for reference, the patch in spice-server that should help is
> > from the mail thread
> >
> >
> > https://lists.freedesktop.org/archives/spice-devel/2018-June/044237.html
> >
> > Merged as
> >
> >
> > https://gitlab.freedesktop.org/spice/spice/commit/ca4984570f425e87e92abe5f62f9687bb55c1e14
> >
> > Looking at the repo with git tag --contains ca4984570f425e87e92
> > it shows v0.14.1.
> >
> > 0.14.0-2 probably does not contain that. Either update to 0.14.1
> > or backport the patch. It does require shutting down and starting the VM
> > again.
> >
> > > Any thoughts ?
> > >
> > > Thank you !
> >
> > I hope it helps ;)
> >
> > Cheers,
> >
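
(A side note on the osinfo change quoted above: the files in
/etc/ovirt-engine/osinfo.conf.d/ are read in name order with later files
overriding earlier ones, so a dedicated override file survives engine
upgrades better than editing 00-defaults.properties itself; a sketch, the
file name is just an example:)

  # /etc/ovirt-engine/osinfo.conf.d/99-vram.properties
  os.windows_10x64.devices.display.vramMultiplier.value = 2

  # restart the engine service so the change is picked up
  systemctl restart ovirt-engine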


signature.asc
Description: PGP signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UG7EJHF7W3DKEZZ7X4GXEJ4HSPBN3JA2/