[ovirt-users] Re: Nvidia Grid K2 and Ovirt VGPU

2019-01-25 Thread R A
Many Thanks for your reply.

Now I am trying to use GPU passthrough with the Grid K2, but it fails.

Can you help me, please?

https://lists.ovirt.org/archives/list/users@ovirt.org/message/7TW2DY3CSA35Y3LJTEACY3IRIUH57422/
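
A rough checklist for whole-GPU passthrough on the host, for anyone following along. The kernel parameter and the PCI vendor ID below are assumptions for an Intel host with an NVIDIA card; adjust for your hardware:

    # IOMMU must be enabled on the host kernel command line
    grep -o 'intel_iommu=on' /proc/cmdline || echo "add intel_iommu=on (amd_iommu=on on AMD) and reboot"
    # the whole GPU should then show up as a passable host device
    # (10de is the NVIDIA PCI vendor ID)
    vdsm-client Host hostdevListByCaps | grep -i -B2 -A2 '10de'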



From: femi adegoke
Sent: Thursday, January 17, 2019 15:47
To: Michal Skrivanek
Cc: R A ; users
Subject: Re: [ovirt-users] Re: Nvidia Grid K2 and Ovirt VGPU

nVidia GRID does NOT work for vGPU.
Let me say that again, GRID does not work for vGPU.
Ask me how I know (insert sarcasm emoji)
I have a long email thread of trying to get it configured with the help of a 
few RH folks.
It does NOT do vGPU.
You need a card like a Tesla P4.

On Jan 17 2019, at 6:39 am, Michal Skrivanek
<michal.skriva...@redhat.com> wrote:


On 11 Jan 2019, at 18:11, R A <jarhe...@hotmail.de>
wrote:

Hello Michal,

many thanks for your response.

Are you sure that there is no vGPU support? Please check lines below

Hi Reza,
it’s nvidia’s doc, you would have to talk to them if you have more questions.
All I understand from that doc (it’s just a little further down on the same page)
is that they explicitly say that K1 and K2 do not support vGPU. This might be a
licensing limitation, a temporary restriction, or anything else; I don’t know.

Thanks,
michal





and here:



The NVIDIA Grid K2 is GRID capable.

Many thanks!
Reza

From: Michal Skrivanek <michal.skriva...@redhat.com>
Sent: Friday, January 11, 2019 12:04
To: jarhe...@hotmail.de
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Nvidia Grid K2 and Ovirt VGPU

> On 10 Jan 2019, at 02:14, jarhe...@hotmail.de 
> wrote:
>
> Hello,
>
> Sorry for my bad English.
>
> I have just bought the NVIDIA Grid K2 and want to pass vGPUs through to
> several VMs, but I am not able to manage this.
> I need the drivers for guest and host but cannot download them. On the
> official Nvidia site there are only drivers for XenServer and VMware.
>
> This Nvidia page says that there is support for oVirt (RHEV) and the K2
> (https://docs.nvidia.com/grid/4.6/grid-vgpu-release-notes-red-hat-el-kvm/index.html
>  )

The page says K1 and K2 do not support vGPU, so you could only do host
device passthrough of the whole GPU.

>
> Can someone please tell me what is going on? Is there any possibility to run
> the Grid K2 in oVirt with vGPU (or host device passthrough)?
>
> Furthermore, if I do "vdsm-client Host hostdevListByCaps" on the host, I do
> not get any mdev markers like described
> here: https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_Host_for_vGPU_Installation.html

That's only for vGPU, which this card doesn't support, so you do not see any mdev entries.
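
For a card that does support vGPU, the mediated device types do show up; a quick check looks something like this (the sysfs layout is the standard mdev one and the PCI address is only a placeholder):

    # list mdev types exposed by the GPU, if any (PCI address is an example)
    ls /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types/ 2>/dev/null
    # or look for mdev entries in the host device listing
    vdsm-client Host hostdevListByCaps | grep -i mdev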

>
> Many Thanks
> Volrath
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to 
> users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FBIEV6OCU4VDKS4TX6LYX2ZQLEJFUINM/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGQNXYQ4GNG2DAC3VE7QHST3JMRBOFN7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDSXFUBYCJFPTXQQIEPUOMZLMWZWTLZ2/


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-01-25 Thread Douglas Duckworth
Yes, I do.  Gold crown indeed.

It's the "HostedEngine" as seen attached!


Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690


On Wed, Jan 23, 2019 at 12:02 PM Simone Tiraboschi
<stira...@redhat.com> wrote:


On Wed, Jan 23, 2019 at 5:51 PM Douglas Duckworth
<dod2...@med.cornell.edu> wrote:
Hi Simone

Can I get help with this issue?  Still cannot increase memory for Hosted Engine.

From the logs it seems that the engine is trying to hot-plug memory into the
engine VM, which is something that should not happen.
The engine should simply update the engine VM configuration in the OVF_STORE and
require a reboot of the engine VM.
Quick question, in the VM panel do you see a gold crown symbol on the Engine VM?
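
For reference, one common way to apply the new size once it is saved in the OVF_STORE is a controlled restart of the engine VM from one of the hosted-engine hosts. This is only a sketch; the global-maintenance wrapper is a precaution rather than a hard requirement:

    hosted-engine --set-maintenance --mode=global
    hosted-engine --vm-shutdown      # wait until the engine VM is down
    hosted-engine --vm-start
    hosted-engine --set-maintenance --mode=none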


Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690


On Thu, Jan 17, 2019 at 8:08 AM Douglas Duckworth
<dod2...@med.cornell.edu> wrote:
Sure, they're attached.  In "first attempt" the error seems to be:

2019-01-17 07:49:24,795-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-29) [680f82b3-7612-4d91-afdc-43937aa298a2] EVENT_ID: 
FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE(2,048), Failed to hot plug memory to VM 
HostedEngine. Amount of added memory (4000MiB) is not dividable by 256MiB.

Followed by:

2019-01-17 07:49:24,814-05 WARN  
[org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-29) [26f5f3ed] 
Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. 
Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:49:24,815-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] 
(default task-29) [26f5f3ed] Updating RNG device of VM HostedEngine 
(adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = 
VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', 
vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', 
specParams='[source=urandom]', address='', managed='true', plugged='true', 
readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', 
logicalName='null', hostDevice='null'}. New RNG device = 
VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', 
vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', 
specParams='[source=urandom]', address='', managed='true', plugged='true', 
readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', 
logicalName='null', hostDevice='null'}.

In "second attempt" I used values that are divisible by 256 MiB, so that error
is no longer present, though the same RNG error remains:

2019-01-17 07:56:59,795-05 INFO  
[org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) 
[7059a48f] START, SetAmountOfMemoryVDSCommand(HostName = 
ovirt-hv1.med.cornell.edu, 
Params:{hostId='cdd5ffda-95c7-4ffa-ae40-be66f1d15c30', 
vmId='adf14389-1563-4b1a-9af6-4b40370a825b', 
memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='7f7d97cc-c273-4033-af53-bc9033ea3abe',
 vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='memory', type='MEMORY', 
specParams='[node=0, size=2048]', address='', managed='true', plugged='true', 
readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', 
logicalName='null', hostDevice='null'}', minAllocatedMem='6144'}), log id: 
50873daa
2019-01-17 07:56:59,855-05 INFO  
[org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) 
[7059a48f] FINISH, SetAmountOfMemoryVDSCommand, log id: 50873daa
2019-01-17 07:56:59,862-05 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-22) [7059a48f] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory: changed the 
amount of memory on VM HostedEngine from 4096 to 4096
2019-01-17 07:56:59,881-05 WARN  
[org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-22) [28fd4c82] 
Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. 
Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:56:59,882-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] 
(default task-22) [28fd4c82] Updating RNG device of VM HostedEngine 
(adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = 
VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', 
vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', 
specParams='[source=urandom]', address='', managed='true', plugged='true', 
readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', 
logicalName='null', hostDevice='null'}. New RNG device = 

[ovirt-users] Re: Cluster upgrade with ansible only upgrades one host then stops

2019-01-25 Thread Greg Sheremeta
+Ondra Machacek  can you assist?

On Fri, Jan 25, 2019 at 2:33 PM Jayme  wrote:

> I have a three node HCI setup, running 4.2.7 and want to upgrade to
> 4.2.8.  When I use ansible to perform the host updates for some reason it
> fully updates one host then stops without error, it does not continue
> upgrading the remaining two hosts.  If I run it again it will proceed to
> upgrade the next host.  Is there something wrong with the ansible plays I
> am using or perhaps the command I'm using to run ansible needing to specify
> all hosts to run against.  I don't understand why it's not upgrading all
> hosts in one single run.  Here is the complete ansible output of the last
> run, in this example it fully updated and rebooted host0 with no errors but
> did not proceed to upgrade host1 or host2:
>
>
> $ cat ovirt-upgrade
> # ansible-playbook --ask-vault-pass upgrade.yml
> Vault password:
>
> PLAY [oVirt Cluster Upgrade]
> *
>
> TASK [oVirt.cluster-upgrade : set_fact]
> **
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Login to oVirt]
> 
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Get hosts]
> *
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Check if there are hosts to be updated]
> 
> skipping: [localhost]
>
> TASK [oVirt.cluster-upgrade : include_tasks]
> *
> included:
> /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/cluster_policy.yml for
> localhost
>
> TASK [oVirt.cluster-upgrade : Get cluster facts]
> *
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Get name of the original scheduling policy]
> 
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Remember the cluster scheduling policy]
> 
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Remember the cluster scheduling policy
> properties]
> *
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Get API facts]
> *
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Set in cluster upgrade policy]
> 

[ovirt-users] Re: Unable to get the proper console of vm

2019-01-25 Thread Shikhar Verma
Yes verified...

Shikhar Verma

On Fri, 25 Jan 2019, 20:52 Greg Sheremeta wrote:
> On Thu, Jan 24, 2019 at 9:04 PM Shikhar Verma  wrote:
>
>> Yes iso image is relevant.
>>
>
> By "relevant" do you mean that you verified it's not corrupt? The exact
> same iso file boots on real hardware or elsewhere? Can you try another
> distro or iso?
>
> How much memory does the vm have?
>
>
>>
>> Shikhar Verma
>>
>> On Thu, 24 Jan 2019, 13:59 Michal Skrivanek wrote:
>>
>>>
>>>
>>> > On 21 Jan 2019, at 15:54, Shikhar Verma  wrote:
>>> >
>>> > Hi,
>>> >
>>> > I have created the virtual machine from the oVirt manager, but when I
>>> try to get the console of the VM to do the installation, it only shows two
>>> lines. I have even tried Run Once with CD-ROM selected as first boot
>>> priority and the CentOS 7 ISO attached.
>>>
>>> is the iso alright? does it boot elsewhere? does your vm have enough ram?
>>>
>>> >
>>> > SeaBIOS (version 1.11.0-2.e17)
>>> > Machine UUID ---
>>> >
>>> > Also, from manager, newly launched vm is showing green..
>>> >
>>> > And from the host machine, it is showing this error
>>> >
>>> > Jan 21 19:23:24 servera libvirtd: 2019-01-21 13:53:24.286+: 12800:
>>> error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
>>> guest agent is not connected
>>>
>>> because it’s not booted yet. irrelevant.
>>>
>>> >
>>> > I am using the latest version of ovirt-engine & host as well.
>>> >
>>> > Please respond.
>>> >
>>> > Thanks
>>> > Shikhar
>>> > ___
>>> > Users mailing list -- users@ovirt.org
>>> > To unsubscribe send an email to users-le...@ovirt.org
>>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> > oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> > List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/52HAFOXSXLJRI47DB3JBM7HY3VXGC6CM/
>>>
>>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NLTWS2X6RVAD7TIEFF7K42AWKGNWVVTO/
>>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> 
>
> gsher...@redhat.comIRC: gshereme
> 
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L5RVUMA53F3DMV3ZXUQEFRRXL5GE5RDT/


[ovirt-users] Re: Cluster upgrade with ansible only upgrades one host then stops

2019-01-25 Thread Jayme
Also, here is my upgrade.yml playbook:

---
- name: oVirt Cluster Upgrade
  hosts: localhost
  connection: local
  gather_facts: false

  vars_files:
- engine_vars.yml
- passwords.yml

  roles:
- oVirt.cluster-upgrade


and hosts file:

[host_names]
host0
host1
host2

Prior to running Ansible, the oVirt engine was upgraded and rebooted, and all
three hosts were showing updates available.
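
A more verbose invocation can help show where the per-host loop stops. This is only a sketch; cluster_name is an assumption about the role variable (mine lives in engine_vars.yml), and -vvv just exposes the task-level detail:

    ansible-playbook --ask-vault-pass -vvv upgrade.yml \
        -e "cluster_name=Default" | tee upgrade-$(date +%F).log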

On Fri, Jan 25, 2019 at 3:26 PM Jayme  wrote:

> I have a three node HCI setup, running 4.2.7 and want to upgrade to
> 4.2.8.  When I use ansible to perform the host updates for some reason it
> fully updates one host then stops without error, it does not continue
> upgrading the remaining two hosts.  If I run it again it will proceed to
> upgrade the next host.  Is there something wrong with the ansible plays I
> am using or perhaps the command I'm using to run ansible needing to specify
> all hosts to run against.  I don't understand why it's not upgrading all
> hosts in one single run.  Here is the complete ansible output of the last
> run, in this example it fully updated and rebooted host0 with no errors but
> did not proceed to upgrade host1 or host2:
>
>
> $ cat ovirt-upgrade
> # ansible-playbook --ask-vault-pass upgrade.yml
> Vault password:
>
> PLAY [oVirt Cluster Upgrade]
> *
>
> TASK [oVirt.cluster-upgrade : set_fact]
> **
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Login to oVirt]
> 
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Get hosts]
> *
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Check if there are hosts to be updated]
> 
> skipping: [localhost]
>
> TASK [oVirt.cluster-upgrade : include_tasks]
> *
> included:
> /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/cluster_policy.yml for
> localhost
>
> TASK [oVirt.cluster-upgrade : Get cluster facts]
> *
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Get name of the original scheduling policy]
> 
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Remember the cluster scheduling policy]
> 
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Remember the cluster scheduling policy
> properties]
> *
> ok: [localhost]
>
> TASK [oVirt.cluster-upgrade : Get API facts]
> 

[ovirt-users] Cluster upgrade with ansible only upgrades one host then stops

2019-01-25 Thread Jayme
I have a three node HCI setup, running 4.2.7 and want to upgrade to 4.2.8.
When I use ansible to perform the host updates for some reason it fully
updates one host then stops without error, it does not continue upgrading
the remaining two hosts.  If I run it again it will proceed to upgrade the
next host.  Is there something wrong with the Ansible plays I am using, or
does the command I'm using to run Ansible need to specify all hosts
to run against?  I don't understand why it's not upgrading all hosts in one
single run.  Here is the complete ansible output of the last run, in this
example it fully updated and rebooted host0 with no errors but did not
proceed to upgrade host1 or host2:


$ cat ovirt-upgrade
# ansible-playbook --ask-vault-pass upgrade.yml
Vault password:

PLAY [oVirt Cluster Upgrade]
*

TASK [oVirt.cluster-upgrade : set_fact]
**
ok: [localhost]

TASK [oVirt.cluster-upgrade : Login to oVirt]

ok: [localhost]

TASK [oVirt.cluster-upgrade : Get hosts]
*
ok: [localhost]

TASK [oVirt.cluster-upgrade : Check if there are hosts to be updated]

skipping: [localhost]

TASK [oVirt.cluster-upgrade : include_tasks]
*
included:
/usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/cluster_policy.yml for
localhost

TASK [oVirt.cluster-upgrade : Get cluster facts]
*
ok: [localhost]

TASK [oVirt.cluster-upgrade : Get name of the original scheduling policy]

ok: [localhost]

TASK [oVirt.cluster-upgrade : Remember the cluster scheduling policy]

ok: [localhost]

TASK [oVirt.cluster-upgrade : Remember the cluster scheduling policy
properties]
*
ok: [localhost]

TASK [oVirt.cluster-upgrade : Get API facts]
*
ok: [localhost]

TASK [oVirt.cluster-upgrade : Set in cluster upgrade policy]
*
changed: [localhost]

TASK [oVirt.cluster-upgrade : Get list of VMs in cluster]

[ovirt-users] Re: Unable to detach/remove ISO DOMAIN

2019-01-25 Thread Strahil
In my case I couldn't create a new one with the same name, so I had to import it back. Glad it worked.
Best Regards,
Strahil Nikolov

On Jan 25, 2019 16:38, Martin Humaj wrote:

Hi Strahil. IT WORKED !!! Thanks a lot for your help. It is gone and I was able to finally create a new one. GOD BLESS YOU

engine=# select id from storage_domain_dynamic;
                  id
--------------------------------------
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74
 a294bbf5-67c5-4291-a6d2-9cfa53e1670b
 470aade4-967f-4adb-9ca2-cccbfa2d5dc9
 ab6f78d5-668f-4d58-9b76-58113bbf938a
 61045461-10ff-4f7a-b464-67198c4a6c27
(5 rows)

engine=# delete from storage_domain_dynamic where id ='61045461-10ff-4f7a-b464-67198c4a6c27';
DELETE 1
engine=# delete from storage_domain_static where id ='61045461-10ff-4f7a-b464-67198c4a6c27';
DELETE 1
engine=# select id, storage_name from storage_domain_static;
                  id                  |      storage_name
--------------------------------------+------------------------
 ab6f78d5-668f-4d58-9b76-58113bbf938a | hosted_storage
 a294bbf5-67c5-4291-a6d2-9cfa53e1670b | testLun1TB
 470aade4-967f-4adb-9ca2-cccbfa2d5dc9 | oVirt-V7000
 072fbaa1-08f3-4a40-9f34-a5ca22dd1d74 | ovirt-image-repository
(4 rows)

thank you so much
Martin

On Fri, Jan 25, 2019 at 2:15 PM Strahil Nikolov wrote:
Hi Martin,
I just tried to add a new storage with the same name - and it failed. Most probably some remnants were left.
Best Regards,
Strahil Nikolov





On Friday, January 25, 2019 at 11:04:01 GMT+2, Martin Humaj wrote:



Hi Strahil,
I have tried to use the same IP and NFS export to replace the original; it did not work properly.
If you can guide me on how to do it in the engine DB, I would appreciate it. This is a test system.
Thank you,
Martin

On Fri, Jan 25, 2019 at 9:56 AM Strahil wrote:
Can you create a temporary NFS server which can be accessed during the removal?
I have managed to edit the engine's DB to get rid of a cluster domain, but this is not recommended for production systems :)
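
For anyone else taking the manual-DB route: an engine backup beforehand is cheap insurance. A sketch (the file names are just examples):

    engine-backup --mode=backup --scope=all \
        --file=engine-backup-$(date +%F).tar.gz --log=engine-backup.log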


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PEWDNKJB6NDDLGBR5DKF47VMZQ77KIX4/


[ovirt-users] Re: Clearing asynchronous task Unknown

2019-01-25 Thread Nicholas Vaughan
Hi Nicolas,

We had a similar issue and it was caused by a stuck task in VDSM on the
host that was the SPM.

We found that VDSM tasks don't always show up in the oVirt GUI.  You can
check using 'vdsm-client Host getAllTasksStatuses' on the SPM host.

We could not manually cancel any of the stuck VDSM tasks or move the SPM to
another host.  The only solution we found was to migrate all the VM's off
that host and restart it.  Once the remaining hosts had contended to be the
new SPM, we gave the engine a restart too.
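
The checks boiled down to something like this on the SPM host (the Task verbs are from memory, so verify them against vdsm-client on your version):

    vdsm-client Host getAllTasksStatuses          # stuck tasks show up here
    vdsm-client Task getStatus taskID=<uuid>      # inspect a single task
    vdsm-client Task stop taskID=<uuid>           # this is the cancel step that would not work for us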

Hope that helps,
Nick


On Fri, 25 Jan 2019 at 12:02,  wrote:

> Hi,
>
> We're running oVirt 4.1.9 (I know there's a new version, we can't
> upgrade until [1] is implemented). The thing is that since some days
> we're having an event that floods our event list:
>
>Clearing asynchronous task Unknown that started at Tue Jan 22 14:33:17
> WET 2019
>
> The event shows up every minute. We tried restarting the ovirt-engine,
> but after some time it starts flooding again. No pending tasks in the
> task list.
>
> How can I check what is happening and how to solve it?
>
> Thanks.
>
>[1]: https://github.com/oVirt/ovirt-web-ui/issues/490
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQ35IGLYZBYGY7F5IKUOXFMRUOXD6BK7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YEHWZQOKSVGMUKNEZH4V5W5DFBWAW75B/


[ovirt-users] Re: Ovirt 4.2.8 allows to remove a gluster volume without detaching the storage domain

2019-01-25 Thread Kaustav Majumder
Hi,
I feel it's similar to this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1620198

Kaustav

On Fri, Jan 25, 2019, 9:18 PM Greg Sheremeta  wrote:

>
>
> On Fri, Jan 25, 2019 at 8:51 AM Strahil Nikolov 
> wrote:
>
>> Hey Community,
>>
>> where can I report this one ?
>>
>
> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>
> Thanks!
>
>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Thursday, January 24, 2019 at 19:25:37 GMT+2, Strahil Nikolov <
>> hunter86...@yahoo.com> wrote:
>>
>>
>> Hello Community,
>>
>> As I'm still experimenting with my oVirt lab, I have managed somehow to
>> remove my gluster volume ('gluster volume list' confirms it) without
>> detaching the storage domain.
>>
>> This sounds to me as bug, am I right ?
>>
>> Steps to reproduce:
>> 1. Create a replica 3 arbiter 1 gluster volume
>> 2. Create a storage domain of it
>> 3. Go to Volumes and select the name of the volume
>> 4. Press remove and confirm. The task fails, but the volume is now
>> gone in gluster.
>>
>> I guess , I have to do some cleanup in the DB in order to fix that.
>>
>> Best Regards,
>> Strahil Nikolov
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4B2U6XEK6XIXTF5SZEJWAGGX5ENGSS52/
>>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> 
>
> gsher...@redhat.comIRC: gshereme
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JJFB63RKSKBBGDKGLEBZLU54DIOAOL3Y/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5QJONAMQ4O4KTWE5EURBUPLU7XHHCB6O/


[ovirt-users] Re: Ovirt 4.2.8 allows to remove a gluster volume without detaching the storage domain

2019-01-25 Thread Greg Sheremeta
On Fri, Jan 25, 2019 at 8:51 AM Strahil Nikolov 
wrote:

> Hey Community,
>
> where can I report this one ?
>

https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

Thanks!


>
> Best Regards,
> Strahil Nikolov
>
> On Thursday, January 24, 2019 at 19:25:37 GMT+2, Strahil Nikolov <
> hunter86...@yahoo.com> wrote:
>
>
> Hello Community,
>
> As I'm still experimenting with my oVirt lab, I have managed somehow to
> remove my gluster volume ('gluster volume list' confirms it) without
> detaching the storage domain.
>
> This sounds to me as bug, am I right ?
>
> Steps to reproduce:
> 1. Create a replica 3 arbiter 1 gluster volume
> 2. Create a storage domain of it
> 3. Go to Volumes and select the name of the volume
> 4. Press remove and confirm. The task fails, but the volume is now gone
> in gluster.
>
> I guess , I have to do some cleanup in the DB in order to fix that.
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4B2U6XEK6XIXTF5SZEJWAGGX5ENGSS52/
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.comIRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JJFB63RKSKBBGDKGLEBZLU54DIOAOL3Y/


[ovirt-users] Re: engine-iso-uploader taking a REALLY long time

2019-01-25 Thread Greg Sheremeta
On Fri, Jan 25, 2019 at 9:14 AM  wrote:

> Thanks for the response.  The ISO's are sitting on a local hard drive on
> the same system where both the NFS share and engine reside with a 1Gbps
> link.  Not sure how that would factor.
>
> I was not aware ISO Domains are deprecated.  Is there updated
> documentation I can read to get up to speed on the new procedure?
>

https://www.ovirt.org/documentation/admin-guide/chap-Storage.html#storage-tasks
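
In short: upload the ISO as a disk to a regular data storage domain (Storage > Disks > Upload in the Administration Portal), or script it with the upload_disk.py example that ships with the Python SDK. A rough sketch; the path below is the usual location of the SDK examples, not something verified on your system:

    # edit the connection details at the top of your copy of the script, then run it
    ls /usr/share/doc/python-ovirt-engine-sdk4*/examples/upload_disk.py
    python /usr/share/doc/python-ovirt-engine-sdk4*/examples/upload_disk.py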


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ETZKTAJD4E4VQ77MKAC7WHCDEA5Y63XU/
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.comIRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CXS7BK5R6C2UIW4J3DQOFMA4KIJH3KQE/


[ovirt-users] Re: Unable to get the proper console of vm

2019-01-25 Thread Greg Sheremeta
On Thu, Jan 24, 2019 at 9:04 PM Shikhar Verma  wrote:

> Yes iso image is relevant.
>

By "relevant" do you mean that you verified it's not corrupt? The exact
same iso file boots on real hardware or elsewhere? Can you try another
distro or iso?

How much memory does the vm have?
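
For example, something along these lines would rule out a bad download (the file and checksum names follow the CentOS mirror layout and are only examples):

    sha256sum CentOS-7-x86_64-Minimal-1810.iso
    # compare the output against sha256sum.txt from the same mirror directory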


>
> Shikhar Verma
>
> On Thu, 24 Jan 2019, 13:59 Michal Skrivanek  wrote:
>
>>
>>
>> > On 21 Jan 2019, at 15:54, Shikhar Verma  wrote:
>> >
>> > Hi,
>> >
>> > I have created the virtual machine from the oVirt manager, but when I
>> try to get the console of the VM to do the installation, it only shows two
>> lines. I have even tried Run Once with CD-ROM selected as first boot
>> priority and the CentOS 7 ISO attached.
>>
>> is the iso alright? does it boot elsewhere? does your vm have enough ram?
>>
>> >
>> > SeaBIOS (version 1.11.0-2.e17)
>> > Machine UUID ---
>> >
>> > Also, from manager, newly launched vm is showing green..
>> >
>> > And from the host machine, it is showing this error
>> >
>> > Jan 21 19:23:24 servera libvirtd: 2019-01-21 13:53:24.286+: 12800:
>> error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU
>> guest agent is not connected
>>
>> because it’s not booted yet. irrelevant.
>>
>> >
>> > I am using the latest version of ovirt-engine & host as well.
>> >
>> > Please respond.
>> >
>> > Thanks
>> > Shikhar
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/52HAFOXSXLJRI47DB3JBM7HY3VXGC6CM/
>>
>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NLTWS2X6RVAD7TIEFF7K42AWKGNWVVTO/
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.comIRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LXOI6BXBZDJLHPDSWZJ7BVU5SLUDYD44/


[ovirt-users] Re: ovirt.org HTML Issue?

2019-01-25 Thread Greg Sheremeta
Thanks for reporting! I opened a bug:
https://github.com/oVirt/ovirt-site/issues/1876

Best wishes,
Greg


On Fri, Jan 25, 2019 at 7:48 AM  wrote:

>
> Hello all,
>
> I wasn't sure where to post this or email this (I promised I looked
> around), but the web page for the following web page in the manual appears
> to have HTML issues towards the end of the page:
>
> https://www.ovirt.org/documentation/admin-guide/chap-Pools.html
>
> Should I notify someone else that it ought to be fixed?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FLCQRPF3NOFSW734VNXXU2FMQMVRUHNL/
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.comIRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MPNTVLUXS3KH45ZQZB4KEEMD5Z6PFDAA/


[ovirt-users] Re: 4.3.0 rc2 cannot mount glusterfs volumes on ovirt node ng

2019-01-25 Thread Nir Soffer
On Fri, Jan 25, 2019 at 3:55 PM Nir Soffer  wrote:

> On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego  wrote:
>
>> Hi,
>>
>> We're having problems mounting the preexisting 3.12 glusterfs storage
>> domains in ovirt node ng 4.3.0 rc2.
>>
>> Getting
>>
>> There are no iptables blocks on the storage network, the IPs are
>> pingable both ways. I can telnet to the glusterfs ports and I see no
>> messages in the logs of the glusterfs servers.
>>
>> When I try the mount command manually it hangs for ever:
>>
>> /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.*
>> *.*.*.*:/sdd8 /mnt/temp
>>
>> I haven't submitted a bug yet
>>
>> from supervdsm.log
>>
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
>> call volumeInfo with (u'sdd8', u'*.*.*.*') {}
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
>> 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8
>> --xml (cwd None)
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:44:45,399::commands::219::root::(execCmd) FAILED:  = '';  = 1
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok
>> in the last 120 seconds (max pending: 2)
>>
>
> This looks like
> https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18
>
> We should see "ThreadedHandler is ok" every 60 seconds when using debug
> log level.
>
> Looks like your entire supervdsmd process was hung for 120 seconds.
>
>
>> MainProcess|jsonrpc/2::ERROR::2019-01-25
>> 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
>> Error in volumeInfo
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
>> 102, in wrapper
>> res = func(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529,
>> in volumeInfo
>> xmltree = _execGlusterXml(command)
>>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131,
>> in _execGlusterXml
>> return _getTree(rc, out, err)
>>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112,
>> in _getTree
>> raise ge.GlusterCmdExecFailedException(rc, out, err)
>> GlusterCmdExecFailedException: Command execution failed
>> error: E
>> r
>> r
>> o
>> r
>>
>> :
>>
>> R
>> e
>> q
>> u
>> e
>> s
>> t
>>
>> t
>> i
>> m
>> e
>> d
>>
>> o
>> u
>> t
>>
> Looks like side effect of
> https://gerrit.ovirt.org/c/94784/
>
> GlusterException assumes that it accepts a list of lines, but we started to
> raise
> strings. The class should be fixed to handle strings.
>

Fixed in https://gerrit.ovirt.org/c/97316/

I think we need this in 4.2.8.
Denis, please check.

>
>>
>> return code: 1
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
>> call mount with (> 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8',
>> u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype': u'glusterfs',
>> 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup':
>> 'vdsm-glusterfs'}
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
>> 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t
>> glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8
>> /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None)
>> MainProcess|jsonrpc/0::DEBUG::2019-01-25
>> 13:45:02,884::commands::219::root::(execCmd) FAILED:  = 'Running scope
>> as unit run-38676.scope.\nMount failed. Please check the log file for more
>> details.\n';  = 1
>> MainProcess|jsonrpc/0::ERROR::2019-01-25
>> 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
>> Error in mount
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
>> 102, in wrapper
>> res = func(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
>> 144, in mount
>> cgroup=cgroup)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
>> 277, in _mount
>> _runcmd(cmd)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
>> 305, in _runcmd
>> raise MountError(rc, b";".join((out, err)))
>> MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed.
>> Please check the log file for more details.\n')
>>
>
> The mount failure is probably related to glusterfs. There are glusterfs
> logs on the host that
> can give more info on this error.
>
>> MainProcess|jsonrpc/0::DEBUG::2019-01-25
>> 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
>> call volumeInfo with (u'ssd9', u'*.*.*.*') {}
>> MainProcess|jsonrpc/0::DEBUG::2019-01-25
>> 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
>> 0-63 /usr/sbin/gluster --mode=script volume 

[ovirt-users] Re: 4.3.0 rc2 cannot mount glusterfs volumes on ovirt node ng

2019-01-25 Thread Jorick Astrego

On 1/25/19 3:26 PM, Sahina Bose wrote:
>
>
> On Fri, Jan 25, 2019 at 7:36 PM Jorick Astrego wrote:
>
>>
>> The mount failure is probably related to glusterfs. There are
>> glusterfs logs on the host that 
>> can give more info on this error. 
> Oh duh, sorry forgot to check :-&
>
> "failed to fetch volume file"
>
>
> [2019-01-25 14:02:45.067440] I
> [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt:
> disconnected from remote-host: *.*.*.*
>
> Are you able to access this host, and the gluster ports open?
> Anything in the glusterd.log of the gluster server?
>
> Adding Sanju to help


>From the host:

Telnet *.*.*.14 24007
Trying *.*.*.14...
Connected to *.*.*.14.
Escape character is '^]'.

iptables rules on *.*.*.14:

Chain INPUT (policy ACCEPT)
target prot opt source   destination
ACCEPT all  --  anywhere anywhere state
RELATED,ESTABLISHED
ACCEPT icmp --  anywhere anywhere   
ACCEPT all  --  anywhere anywhere   
ACCEPT tcp  --  anywhere anywhere tcp
dpt:54321
ACCEPT tcp  --  anywhere anywhere tcp
dpt:54322
ACCEPT tcp  --  anywhere anywhere tcp
dpt:sunrpc
ACCEPT udp  --  anywhere anywhere udp
dpt:sunrpc
ACCEPT tcp  --  anywhere anywhere tcp
dpt:ssh
ACCEPT udp  --  anywhere anywhere udp
dpt:snmp
ACCEPT tcp  --  anywhere anywhere tcp
dpt:websm
ACCEPT tcp  --  anywhere anywhere tcp
dpt:24007
ACCEPT tcp  --  anywhere anywhere tcp
dpt:webcache
ACCEPT tcp  --  anywhere anywhere tcp
dpt:38465
ACCEPT tcp  --  anywhere anywhere tcp
dpt:38466
ACCEPT tcp  --  anywhere anywhere tcp
dpt:38467
ACCEPT tcp  --  anywhere anywhere tcp
dpt:nfs
ACCEPT tcp  --  anywhere anywhere tcp
dpt:38469
ACCEPT tcp  --  anywhere anywhere tcp
dpt:5666
ACCEPT tcp  --  anywhere anywhere tcp
dpt:39543
ACCEPT tcp  --  anywhere anywhere tcp
dpt:55863
ACCEPT tcp  --  anywhere anywhere tcp
dpt:38468
ACCEPT udp  --  anywhere anywhere udp
dpt:963
ACCEPT tcp  --  anywhere anywhere tcp
dpt:965
ACCEPT tcp  --  anywhere anywhere tcp
dpt:ctdb
ACCEPT tcp  --  anywhere anywhere tcp
dpt:netbios-ssn
ACCEPT tcp  --  anywhere anywhere tcp
dpt:microsoft-ds
ACCEPT tcp  --  anywhere anywhere tcp
dpts:24009:24108
ACCEPT tcp  --  anywhere anywhere tcp
dpts:49152:49251
ACCEPT tcp  --  anywhere anywhere tcp
dpts:49217:49316
ACCEPT tcp  --  anywhere anywhere tcp
dpt:zabbix-agent /* Zabbix agent */
REJECT all  --  anywhere anywhere
reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source   destination
REJECT all  --  anywhere anywhere
PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source   destination

And the glusterd.log has nothing else but:

[2019-01-25 11:02:32.617694] I [MSGID: 106499]
[glusterd-handler.c:4303:__glusterd_handle_status_volume]
0-management: Received status volume req for volume ssd6
[2019-01-25 11:02:32.623892] I [MSGID: 106499]
[glusterd-handler.c:4303:__glusterd_handle_status_volume]
0-management: Received status volume req for volume ssd9
[2019-01-25 11:03:31.847006] I [MSGID: 106488]
[glusterd-handler.c:1548:__glusterd_handle_cli_get_volume]
0-management: Received get vol req
[2019-01-25 11:04:33.023685] I [MSGID: 106499]
[glusterd-handler.c:4303:__glusterd_handle_status_volume]
0-management: Received status volume req for volume hdd2
[2019-01-25 11:04:33.030549] I [MSGID: 106499]
[glusterd-handler.c:4303:__glusterd_handle_status_volume]
0-management: Received status volume req for volume sdd8
[2019-01-25 11:04:33.037024] I [MSGID: 106499]
[glusterd-handler.c:4303:__glusterd_handle_status_volume]
0-management: Received status volume req for volume ssd3
[2019-01-25 11:04:33.043442] I 

[ovirt-users] Re: How to replace vMware infrastructure with oVirt

2019-01-25 Thread Derek Atkins
Hi,

"Mannish Kumar"  writes:

> Hi,
>
> I have two Esxi hosts managed by VMware vCenter Server. I want to
> create a similar infrastructure with oVirt. I know that oVirt is
> similar to VMware vCenter Server but not sure what to replace the Esxi
> hosts with in oVirt Environment.
>
> I am looking to build oVirt with Self-Hosted Engine.It would be great
> help if someone could help me to build this.

I migrated from the old vmware-server to oVirt a few years ago.  I
exported my VMs as OVA and then imported them into oVirt.  Some of them
imported immediately, some took several hours.  But this was all with
oVirt 4.0 and older versions of virt-v2v, so some of my issues may have
been fixed.
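
If it helps, the conversion itself was basically one virt-v2v invocation per exported VM. A sketch of that kind of conversion; treat the output options as assumptions and check the virt-v2v man page for your version:

    # convert an exported OVA and place it on an oVirt export storage domain
    virt-v2v -i ova /exports/myvm.ova -o rhv -os nfs.example.com:/ovirt-export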

I would recommend you build a new oVirt infra first, migrate your VMs,
and then, if you want, you can repurpose your existing hardware for
additional nodes.

-derek
-- 
   Derek Atkins 617-623-3745
   de...@ihtfp.com www.ihtfp.com
   Computer and Internet Security Consultant
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7KRVXC6O75VD56E7LYZD6GMS2M2OIIHL/


[ovirt-users] Re: 4.3.0 rc2 cannot mount glusterfs volumes on ovirt node ng

2019-01-25 Thread Sahina Bose
On Fri, Jan 25, 2019 at 7:36 PM Jorick Astrego  wrote:

>
> The mount failure is probably related to glusterfs. There are glusterfs
> logs on the host that
> can give more info on this error.
>
> Oh duh, sorry forgot to check :-&
>
> "failed to fetch volume file"
>

[2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify]
0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*

Are you able to access this host, and the gluster ports open?
Anything in the glusterd.log of the gluster server?

Adding Sanju to help
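
Quick checks from the oVirt node that usually answer both questions (24007 is the default glusterd management port; host and volume names are placeholders):

    ping -c3 <gluster-server>
    nc -zv <gluster-server> 24007
    # the same query supervdsm runs, straight from the CLI:
    gluster --remote-host=<gluster-server> volume info <volname>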

[2019-01-25 13:47:03.560677] I [MSGID: 100030] [glusterfsd.c:2691:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.2
> (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=*.*.*.*
> --volfile-server=*.*.*.* --volfile-server=*.*.*.* --volfile-id=ssd5
> /rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5)
> [2019-01-25 13:47:03.571819] I [MSGID: 101190]
> [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*
> [2019-01-25 14:02:45.067500] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify]
> 0-glusterfsd-mgmt: connecting to next volfile server *.*.*.*
> [2019-01-25 14:02:45.069678] E [rpc-clnt.c:346:saved_frames_unwind] (-->
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f2b0b7f4f1b] (-->
> /lib64/libgfrpc.so.0(+0xce11)[0x7f2b0b5bde11] (-->
> /lib64/libgfrpc.so.0(+0xcf2e)[0x7f2b0b5bdf2e] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f2b0b5bf531] (-->
> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f2b0b5c00d8] ) 0-glusterfs: forced
> unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at
> 2019-01-25 13:47:03.572632 (xid=0x2)
> [2019-01-25 14:02:45.069697] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk]
> 0-mgmt: failed to fetch volume file (key:ssd5)
> [2019-01-25 14:02:45.069725] W [glusterfsd.c:1481:cleanup_and_exit]
> (-->/lib64/libgfrpc.so.0(+0xce32) [0x7f2b0b5bde32]
> -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x841) [0x5651101b7231]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-:
> received signum (0), shutting down
> [2019-01-25 14:02:45.069749] I [fuse-bridge.c:5897:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'.
> [2019-01-25 14:02:45.078949] I [fuse-bridge.c:5902:fini] 0-fuse: Closing
> fuse connection to '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'.
> [2019-01-25 14:02:45.079035] W [glusterfsd.c:1481:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f2b0a656dd5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5651101afd45]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-:
> received signum (15), shutting down
>
> On 1/25/19 2:55 PM, Nir Soffer wrote:
>
> On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego  wrote:
>
>> Hi,
>>
>> We're having problems mounting the preexisting 3.12 glusterfs storage
>> domains in ovirt node ng 4.3.0 rc2.
>>
>> Getting
>>
>> There are no iptables blocks on the storage network, the IPs are pingable
>> both ways. I can telnet to the glusterfs ports and I see no messages in
>> messages in the logs of the glusterfs servers.
>>
>> When I try the mount command manually it hangs for ever:
>>
>> /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.*
>> *.*.*.*:/sdd8 /mnt/temp
>>
>> I haven't submitted a bug yet
>>
>> from supervdsm.log
>>
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
>> call volumeInfo with (u'sdd8', u'*.*.*.*') {}
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
>> 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8
>> --xml (cwd None)
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:44:45,399::commands::219::root::(execCmd) FAILED:  = '';  = 1
>> MainProcess|jsonrpc/2::DEBUG::2019-01-25
>> 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok
>> in the last 120 seconds (max pending: 2)
>>
>
> This looks like
> https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18
>
> We should see "ThreadedHandler is ok" every 60 seconds when using debug
> log level.
>
> Looks like your entire supervdsmd process was hung for 120 seconds.
>
>
>> MainProcess|jsonrpc/2::ERROR::2019-01-25
>> 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
>> Error in volumeInfo
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
>> 102, in wrapper
>> res = func(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529,
>> in volumeInfo
>> xmltree = _execGlusterXml(command)
>>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131,
>> in _execGlusterXml
>> return _getTree(rc, out, err)
>>   File 

[ovirt-users] Re: reinstallation information

2019-01-25 Thread nikkognt
Thanks Sandro for your suggestions.
I followed your instructions and everything works perfectly.
The only problem I found was the ISO domain, which was not working properly,
so I destroyed it, recreated it, and now it works.

One last question:
At the moment my hosts are installed with CentOS, not oVirt Node. In the future
I would like to reinstall them with the oVirt Node ISO.
Is that possible? Can I just put each host into maintenance and then reinstall it
one by one with oVirt Node, or must I remove it from the engine and then reinstall one by one?

Many thanks
Nikkognt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EBRER4REDS7JVDT2ZWQAQ4IOKVRETX75/




[ovirt-users] Re: 4.3.0 rc2 cannot mount glusterfs volumes on ovirt node ng

2019-01-25 Thread Jorick Astrego
>
> The mount failure is probably related to glusterfs. There are
> glusterfs logs on the host that 
> can give more info on this error. 
Oh duh, sorry forgot to check :-&

"failed to fetch volume file"

[2019-01-25 13:47:03.560677] I [MSGID: 100030] [glusterfsd.c:2691:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.2
(args: /usr/sbin/glusterfs --process-name fuse --volfile-server=*.*.*.*
--volfile-server=*.*.*.* --volfile-server=*.*.*.* --volfile-id=ssd5
/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5)
[2019-01-25 13:47:03.571819] I [MSGID: 101190]
[event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2019-01-25 14:02:45.067440] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify]
0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.*
[2019-01-25 14:02:45.067500] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify]
0-glusterfsd-mgmt: connecting to next volfile server *.*.*.*
[2019-01-25 14:02:45.069678] E [rpc-clnt.c:346:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f2b0b7f4f1b] (-->
/lib64/libgfrpc.so.0(+0xce11)[0x7f2b0b5bde11] (-->
/lib64/libgfrpc.so.0(+0xcf2e)[0x7f2b0b5bdf2e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f2b0b5bf531]
(--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f2b0b5c00d8] ) 0-glusterfs:
forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called
at 2019-01-25 13:47:03.572632 (xid=0x2)
[2019-01-25 14:02:45.069697] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk]
0-mgmt: failed to fetch volume file (key:ssd5)
[2019-01-25 14:02:45.069725] W [glusterfsd.c:1481:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(+0xce32) [0x7f2b0b5bde32]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x841) [0x5651101b7231]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-:
received signum (0), shutting down
[2019-01-25 14:02:45.069749] I [fuse-bridge.c:5897:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'.
[2019-01-25 14:02:45.078949] I [fuse-bridge.c:5902:fini] 0-fuse: Closing
fuse connection to '/rhev/data-center/mnt/glusterSD/*.*.*.*:ssd5'.
[2019-01-25 14:02:45.079035] W [glusterfsd.c:1481:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7dd5) [0x7f2b0a656dd5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5651101afd45]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5651101afbbb] ) 0-:
received signum (15), shutting down

On 1/25/19 2:55 PM, Nir Soffer wrote:
> On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego wrote:
>
> Hi,
>
> We're having problems mounting the preexisting 3.12 glusterfs
> storage domains in ovirt node ng 4.3.0 rc2.
>
> Getting
>
> There are no iptables blocks on the storage network, the IPs are
> pingable both ways. I can telnet to the glusterfs ports and I see
> no messages in the logs of the glusterfs servers.
>
> When I try the mount command manually it hangs for ever:
>
> /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp
>
> I haven't submitted a bug yet
>
> from supervdsm.log
>
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call volumeInfo with (u'sdd8', u'*.*.*.*') {}
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset
> --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info
> --remote-host=*.*.*.* sdd8 --xml (cwd None)
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,399::commands::219::root::(execCmd) FAILED:  = '';
>  = 1
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler
> is ok in the last 120 seconds (max pending: 2)
>
>
> This looks like
> https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18
>
> We should see "ThreadedHandler is ok" every 60 seconds when using
> debug log level.
>
> Looks like your entire supervdsmd process was hung for 120 seconds.
>  
>
> MainProcess|jsonrpc/2::ERROR::2019-01-25
> 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
> Error in volumeInfo
> Traceback (most recent call last):
>   File
> "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 102, in wrapper
>     res = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py",
> line 529, in volumeInfo
>     xmltree = _execGlusterXml(command)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py",
> line 131, in _execGlusterXml
>     return _getTree(rc, out, err)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py",
> line 112, in _getTree
>     raise ge.GlusterCmdExecFailedException(rc, out, err)
> GlusterCmdExecFailedException: Command execution failed
> error: E
> r
> r
> o
> r
>  
> :
>  
> R
> 

[ovirt-users] Re: 4.3.0 rc2 cannot mount glusterfs volumes on ovirt node ng

2019-01-25 Thread Nir Soffer
On Fri, Jan 25, 2019 at 3:18 PM Jorick Astrego  wrote:

> Hi,
>
> We're having problems mounting the preexisting 3.12 glusterfs storage
> domains in ovirt node ng 4.3.0 rc2.
>
> Getting
>
> There are no iptables blocks on the storage network, the IPs are pingable
> both ways. I can telnet to the glusterfs ports and I see no messages in
> the logs of the glusterfs servers.
>
> When I try the mount command manually it hangs for ever:
>
> /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.*
> *.*.*.*:/sdd8 /mnt/temp
>
> I haven't submitted a bug yet
>
> from supervdsm.log
>
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call volumeInfo with (u'sdd8', u'*.*.*.*') {}
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
> 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8
> --xml (cwd None)
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,399::commands::219::root::(execCmd) FAILED:  = '';  = 1
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok
> in the last 120 seconds (max pending: 2)
>

This looks like
https://bugzilla.redhat.com/show_bug.cgi?id=1666123#c18

We should see "ThreadedHandler is ok" every 60 seconds when using debug log
level.

Looks like your entire supervdsmd process was hung for 120 seconds.


> MainProcess|jsonrpc/2::ERROR::2019-01-25
> 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
> Error in volumeInfo
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 102, in wrapper
> res = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529,
> in volumeInfo
> xmltree = _execGlusterXml(command)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131,
> in _execGlusterXml
> return _getTree(rc, out, err)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112,
> in _getTree
> raise ge.GlusterCmdExecFailedException(rc, out, err)
> GlusterCmdExecFailedException: Command execution failed
> error: E
> r
> r
> o
> r
>
> :
>
> R
> e
> q
> u
> e
> s
> t
>
> t
> i
> m
> e
> d
>
> o
> u
> t
>
Looks like side effect of
https://gerrit.ovirt.org/c/94784/

GlusterException assumes that it accepts a list of lines, but we started to
raise
strings. The class should be fixed to handle strings.

>
>
> return code: 1
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call mount with ( 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8',
> u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype': u'glusterfs',
> 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup':
> 'vdsm-glusterfs'}
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
> 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t
> glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8
> /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None)
> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running scope
> as unit run-38676.scope.\nMount failed. Please check the log file for more
> details.\n'; <rc> = 1
> MainProcess|jsonrpc/0::ERROR::2019-01-25
> 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
> Error in mount
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 102, in wrapper
> res = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 144, in mount
> cgroup=cgroup)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 277,
> in _mount
> _runcmd(cmd)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305,
> in _runcmd
> raise MountError(rc, b";".join((out, err)))
> MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed.
> Please check the log file for more details.\n')
>

The mount failure is probably related to glusterfs. There are glusterfs
logs on the host that
can give more info on this error.
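
As a starting point, a minimal sketch (assuming the usual client log location
/var/log/glusterfs/ on the hypervisor; adjust if your setup differs) that
prints the last error-level lines from the most recently written log:

import glob
import os

# Find the most recently written glusterfs client log and show its last
# error-level (" E ") lines.
logs = glob.glob("/var/log/glusterfs/*.log")
if not logs:
    print("no glusterfs logs found")
else:
    latest = max(logs, key=os.path.getmtime)
    print("newest gluster client log:", latest)
    with open(latest) as f:
        errors = [line.rstrip() for line in f if " E " in line]
    for line in errors[-20:]:
        print(line)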

> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call volumeInfo with (u'ssd9', u'*.*.*.*') {}
> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
> 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* ssd9
> --xml (cwd None)
>
>
> from vdsm.log
>
> 2019-01-25 13:46:03,519+0100 WARN  (vdsm.Scheduler) [Executor] Worker
> blocked:  {u'connectionParams': [{u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': 

[ovirt-users] Re: engine-iso-uploader taking a REALLY long time

2019-01-25 Thread zachary . winter
Thanks for the response. The ISOs are sitting on a local hard drive on the
same system where both the NFS share and the engine reside, with a 1 Gbps link.
Not sure how that would factor in.

I was not aware ISO Domains are deprecated.  Is there updated documentation I 
can read to get up to speed on the new procedure?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ETZKTAJD4E4VQ77MKAC7WHCDEA5Y63XU/


[ovirt-users] Re: 4.3.0 rc2 cannot mount glusterfs volumes on ovirt node ng

2019-01-25 Thread Sandro Bonazzola
Adding some people to help debug the issue

On Fri, Jan 25, 2019 at 2:18 PM Jorick Astrego wrote:

> Hi,
>
> We're having problems mounting the preexisting 3.12 glusterfs storage
> domains in ovirt node ng 4.3.0 rc2.
>
> Getting
>
> There are no iptables blocks on the storage network, the IPs are pingable
> both ways. I can telnet to the glusterfs ports and I see no messages in
> the logs of the glusterfs servers.
>
> When I try the mount command manually it hangs forever:
>
> /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.*
> *.*.*.*:/sdd8 /mnt/temp
>
> I haven't submitted a bug yet
>
> from supervdsm.log
>
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call volumeInfo with (u'sdd8', u'*.*.*.*') {}
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
> 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* sdd8
> --xml (cwd None)
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = ''; <rc> = 1
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok in
> the last 120 seconds (max pending: 2)
> MainProcess|jsonrpc/2::ERROR::2019-01-25
> 13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
> Error in volumeInfo
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 102, in wrapper
> res = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529,
> in volumeInfo
> xmltree = _execGlusterXml(command)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131,
> in _execGlusterXml
> return _getTree(rc, out, err)
>   File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112,
> in _getTree
> raise ge.GlusterCmdExecFailedException(rc, out, err)
> GlusterCmdExecFailedException: Command execution failed
> error: Error : Request timed out
>
>
> return code: 1
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call mount with ( 0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8',
> u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype': u'glusterfs',
> 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*', 'cgroup':
> 'vdsm-glusterfs'}
> MainProcess|jsonrpc/2::DEBUG::2019-01-25
> 13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
> 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t
> glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8
> /rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None)
> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running scope
> as unit run-38676.scope.\nMount failed. Please check the log file for more
> details.\n'; <rc> = 1
> MainProcess|jsonrpc/0::ERROR::2019-01-25
> 13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
> Error in mount
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 102, in wrapper
> res = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
> 144, in mount
> cgroup=cgroup)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 277,
> in _mount
> _runcmd(cmd)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 305,
> in _runcmd
> raise MountError(rc, b";".join((out, err)))
> MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed.
> Please check the log file for more details.\n')
> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
> call volumeInfo with (u'ssd9', u'*.*.*.*') {}
> MainProcess|jsonrpc/0::DEBUG::2019-01-25
> 13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
> 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.* ssd9
> --xml (cwd None)
>
>
> from vdsm.log
>
> 2019-01-25 13:46:03,519+0100 WARN  (vdsm.Scheduler) [Executor] Worker
> blocked:  {u'connectionParams': [{u'mnt_options':
> u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4',
> u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
> u'vfs_type': u'glusterfs', u'password': '', u'port': u''},
> {u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
> u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5',
> u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
> u'vfs_type': u'glusterfs', u'password': '', u'port': u''},
> {u'mnt_options': 

[ovirt-users] Re: engine-iso-uploader taking a REALLY long time

2019-01-25 Thread Sandro Bonazzola
On Fri, Jan 25, 2019 at 1:50 PM  wrote:

> I am noticing that the engine-iso-uploader tool on oVirt 4.2.7 (running on
> CentOS 7.6.1810) takes a very long time to upload individual ISOs, usually
> on the order of hours for a single ISO.  Is this normal?
>

It may be normal if you're uploading over a very slow network connection;
otherwise it seems suspicious.
As a side note, ISO domains are deprecated; you should upload ISO images
directly to data domains from the web admin console.


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VGA4GV2Q2762633TXOX3L62XZFFLMSST/
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UCDNCPU4EXAH7WS5LJEZJZIRLMOGLPKJ/


[ovirt-users] Re: Ovirt 4.2.8 allows to remove a gluster volume without detaching the storage domain

2019-01-25 Thread Strahil Nikolov
Hey Community,
where can I report this one?
Best Regards,
Strahil Nikolov

On Thursday, 24 January 2019 at 19:25:37 GMT+2, Strahil Nikolov
wrote:
 
Hello Community,
As I'm still experimenting with my oVirt lab, I have somehow managed to remove
my gluster volume ('gluster volume list' confirms it) without detaching the
storage domain.
This sounds like a bug to me, am I right?
Steps to reproduce:
1. Create a replica 3 arbiter 1 gluster volume.
2. Create a storage domain on it.
3. Go to Volumes and select the name of the volume.
4. Press remove and confirm. The task fails, but the volume is now gone in gluster.
I guess I have to do some cleanup in the DB in order to fix that.
Best Regards,
Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4B2U6XEK6XIXTF5SZEJWAGGX5ENGSS52/


[ovirt-users] 4.3.0 rc2 cannot mount glusterfs volumes on ovirt node ng

2019-01-25 Thread Jorick Astrego
Hi,

We're having problems mounting the preexisting 3.12 glusterfs storage
domains in ovirt node ng 4.3.0 rc2.

Getting

There are no iptables blocks on the storage network, the IPs are
pingable both ways. I can telnet to the glusterfs ports and I see no
messages in the logs of the glusterfs servers.

When I try the mount command manually it hangs forever:

/usr/bin/mount -t glusterfs -o
backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp

I haven't submitted a bug yet

from supervdsm.log

MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
call volumeInfo with (u'sdd8', u'*.*.*.*') {}
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.*
sdd8 --xml (cwd None)
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok
in the last 120 seconds (max pending: 2)
MainProcess|jsonrpc/2::ERROR::2019-01-25
13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
Error in volumeInfo
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
102, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529,
in volumeInfo
    xmltree = _execGlusterXml(command)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131,
in _execGlusterXml
    return _getTree(rc, out, err)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112,
in _getTree
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Error : Request timed out


return code: 1
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
call mount with (, u'*.*.*.*:/sdd8',
u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype':
u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*',
'cgroup': 'vdsm-glusterfs'}
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8
/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None)
MainProcess|jsonrpc/0::DEBUG::2019-01-25
13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running
scope as unit run-38676.scope.\nMount failed. Please check the log file
for more details.\n'; <rc> = 1
MainProcess|jsonrpc/0::ERROR::2019-01-25
13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
Error in mount
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
102, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
144, in mount
    cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
277, in _mount
    _runcmd(cmd)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
305, in _runcmd
    raise MountError(rc, b";".join((out, err)))
MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed.
Please check the log file for more details.\n')
MainProcess|jsonrpc/0::DEBUG::2019-01-25
13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
call volumeInfo with (u'ssd9', u'*.*.*.*') {}
MainProcess|jsonrpc/0::DEBUG::2019-01-25
13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.*
ssd9 --xml (cwd None)


from vdsm.log

2019-01-25 13:46:03,519+0100 WARN  (vdsm.Scheduler) [Executor] Worker
blocked:  timeout=60,
duration=1260.00 at 0x7f9be815ca10> task#=98 at 0x7f9be83bb750>, traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
  self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
  self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
  self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line
195, in run
  ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
  self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in
_execute_task
  task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in
__call__
  self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
262, in __call__
  self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
305, in 

[ovirt-users] engine-iso-uploader taking a REALLY long time

2019-01-25 Thread zachary . winter
I am noticing that the engine-iso-uploader tool on oVirt 4.2.7 (running on
CentOS 7.6.1810) takes a very long time to upload individual ISOs, usually on
the order of hours for a single ISO.  Is this normal?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VGA4GV2Q2762633TXOX3L62XZFFLMSST/


[ovirt-users] ovirt.org HTML Issue?

2019-01-25 Thread zachary . winter

Hello all,

I wasn't sure where to post or email this (I promise I looked around), but the
following page in the manual appears to have HTML issues towards the end of
the page:

https://www.ovirt.org/documentation/admin-guide/chap-Pools.html

Should I notify someone else that it ought to be fixed?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FLCQRPF3NOFSW734VNXXU2FMQMVRUHNL/


[ovirt-users] Re: Ovirt snapshot issues

2019-01-25 Thread Alex K
On Thu, Jan 24, 2019 at 11:28 AM Elad Ben Aharon 
wrote:

> Thanks!
>
> +Fred Rolland  seems like the same issue as reported
> in https://bugzilla.redhat.com/show_bug.cgi?id=1555116
>
Seems to be related to time-out issues. I did not have any storage issues
that could have affected the snapshot procedure, although I am running a
replica 2 on a 1 Gbit network where the VM storage resides, which is not the
fastest and may be a factor. Could rebasing the backing chain to reflect the
engine state be a solution for this?
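
For what it's worth, before touching anything with qemu-img commit or rebase,
the full on-disk chain can be dumped read-only and compared with what the
engine shows. A minimal sketch, assuming it is run on a host that can see the
storage and that IMAGE_PATH (a hypothetical placeholder) points at the active
layer of the disk:

import json
import subprocess

IMAGE_PATH = "/path/to/active/volume"  # hypothetical placeholder

# Read-only: dump every layer of the qcow2 backing chain so it can be compared
# with the snapshots the engine still lists. Nothing is committed or rebased.
info = subprocess.run(
    ["qemu-img", "info", "--backing-chain", "--output=json", IMAGE_PATH],
    capture_output=True, text=True, check=True,
)
for layer in json.loads(info.stdout):
    print(layer["filename"], "->", layer.get("backing-filename", "(base)"))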


>
> 2019-01-24 10:12:08,240+02 ERROR
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default
> task-544) [416c625f-e57b-46b8-bf74-5b774191fada] Error during
> ValidateFailure.: java.lang.NullPointerExceptio
> n
>at org.ovirt.engine.core.bll.validator.storage.
> StorageDomainValidator.getTotalSizeForMerge(StorageDomainValidator.java:205)
> [bll.jar:]
>at org.ovirt.engine.core.bll.validator.storage.
> StorageDomainValidator.hasSpaceForMerge(StorageDomainValidator.java:241)
> [bll.jar:]
>at org.ovirt.engine.core.bll.validator.storage.
> MultipleStorageDomainsValidator.lambda$allDomainsHaveSpaceForMerge$6(
> MultipleStorageDomainsValidator.java:122) [bll.jar:]
>at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> [rt.jar:1.8.0_191]
>
>
>
> On Thu, Jan 24, 2019 at 10:25 AM Alex K  wrote:
>
>> When I get the error the engine.log  logs the attached
>> engine-partial.log.
>> At vdsm.log at SPM host I don't see any error generated.
>> Full logs also attached.
>>
>> Thanx,
>> Alex
>>
>>
>>
>>
>> On Wed, Jan 23, 2019 at 5:53 PM Elad Ben Aharon 
>> wrote:
>>
>>> Hi,
>>>
>>> Can you please provide engine.log and vdsm.log?
>>>
>>> On Wed, Jan 23, 2019 at 5:41 PM Alex K  wrote:
>>>
 Hi all,

 I have oVirt 4.2.7, self-hosted on top of gluster, with two servers.
 I have a specific VM which has encountered some snapshot issues.
 The engine lists 4 snapshots and when trying to delete one of them I
 get "General command validation failure".

 The VM was being backed up periodically by a python script which was
 creating a snapshot -> clone -> export -> delete clone -> delete snapshot.
 There were times when the VM was complaining about illegal snapshots
 following such backup procedures and I had to delete the illegal snapshot
 references from the engine DB (following some steps found online),
 otherwise I would not be able to start the VM if it was shut down. It seems,
 though, that this is not a clean process and it leaves the underlying image of
 the VM in an inconsistent state with regard to its snapshots; when
 checking the backing chain of the image file I get:

 *b46d8efe-885b-4a68-94ca-e8f437566bee* (active VM)* ->*
 *b7673dca-6e10-4a0f-9885-1c91b86616af ->*
 *4f636d91-a66c-4d68-8720-d2736a3765df ->*
 6826cb76-6930-4b53-a9f5-fdeb0e8012ac ->
 61eea475-1135-42f4-b8d1-da6112946bac ->
 *604d84c3-8d5f-4bb6-a2b5-0aea79104e43 ->*
 1e75898c-9790-4163-ad41-847cfe84db40 ->
 *cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43 ->*
 3f54c98e-07ca-4810-82d8-cbf3964c7ce5 (raw image)

 The bold ones are the ones shown in the engine GUI. The VM runs normally
 without issues.
 I was wondering if I could use qemu-img commit to consolidate and remove
 the snapshots that are no longer referenced by the engine. Any ideas from
 your side?

 Thanx,
 Alex
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/DDZXH5UG6QEH76A5EO4STZ4YV7RIQQ2I/

>>>
>>>
>>> --
>>>
>>> Elad Ben Aharon
>>>
>>> ASSOCIATE MANAGER, RHV storage QE
>>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6IJLQCVUHR6ZDNEMHL52PF7H54UADRWT/
>>
>
>
> --
>
> Elad Ben Aharon
>
> ASSOCIATE MANAGER, RHV storage QE
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FSLK64KM336WYGAL7RLPU2VWC46XB43U/


[ovirt-users] Clearing asynchronous task Unknown

2019-01-25 Thread nicolas

Hi,

We're running oVirt 4.1.9 (I know there's a new version, we can't
upgrade until [1] is implemented). The thing is that for some days now
we've been having an event that floods our event list:


  Clearing asynchronous task Unknown that started at Tue Jan 22 14:33:17 
WET 2019


The event shows up every minute. We tried restarting the ovirt-engine, 
but after some time it starts flooding again. No pending tasks in the 
task list.


How can I check what is happening and how to solve it?

Thanks.

  [1]: https://github.com/oVirt/ovirt-web-ui/issues/490
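
One place to look (a sketch under the assumption that the engine uses the
default local PostgreSQL database named "engine" and that the async_tasks
table is where the leftovers sit; names may differ on your installation):

import subprocess

# List whatever the engine still has recorded as asynchronous tasks.
# Run on the engine host as root; adjust DB name/user for your setup.
QUERY = "select * from async_tasks;"

subprocess.run(
    ["su", "-", "postgres", "-c", 'psql engine -c "%s"' % QUERY],
    check=True,
)
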
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQ35IGLYZBYGY7F5IKUOXFMRUOXD6BK7/


[ovirt-users] Re: [ANN] oVirt 4.3.0 Third Release Candidate is now available for testing

2019-01-25 Thread Sandro Bonazzola
On Fri, Jan 25, 2019 at 12:38 PM Jorick Astrego
wrote:

> Hi Sandro,
>
> Is there a list with the changes/fixes from rc1 > rc2 > rc3?
>
Builds:
rc1 content:
https://github.com/oVirt/releng-tools/blob/master/releases/ovirt-4.3.0_rc1.conf
rc2 content:
https://github.com/oVirt/releng-tools/blob/master/releases/ovirt-4.3.0_rc2.conf
rc3 content:
https://github.com/oVirt/releng-tools/blob/master/releases/ovirt-4.3.0_rc3.conf

Changes:
alpha -> rc1:  https://github.com/oVirt/ovirt-site/pull/1862
rc1 -> rc2: https://github.com/oVirt/ovirt-site/pull/1867
rc2 -> rc3: https://github.com/oVirt/ovirt-site/pull/1875




> Regards,
>
> Jorick Astrego
>
>
> On 1/24/19 2:10 PM, Sandro Bonazzola wrote:
>
> The oVirt Project is pleased to announce the availability of the Third
> Release Candidate of oVirt 4.3.0, as of January 24th, 2019
>
> This is pre-release software. This pre-release should not be used in
> production.
>
> Please take a look at our community page[1] to learn how to ask questions
> and interact with developers and users.
>
> All issues or bugs should be reported via oVirt Bugzilla[2].
>
> This update is the third release candidate of the 4.3.0 version.
>
> This release brings more than 130 enhancements and more than 450 bug fixes
> on top of oVirt 4.2 series.
>
> What's new in oVirt 4.3.0?
>
> * Q35 chipset, support booting using UEFI and Secure Boot
>
> * Skylake-server and AMD EPYC support
>
> * New smbus driver in windows guest tools
>
> * Improved support for v2v
>
> * OVA export / import of Templates
>
> * Full support for live migration of High Performance VMs
>
> * Microsoft Failover clustering support (SCSI Persistent Reservation) for
> Direct LUN disks
>
> * Hundreds of bug fixes on top of oVirt 4.2 series
>
> * New VM portal details page (see a preview here:
> https://imgur.com/a/ExINpci)
>
> * New Cluster upgrade UI
>
> * OVN security groups
>
> * IPv6 (static host addresses)
>
> * Support of Neutron from RDO OpenStack 13 as external network provider
>
> * Support of using Skydive from RDO OpenStack 14 as Tech Preview
>
> * Support for 3.6 and 4.0 data centers, clusters and hosts was removed
>
> * Now using PostgreSQL 10
>
> * New metrics support using rsyslog instead of fluentd
>
>
> This release is available now on x86_64 architecture for:
>
> * Red Hat Enterprise Linux 7.6 or later
>
> * CentOS Linux (or similar) 7.6 or later
>
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
>
> * Red Hat Enterprise Linux 7.6 or later
>
> * CentOS Linux (or similar) 7.6 or later
>
> * oVirt Node 4.3 (available for x86_64 only)
>
> Experimental tech preview for x86_64 and s390x architectures for Fedora 28
> is also included.
>
> See the release notes draft [3] for installation / upgrade instructions
> and a list of new features and bugs fixed.
>
> Notes:
>
> - oVirt Appliance is already available for both CentOS 7 and Fedora 28
> (tech preview).
>
> - oVirt Node NG  is already available for CentOS 7
>
> - oVirt Node NG for Fedora 28 (tech preview) is being delayed due to build
> issues with the build system.
>
> Additional Resources:
>
> * Read more about the oVirt 4.3.0 release highlights:
> http://www.ovirt.org/release/4.3.0/
>
> * Get more oVirt project updates on Twitter: https://twitter.com/ovirt
>
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
>
> [1] https://www.ovirt.org/community/
>
> [2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
>
> [3] http://www.ovirt.org/release/4.3.0/
>
> [4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CFX4K7K6WTVVVQJHP2XAAZQYSNMOFXYI/
>
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> *Netbulae Virtualization Experts *
> --
> Tel: 053 20 30 270 i...@netbulae.eu Staalsteden 4-3A KvK 08198180
> Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
> --
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VTG2COKN2TOW23ZKYFW3K2AWNRFYPMYS/
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

[ovirt-users] Re: [ANN] oVirt 4.3.0 Third Release Candidate is now available for testing

2019-01-25 Thread Jorick Astrego
Hi Sandro,

Is there a list with the changes/fixes from rc1 > rc2 > rc3?

Regards,

Jorick Astrego


On 1/24/19 2:10 PM, Sandro Bonazzola wrote:
>
> The oVirt Project is pleased to announce the availability of the Third
> Release Candidate of oVirt 4.3.0, as of January 24th, 2019
>
>
> This is pre-release software. This pre-release should not be used
> in production.
>
>
> Please take a look at our community page[1] to learn how to ask
> questions and interact with developers and users.
>
> All issues or bugs should be reported via oVirt Bugzilla[2].
>
>
> This update is the third release candidate of the 4.3.0 version.
>
> This release brings more than 130 enhancements and more than 450 bug
> fixes on top of oVirt 4.2 series.
>
>
> What's new in oVirt 4.3.0?
>
> * Q35 chipset, support booting using UEFI and Secure Boot
>
> * Skylake-server and AMD EPYC support
>
> * New smbus driver in windows guest tools
>
> * Improved support for v2v
>
> * OVA export / import of Templates
>
> * Full support for live migration of High Performance VMs
>
> * Microsoft Failover clustering support (SCSI Persistent Reservation)
> for Direct LUN disks
>
> * Hundreds of bug fixes on top of oVirt 4.2 series
>
> * New VM portal details page (see a preview here:
> https://imgur.com/a/ExINpci)
>
> * New Cluster upgrade UI
>
> * OVN security groups
>
> * IPv6 (static host addresses)
>
> * Support of Neutron from RDO OpenStack 13 as external network provider
>
> * Support of using Skydive from RDO OpenStack 14 as Tech Preview
>
> * Support for 3.6 and 4.0 data centers, clusters and hosts was removed
>
> * Now using PostgreSQL 10
>
> * New metrics support using rsyslog instead of fluentd
>
>
>
> This release is available now on x86_64 architecture for:
>
> * Red Hat Enterprise Linux 7.6 or later
>
> * CentOS Linux (or similar) 7.6 or later
>
>
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le
> architectures for:
>
> * Red Hat Enterprise Linux 7.6 or later
>
> * CentOS Linux (or similar) 7.6 or later
>
> * oVirt Node 4.3 (available for x86_64 only)
>
>
> Experimental tech preview for x86_64 and s390x architectures for
> Fedora 28 is also included.
>
>
> See the release notes draft [3] for installation / upgrade
> instructions and a list of new features and bugs fixed.
>
>
> Notes:
>
> - oVirt Appliance is already available for both CentOS 7 and Fedora 28
> (tech preview).
>
> - oVirt Node NG  is already available for CentOS 7
>
> - oVirt Node NG for Fedora 28 (tech preview) is being delayed due to
> build issues with the build system.
>
>
> Additional Resources:
>
> * Read more about the oVirt 4.3.0 release highlights:
> http://www.ovirt.org/release/4.3.0/
>
> * Get more oVirt project updates on Twitter: https://twitter.com/ovirt
>
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
>
>
> [1] https://www.ovirt.org/community/
>
> [2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
>
> [3] http://www.ovirt.org/release/4.3.0/
>
> [4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
>
>
> -- 
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com    
>
> 
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CFX4K7K6WTVVVQJHP2XAAZQYSNMOFXYI/




Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270  i...@netbulae.eu  Staalsteden 4-3A  KvK 08198180
Fax: 053 20 30 271  www.netbulae.eu  7547 TA Enschede  BTW NL821234584B01



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VTG2COKN2TOW23ZKYFW3K2AWNRFYPMYS/


[ovirt-users] Re: Unable to detach/remove ISO DOMAIN

2019-01-25 Thread Strahil Nikolov
Hi Martin,
this is my history (please keep in mind that it might get distorted by the mail
client).
Note: I didn't stop the ovirt-engine.service and this caused some errors to be
logged, but the engine is still working without issues. As I said, this is my
test lab and I was willing to play around :)
Good Luck!

ssh root@engine

# Switch to the postgres user
su - postgres

# If you don't load this, there will be no path for psql, nor will it start at all
source /opt/rh/rh-postgresql95/enable

# Open the DB
psql engine

# Commands in the DB:
select id, storage_name from storage_domain_static;
select storage_domain_id, ovf_disk_id from storage_domains_ovf_info where storage_domain_id='fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_dynamic where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_static where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from base_disks where disk_id = '7a155ede-5317-4860-aa93-de1dc283213e';
delete from base_disks where disk_id = '7dedd0e1-8ce8-444e-8a3d-117c46845bb0';
delete from storage_domains_ovf_info where storage_domain_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_pool_iso_map where storage_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';

# I think this shows all tables:
select table_schema, table_name from information_schema.tables order by table_schema, table_name;

# Maybe you don't need this one and you need to find the NFS volume:
select * from gluster_volumes;
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';
select table_schema, table_name from information_schema.tables order by table_schema, table_name;

# The previous delete failed as there was an entry in storage_server_connections.
# In your case it could be different.
select * from storage_server_connections;
delete from storage_server_connections where id = '490ee1c7-ae29-45c0-bddd-6170822c8490';
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';


Best Regards,
Strahil Nikolov
On Friday, 25 January 2019 at 11:04:01 GMT+2, Martin Humaj
wrote:
 
Hi Strahil,
I have tried to use the same IP and NFS export to replace the original; it did
not work properly.
If you can guide me on how to do it in the engine DB, I would appreciate it.
This is a test system.
Thank you, Martin

On Fri, Jan 25, 2019 at 9:56 AM Strahil  wrote:

Can you create a temporary NFS server that can be accessed during the removal?
I have managed to edit the engine's DB to get rid of the cluster domain, but
this is not recommended for production systems :)
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FHVNCODMC2POM5ISTICNMJ462VX72WXT/


[ovirt-users] Re: Nvidia Grid K2 and Ovirt GPU Passthrough

2019-01-25 Thread Josep Manel Andrés Moscardó

Hi,
Are you sure it hangs? Or is it just that the graphics output is no longer
going to the console? The latter is my case, so I can ssh to the server and
see that it is alive.

But I have no idea how to set up the graphics properly.

On 25/1/19 2:51, jarhe...@hotmail.de wrote:

Hello,

I have tried every piece of documentation I could find to get GPU passthrough
working with an Nvidia Grid K2 on oVirt, but I failed.

I am confident that it should work with my hardware, but I have run out of
ideas. Maybe the community can help me.

Currently my VMs (Win7 and Win10) crash or hang on startup.

I have done these steps:

1. lspci -nnk
.
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] 
[10de:11bf] (rev a1)
 Subsystem: NVIDIA Corporation Device [10de:100a]
 Kernel driver in use: pci-stub
 Kernel modules: nouveau
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] 
[10de:11bf] (rev a1)
 Subsystem: NVIDIA Corporation Device [10de:100a]
 Kernel driver in use: pci-stub
 Kernel modules: nouveau
.

2. /etc/default/grub
.
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet 
pci-stub.ids=10de:11bf rdblacklist=nouveau amd_iommu=on"


3. Added the line

   "options vfio-pci ids=10de:11bf"

   to /etc/modprobe.d/vfio.conf


dmesg | grep -i vfio ->

[   11.202767] VFIO - User Level meta-driver version: 0.3
[   11.315368] vfio_pci: add [10de:11bf[:]] class 0x00/
[ 1032.582778] vfio_ecap_init: :07:00.0 hiding ecap 0x19@0x900
[ 1046.232009] vfio-pci :07:00.0: irq 61 for MSI/MSI-X

  -
After assigning the GPU to the VM, the OS hangs on startup
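
One thing worth checking before blaming the guest: whether both GK104
functions are actually bound to vfio-pci at VM start time and what else sits
in their IOMMU groups. A minimal sketch, assuming the IOMMU is really enabled
(amd_iommu=on as in the grub line above) and that the two functions are at
0000:07:00.0 and 0000:08:00.0 as in the lspci output:

import os

DEVICES = ["0000:07:00.0", "0000:08:00.0"]

for dev in DEVICES:
    base = "/sys/bus/pci/devices/%s" % dev
    drv_link = os.path.join(base, "driver")
    driver = (os.path.basename(os.path.realpath(drv_link))
              if os.path.exists(drv_link) else "(none)")
    group = os.path.basename(os.path.realpath(os.path.join(base, "iommu_group")))
    members = os.listdir("/sys/kernel/iommu_groups/%s/devices" % group)
    # Devices sharing the group but bound to other drivers are a common
    # reason a passed-through guest hangs at startup.
    print("%s driver=%s iommu_group=%s members=%s" % (dev, driver, group, members))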

Any ideas?

Best Regards
Reza
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7TW2DY3CSA35Y3LJTEACY3IRIUH57422/



--
Josep Manel Andrés Moscardó
Systems Engineer, IT Operations
EMBL Heidelberg
T +49 6221 387-8394



smime.p7s
Description: S/MIME Cryptographic Signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SHVYUOSIXBJ6IAUBK6S4UV3L66QJIRRN/


[ovirt-users] Unable to detach/remove ISO DOMAIN

2019-01-25 Thread mhumaj
Ovirt - 4.2.4.5-1.el7

Is there any way to remove the NFS ISO domain in the DB? We cannot get rid of
it in the GUI and we are not able to use it anymore. The problem is that the
NFS server which hosted the DATA TYPE ISO domain was deleted. Even when we try
to change it in the settings, it will not allow us to do it.

Error messages:
Failed to activate Storage Domain oVirt-ISO (Data Center InnovationCenter) by 
admin@internal-authz
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: 
(u'61045461-10ff-4f7a-b464-67198c4a6c27',)

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRHH7T5MVPO23AVPEHMWBTFSZEDLTIZK/