Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2

2017-11-14 Thread Misak Khachatryan
Hi,

Will there be a cleaner approach? I can't tolerate a full stop of all
VMs just to enable it; that seems too disruptive for a real production
environment. Will there be some migration mechanism in the future?

Best regards,
Misak Khachatryan


On Fri, Nov 10, 2017 at 12:35 AM, Darrell Budic  wrote:
> You do need to stop the VMs and restart them, not just issue a reboot. I
> haven't tried under 4.2 yet, but it works in 4.1.6 that way for me.
>
> 
> From: Alessandro De Salvo 
> Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
> Date: November 9, 2017 at 2:35:01 AM CST
> To: users@ovirt.org
>
>
> Hi again,
>
> OK, I tried stopping all the VMs except the engine, set engine-config -s
> LibgfApiSupported=true (for 4.2 only) and restarted the engine.
>
> When I restarted the VMs they were still not using gfapi, so it does
> not seem to help.
>
> Cheers,
>
>
> Alessandro
>
>
>
> On 09/11/17 09:12, Alessandro De Salvo wrote:
>
> Hi,
> where should I enable gfapi via the UI?
> The only command I tried was engine-config -s LibgfApiSupported=true but the
> result is what is shown in my output below, so it’s set to true for v4.2. Is
> it enough?
> I’ll try restarting the engine. Is it really needed to stop all the VMs and
> restart them all? Of course this is a test setup and I can do it, but for
> production clusters in the future it may be a problem.
> Thanks,
>
>Alessandro
>
> On 9 Nov 2017, at 07:23, Kasturi Narra wrote:
>
> Hi,
>
> The procedure to enable gfapi is below.
>
> 1) Stop all the running VMs
> 2) Enable gfapi via the UI or using the engine-config command
> 3) Restart the ovirt-engine service
> 4) Start the VMs.
>
> Hope you have not missed any step!
>
> Thanks
> kasturi
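
For reference, a minimal command sketch of steps 2 and 3 above, assuming
engine-config's --cver flag to scope the setting to a single compatibility
version:

# engine-config -s LibgfApiSupported=true --cver=4.2
# systemctl restart ovirt-engine
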
>
> On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo
>  wrote:
>>
>> Hi,
>>
>> I'm using the latest 4.2 beta release and want to try the gfapi access,
>> but I'm currently failing to use it.
>>
>> My test setup has an external glusterfs cluster v3.12, not managed by
>> oVirt.
>>
>> The compatibility flag is correctly showing gfapi should be enabled with
>> 4.2:
>>
>> # engine-config -g LibgfApiSupported
>> LibgfApiSupported: false version: 3.6
>> LibgfApiSupported: false version: 4.0
>> LibgfApiSupported: false version: 4.1
>> LibgfApiSupported: true version: 4.2
>>
>> The data center and cluster have the 4.2 compatibility flags as well.
>>
>> However, when starting a VM with a disk on gluster I can still see the
>> disk is mounted via fuse.
>>
>> Any clue of what I'm still missing?
>>
>> Thanks,
>>
>>
>>Alessandro
>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [SOLVED] Bug or feature? Thin provision RAW disk images take all space in advance

2017-11-14 Thread Pavol Brilla
Another option for estimating file space usage is:
# du -h $PATH_TO_DISKS
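
For illustration, a minimal sketch with a hypothetical sparse file, showing
apparent versus allocated size:

# truncate -s 10G test.img   # create a sparse 10G file
# ls -lh test.img            # apparent size: 10G
# ls -lsh test.img           # leading column shows the actual allocation: 0
# du -h test.img             # 0  test.img
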

On Wed, Nov 15, 2017 at 12:19 AM, andre...@starlett.lv  wrote:

> Dumb error.
> ls -la doesn't show the real allocated size of a sparse image;
> ls -lsha does the trick.
>
>
> On 11/15/2017 12:55 AM, andre...@starlett.lv wrote:
> > Hi,
> >
> > I ran into a strange problem with a newly installed latest 4.1.
> > I defined a local storage domain.
> > Creating a virtual disk as thin provisioned (in the web interface) makes RAW
> > images and PREALLOCATES them to full size instead of the minimum size of 1 GB.
> > Preallocation occurs even before formatting to ext4.
> > I tried several times, same result.
> >
> > How to fix this behavior?
> > Thanks in advance.
> > Andrei



-- 

PAVOL BRILLA

RHV QUALITY ENGINEER, CLOUD

Red Hat Czech Republic, Brno 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Snapshot or not?

2017-11-14 Thread Demeter Tibor
Hi, 
Can somebody help me? 

Thanks. 

T. 

- On 13 Nov 2017, at 21:36, Demeter Tibor wrote:

> Dear Users,

> I have a VM disk that has a snapshot. It is very interesting, because
> there are two other disks on that VM, and there are no snapshots of them.
> I found this while trying to migrate a storage domain between two datacenters.
> Because I didn't import that VM from the storage domain, I created another
> similar VM with exactly the same sized thin-provisioned disks, then renamed
> my original images and copied them over.

> The VM started successfully, but the disk that contains the snapshot is not
> recognized by the OS; I can see the whole disk as raw (disk id, format in
> oVirt, filenames of images, etc.). I think oVirt doesn't know that it is a
> snapshotted image and uses it as raw. Is that possible?
> I don't see any snapshot under Snapshots. I have also tried to list snapshots
> with qemu-img info and qemu-img snapshot -l, but they don't show any snapshots
> in the image.

> Really, I don't know how this is possible.

> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# qemu-img info
> 5974fd33-af4c-4e3b-aadb-bece6054eb6b
> image: 5974fd33-af4c-4e3b-aadb-bece6054eb6b
> file format: qcow2
> virtual size: 13T (13958643712000 bytes)
> disk size: 12T
> cluster_size: 65536
> backing file:
> ../8d815282-6957-41c0-bb3e-6c8f4a23a64b/723ad5aa-02f6-4067-ac75-0ce0a761627f
> backing file format: raw
> Format specific information:
> compat: 0.10

> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# qemu-img info
> 723ad5aa-02f6-4067-ac75-0ce0a761627f
> image: 723ad5aa-02f6-4067-ac75-0ce0a761627f
> file format: raw
> virtual size: 2.0T (2147483648000 bytes)
> disk size: 244G

> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# ll
> total 13096987560
> -rw-rw. 1 36 36 13149448896512 Nov 13 13:42
> 5974fd33-af4c-4e3b-aadb-bece6054eb6b
> -rw-rw. 1 36 36 1048576 Nov 13 19:34
> 5974fd33-af4c-4e3b-aadb-bece6054eb6b.lease
> -rw-r--r--. 1 36 36 262 Nov 13 19:54 5974fd33-af4c-4e3b-aadb-bece6054eb6b.meta
> -rw-rw. 1 36 36 2147483648000 Jul 8 2016
> 723ad5aa-02f6-4067-ac75-0ce0a761627f
> -rw-rw. 1 36 36 1048576 Jul 7 2016
> 723ad5aa-02f6-4067-ac75-0ce0a761627f.lease
> -rw-r--r--. 1 36 36 335 Nov 13 19:52 723ad5aa-02f6-4067-ac75-0ce0a761627f.meta

> qemu-img snapshot -l 5974fd33-af4c-4e3b-aadb-bece6054eb6b

> (nothing)
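
An empty list here is actually expected: qemu-img snapshot -l shows internal
qcow2 snapshots only, while oVirt snapshots are external overlay files (the
qcow2 with the backing file shown above). For reference, the whole chain can
be printed in one go (a sketch; --backing-chain needs a reasonably recent
qemu-img, which a 3.5-era host may lack):

# qemu-img info --backing-chain 5974fd33-af4c-4e3b-aadb-bece6054eb6b
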

> Because it is a very big (13 TB) disk, I can't migrate it to another image;
> I don't have enough free space. So I would just like to use it in oVirt
> like in the past.

> I have a very old ovirt (3.5)

> How can I use this disk?

> Thanks in advance,

> Regards,

> Tibor

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2

2017-11-14 Thread Kasturi Narra
Hi Bryan,

In your output, if you see -drive file=gluster:///...,
it means that the VM disk drives are being accessed using libgfapi.

If it were fuse, you would see something like
"file=/rhev/data-center/59f2df7e-0388-00ea-02c2-017b/67d7d3cc-df3f-4d07-b6f3-944982c5677c/images/8e6f96d3-2ed4-4c56-87d1-3a994284e683/9bf5a54e-d72d-4b1f-8ab1-0a84eb987fdd"

Thanks
kasturi

On Tue, Nov 14, 2017 at 8:37 PM, Bryan Sockel  wrote:

> Hrm, not sure what I am doing wrong then; it does not seem to be working for
> me.  I am not using the hosted engine, but a direct install on a physical
> server.  I thought I had enabled support for libgfapi with this command:
>
> # engine-config -g LibgfApiSupported
> LibgfApiSupported: false version: 3.6
> LibgfApiSupported: false version: 4.0
> LibgfApiSupported: true version: 4.1
>
> I restarted the engine, shut down the VM completely and started it back up a
> short time later.
>
> I am using this command to check:
>  ps ax | grep qemu | grep 'file=gluster\|file=/rhev'
>
> Output is
>  file=gluster://10.20.102.181/gl-vm12/
>
> Thanks
> Bryan
>
> -Original Message-
> From: Kasturi Narra 
> To: Bryan Sockel 
> Cc: Alessandro De Salvo , users <
> users@ovirt.org>
> Date: Tue, 14 Nov 2017 12:56:49 +0530
> Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
>
> Yes, it does work with version 4.1.7.6.
>
> On Tue, Nov 14, 2017 at 4:49 AM, Bryan Sockel 
> wrote:
>>
>> Is libgfapi supposed to be working in 4.1.7.6?
>> Bryan
>>
>> -Original Message-
>> From: Alessandro De Salvo 
>> To: users@ovirt.org
>> Date: Thu, 9 Nov 2017 09:35:01 +0100
>> Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
>>
>> Hi again,
>> OK, I tried stopping all the VMs except the engine, set engine-config -s
>> LibgfApiSupported=true (for 4.2 only) and restarted the engine.
>> When I restarted the VMs they were still not using gfapi, so it
>> does not seem to help.
>> Cheers,
>>
>> Alessandro
>>
>>
>> On 09/11/17 09:12, Alessandro De Salvo wrote:
>>
>>
>> Hi,
>> where should I enable gfapi via the UI?
>> The only command I tried was engine-config -s LibgfApiSupported=true but
>> the result is what is shown in my output below, so it’s set to true for
>> v4.2. Is it enough?
>> I’ll try restarting the engine. Is it really needed to stop all the VMs
>> and restart them all? Of course this is a test setup and I can do it, but
>> for production clusters in the future it may be a problem.
>> Thanks,
>>
>>Alessandro
>>
>> On 9 Nov 2017, at 07:23, Kasturi Narra wrote:
>>
>>
>> Hi,
>>
>> The procedure to enable gfapi is below.
>>
>> 1) Stop all the running VMs
>> 2) Enable gfapi via the UI or using the engine-config command
>> 3) Restart the ovirt-engine service
>> 4) Start the VMs.
>>
>> Hope you have not missed any step!
>>
>> Thanks
>> kasturi
>>
>> On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo <
>> alessandro.desa...@roma1.infn.it> wrote:
>>>
>>> Hi,
>>>
>>> I'm using the latest 4.2 beta release and want to try the gfapi access,
>>> but I'm currently failing to use it.
>>>
>>> My test setup has an external glusterfs cluster v3.12, not managed by
>>> oVirt.
>>>
>>> The compatibility flag is correctly showing gfapi should be enabled with
>>> 4.2:
>>>
>>> # engine-config -g LibgfApiSupported
>>> LibgfApiSupported: false version: 3.6
>>> LibgfApiSupported: false version: 4.0
>>> LibgfApiSupported: false version: 4.1
>>> LibgfApiSupported: true version: 4.2
>>>
>>> The data center and cluster have the 4.2 compatibility flags as well.
>>>
>>> However, when starting a VM with a disk on gluster I can still see the
>>> disk is mounted via fuse.
>>>
>>> Any clue of what I'm still missing?
>>>
>>> Thanks,
>>>
>>>
>>>Alessandro
>>>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [SOLVED] Bug or feature? Thin provision RAW disk images take all space in advance

2017-11-14 Thread andre...@starlett.lv
Dumb error.
ls -la doesn't show the real allocated size of a sparse image;
ls -lsha does the trick.


On 11/15/2017 12:55 AM, andre...@starlett.lv wrote:
> Hi,
>
> I ran into a strange problem with a newly installed latest 4.1.
> I defined a local storage domain.
> Creating a virtual disk as thin provisioned (in the web interface) makes RAW
> images and PREALLOCATES them to full size instead of the minimum size of 1 GB.
> Preallocation occurs even before formatting to ext4.
> I tried several times, same result.
>
> How to fix this behavior?
> Thanks in advance.
> Andrei

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Bug or feature? Thin provision RAW disk images take all space in advance

2017-11-14 Thread andre...@starlett.lv
Hi,

I ran into a strange problem with a newly installed latest 4.1.
I defined a local storage domain.
Creating a virtual disk as thin provisioned (in the web interface) makes RAW
images and PREALLOCATES them to full size instead of the minimum size of 1 GB.
Preallocation occurs even before formatting to ext4.
I tried several times, same result.

How to fix this behavior?
Thanks in advance.
Andrei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Trouble Connecting ISCSI Storage to Hosted Engine VM

2017-11-14 Thread Yaniv Kaul
On Tue, Nov 14, 2017 at 4:44 PM, Kyle Conti 
wrote:

> Sending this again in case I sent this prior to being fully set up as an
> ovirt-users subscriber (received confirmation after I sent).  If you did
> receive this already, my apologies:
>
> Hello,
>
> I'm brand new to Ovirt and trying to get my Hosted Engine setup configured
> with ISCSI storage.  I have ~8TB usable storage available on an LVM
> partition.  This storage is on the same server that is hosting the ovirt
> engine virtual machine.  After I use the discovery/sendtargets command via
> Centos 7 engine vm, it shows the correct IQN.  When I use ovirt's storage
> discovery in GUI, I can see the storage IQN just fine as well, but when I
> try to connect to it, I get the following:
>
> "Error while executing action: Failed to login to iSCSI node due to
> authorization failure"
>

Do you have CHAP configured?
Y.
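
If the target does require CHAP, oVirt's iSCSI discovery dialog has
user/password fields for it; from a shell, the equivalent initiator-side
settings look roughly like this (a sketch; the IQN, portal and credentials
are placeholders):

# iscsiadm -m node -T iqn.2017-11.example:target1 -p 192.0.2.10 -o update -n node.session.auth.authmethod -v CHAP
# iscsiadm -m node -T iqn.2017-11.example:target1 -p 192.0.2.10 -o update -n node.session.auth.username -v chapuser
# iscsiadm -m node -T iqn.2017-11.example:target1 -p 192.0.2.10 -o update -n node.session.auth.password -v chapsecret
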


>
> Is NFS recommended instead when trying to connect the storage from the server
> host to the oVirt Engine VM?  There is nothing in this storage domain yet.
> This is a brand new setup.
>
> One other thing to note: I have iSCSI storage working with a NAS for my
> ISO storage domain.  I don't want to use the NAS for the virtual machine
> storage domain.  What's so different about the oVirt Engine VM?
>
> Any help would be much appreciated.  Please let me know If I'm taking the
> wrong approach here, or I'm trying to do something that this system is not
> meant to do.
>
> Regards,
>
> *KC*
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installing oVirt node 4.1 in nested virtualization on KVM / ESXi.

2017-11-14 Thread Open tech
Looked into CPU settings for KVM. Looks like nested virtualization is not
the default behavior in KVM.

Check for Nested Virtualization:

sudo virt-host-validate


[root@ovirt1 ~]# sudo virt-host-validate
  QEMU: Checking for hardware virtualization : FAIL (Only emulated CPUs are available, performance will be significantly limited)
  QEMU: Checking if device /dev/vhost-net exists : PASS
  QEMU: Checking if device /dev/net/tun exists : PASS


Follow this guide to enable nested virtualization:

https://www.server-world.info/en/note?os=CentOS_7=kvm=7


Shut down the node VMs. Check the "Copy host CPU configuration" checkbox.
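
For reference, the usual way to switch nested virtualization on for an Intel
CentOS 7 hypervisor is via the kvm_intel module (a sketch; use kvm_amd on AMD
hosts, and shut the guest VMs down first):

# cat /sys/module/kvm_intel/parameters/nested      # N means nested virt is off
# echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm-nested.conf
# modprobe -r kvm_intel && modprobe kvm_intel
# cat /sys/module/kvm_intel/parameters/nested      # should now print Y
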



I was able to proceed to hosted engine installation.


Regards,

HK

On Wed, Nov 15, 2017 at 1:06 AM, Open tech  wrote:

> Hi All,
>  I am trying to build an oVirt node/Gluster Storage and Hosted Engine lab
> using nested virtualization.
> I have been following this tutorial.
> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>
> First I tried on ESXi 6.5. I got three nodes installed with dual networks and
> passwordless SSH enabled, then ran the Hosted Engine with Gluster install via
> the oVirt node. It ran successfully.
>
> Went to "continue with hosted engine deployment"
>
>  Hosted Engine deployment failed with error.
>
> Failed to execute stage 'Environment setup': Hardware does not support
> virtualization
> Hosted Engine deployment failed
>
> I went over some message boards; it seemed like oVirt nested virtualization
> is not supported on ESXi.
>
> Spent another day building the whole thing again from scratch on CentOS/KVM
>
> I am getting the same error with KVM as well.
>
> Going over some discussions I gather that this should be supported.
>
> Any ideas on what might be the issue here? Is there any setting on the KVM
> virtual machine, etc., that might help?
>
> Log file is attached
>
> Regards,
> HK
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-11-14 Thread Open tech
Hi Kasturi,
   Thanks a lot for taking a look at this. I think it's
"grafton-sanity-check.sh". Following is the complete output from the
install attempt. The Ansible version is 2.4 and gdeploy is 2.0.2.

Do you have a tested step-by-step guide for 4.1.6/7? It would be great if you
could share it.


PLAY [gluster_servers]
*

TASK [Run a shell script]
**
changed: [ovirt2] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda1 -h
ovirt1,ovirt2,ovirt3)
changed: [ovirt3] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda1 -h
ovirt1,ovirt2,ovirt3)
changed: [ovirt1] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda1 -h
ovirt1,ovirt2,ovirt3)

PLAY RECAP
*
ovirt1 : ok=1    changed=1    unreachable=0    failed=0
ovirt2 : ok=1    changed=1    unreachable=0    failed=0
ovirt3 : ok=1    changed=1    unreachable=0    failed=0



PLAY [gluster_servers]
*

TASK [Enable or disable services]
**
ok: [ovirt1] => (item=chronyd)
ok: [ovirt3] => (item=chronyd)
ok: [ovirt2] => (item=chronyd)

PLAY RECAP
*
ovirt1 : ok=1    changed=0    unreachable=0    failed=0
ovirt2 : ok=1    changed=0    unreachable=0    failed=0
ovirt3 : ok=1    changed=0    unreachable=0    failed=0



PLAY [gluster_servers]
*

TASK [start/stop/restart/reload services]
**
changed: [ovirt3] => (item=chronyd)
changed: [ovirt1] => (item=chronyd)
changed: [ovirt2] => (item=chronyd)

PLAY RECAP
*
ovirt1 : ok=1    changed=1    unreachable=0    failed=0
ovirt2 : ok=1    changed=1    unreachable=0    failed=0
ovirt3 : ok=1    changed=1    unreachable=0    failed=0



PLAY [gluster_servers]
*

TASK [Run a command in the shell]
**
changed: [ovirt2] => (item=vdsm-tool configure --force)
changed: [ovirt3] => (item=vdsm-tool configure --force)
changed: [ovirt1] => (item=vdsm-tool configure --force)

PLAY RECAP
*
ovirt1 : ok=1    changed=1    unreachable=0    failed=0
ovirt2 : ok=1    changed=1    unreachable=0    failed=0
ovirt3 : ok=1    changed=1    unreachable=0    failed=0



PLAY [gluster_servers]
*

TASK [Run a shell script]
**
fatal: [ovirt2]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt3]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt1]: FAILED! => {"failed": true, "msg": "The conditional check
'result.rc != 0' failed. The error was: error while evaluating conditional
(result.rc != 0): 'dict object' has no attribute 'rc'"}
to retry, use: --limit @/tmp/tmpEkEkpR/run-script.retry

PLAY RECAP
*
ovirt1 : ok=0    changed=0    unreachable=0    failed=1
ovirt2 : ok=0    changed=0    unreachable=0    failed=1
ovirt3 : ok=0    changed=0    unreachable=0    failed=1


Error: Ansible(>= 2.2) is not installed.
Some of the features might not work if not installed.


[root@ovirt2 ~]# yum info ansible

Loaded plugins: fastestmirror, imgbased-persist

Loading mirror speeds from cached hostfile

 * epel: mirror01.idc.hinet.net

 * ovirt-4.1: ftp.nluug.nl

 * ovirt-4.1-epel: mirror01.idc.hinet.net

Installed Packages

Name: ansible

Arch: noarch

Version : 2.4.0.0

Release : 5.el7

Size: 38 M

Repo: installed

Summary : SSH-based configuration management, deployment, and task
execution system

URL : http://ansible.com

License : GPLv3+

Description :

: Ansible is a radically simple model-driven configuration
management,

: multi-node deployment, and remote task execution system.
Ansible works

: over SSH and does not require any software or daemons to be
installed

: on remote nodes. Extension modules can 

Re: [ovirt-users] Unable to use Netgear RN3220 for NFS storage domain

2017-11-14 Thread Mikhail Krasnobaev
Hi,

I had similar problems with an RN104; I think they share the same OS inside,
and I haven't found an answer for using NFS, only iSCSI. Have you seen this
thread on the Netgear forum:
https://community.netgear.com/t5/New-to-ReadyNAS/ReadyNAS-4220-NFS-configuration/td-p/1191554
There is a working solution (as a guy claims in post 13), but it is still a
hack, not an official one.

Best regards,

Mikhail.

At 23:27, 14 November 2017, Walt Holman wrote:
> Any advice or other things to check? I'm at a loss as to what's causing this
> and I can't add more nodes to the same cluster without a working NFS as I
> understand it.
>
> -Walt

[ovirt-users] how to move network bridge?

2017-11-14 Thread Rudi Ahlers
Hi,

Can someone please help me "move" the network bridge to another port? I
have tried a couple of times but it doesn't seem to work.
Our internal LAN IP range is 192.168.102.0/24 and the storage area network
is 10.10.10.0/24

When I installed hosted-engine, I accidentally told it to use the 10.10.10.81
interface and a bridge was created accordingly. But I cannot access the
10.10.10.0/24 IP range via our VPN (for security reasons), so I can't access
the hosted-engine.

I tried to remove the bridge interface with brctl but it didn't work as
expected.

root@virt1 ~]# brctl show
bridge name bridge id   STP enabled interfaces
;vdsmdummy; 8000.   no
ovirtmgmt   8000.0cc47aeaffd8   no  ens2f0


Even though I tried to remove ens2f0 and add ens2f3, it never worked. The
bridge/network would stop working and I would need to log in to the
server's IPMI card and remove ens2f0.

Can someone please guide me in moving the bridge to the 192.168.102.0/24
network?
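
For a plain Linux bridge, the manual move would look like the sketch below.
Note, though, that vdsm owns the ovirtmgmt definition on an oVirt host, so
brctl changes alone won't survive a reboot; the supported route is the
engine's Setup Host Networks dialog. Interface names are taken from the brctl
output above, and the address is hypothetical:

# brctl delif ovirtmgmt ens2f0
# brctl addif ovirtmgmt ens2f3
# ip addr flush dev ovirtmgmt
# ip addr add 192.168.102.81/24 dev ovirtmgmt
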



-- 
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to use Netgear RN3220 for NFS storage domain

2017-11-14 Thread Walt Holman
Any advice or other things to check? I'm at a loss as to what's causing this 
and I can't add more nodes to the same cluster without a working NFS as I 
understand it.

-Walt

- Original Message -
From: "Walt Holman" 
To: "Fred Rolland" 
Cc: "users" 
Sent: Sunday, November 5, 2017 10:55:00 AM
Subject: Re: [ovirt-users] Unable to use Netgear RN3220 for NFS storage domain

I believe it's using protocol version 3, and the nfs-kernel-server package is 
1.2.8-9. The Netgear is basically Debian Linux with their proprietary packages 
thrown in for management. The kernel version on the Netgear is 4.4.91.x86_64.1

You think SELinux on the NAS or the oVirt box? I don't have the oVirt box in 
enforcing mode, it's in Permissive mode due to an earlier SELinux issue I ran 
into on it. I don't remember the details on why at the moment, but it was when 
I originally setup the box in February. 


- Original Message -
From: "Fred Rolland" 
To: "Walt Holman" 
Cc: "users" 
Sent: Sunday, November 5, 2017 10:18:30 AM
Subject: Re: [ovirt-users] Unable to use Netgear RN3220 for NFS storage domain

What version of NFS do you have on the Netgear? 
It could be a SELinux configuration. 

On Sun, Nov 5, 2017 at 3:19 PM, Walt Holman <whol...@lawrenceburgtn.gov> wrote: 


Currently running version 4.1.6.2-1.el7.centos 

The logfile is attached and the relevant entries are at the bottom of the log. 
It evidently doesn't get rotated, so it contains entries going back to when I 
first setup the box in 2/17. 

I've perused that page countless times now, but I still can't get it to work on 
this setup. In the sanlock log file, the attempts to add the domain end in "s2 
add_lockspace fail result -19" I don't know much about sanlock, but I wish I 
knew what result -19 meant, perhaps it would clue me in. Thanks for your help. 

-Walt 

- Original Message - 
From: "Fred Rolland" < [ mailto:froll...@redhat.com | froll...@redhat.com ] > 
To: "Walt Holman" < [ mailto:whol...@lawrenceburgtn.gov | 
whol...@lawrenceburgtn.gov ] > 
Cc: "users" < [ mailto:users@ovirt.org | users@ovirt.org ] > 
Sent: Sunday, November 5, 2017 5:22:05 AM 
Subject: Re: [ovirt-users] Unable to use Netgear RN3220 for NFS storage domain 

Hi, 

Which version are you using ? 
Can you provide /var/log/sanlock.log ? 

Also check this page: 
https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues 

Thanks, 
Freddy 


On Fri, Nov 3, 2017 at 7:00 PM, Walt Holman <whol...@lawrenceburgtn.gov> wrote: 


Hello all, 

I've got an oVirt setup that I'm trying to add an NFS storage domain from a 
Netgear ReadyNAS 3220 NAS server. I've setup the NFS exports as follows: 

"/data/Virtual_Machines" 
127.0.0.1(insecure,insecure_locks,no_subtree_check,crossmnt,anonuid=36,anongid=36,all_squash,ro,sync)
 
10.50.31.83(insecure,insecure_locks,no_subtree_check,rw,root_squash,anonuid=36,anongid=36,crossmnt,sync)
 

Where 10.50.31.83 is my oVirt box. I've made sure that /data/Virtual_Machines 
is chowned to 36:36 and each time I add it, it fails after performing the 
following steps: The oVirt box mounts the file share, creates all the files and 
directories, they are owned by 36:36 correctly, the new storage domain box 
closes and I am returned to the oVirt Management console where within a couple 
of seconds, I'm presented with a dialog box that says: 

Operation Canceled 
Error while executing action Attach Storage Domain: AcquireHostIDFailure 

I've looked through the logs and found that there is a SanlockException(19, "Sanlock 
lockspace add failure', 'No such device')) that seems to be the culprit. 

Just to test, I've used the same exports file on my local machine (which has 
limited storage), and shared out a directory via NFS. I can successfully 
connect and use that storage within oVirt, but of course it does me no good. 
The RN3220 has approximately 10TB free on it at the moment, so it's not a space 
issue and as mentioned the directories and files all get created and are owned 
by vdsm:kvm. It really feels like something's up with the ReadyNAS, but I'm not 
sure what. I've attached the vdsm.log file and have access to any other logs 
you may need. 

Any help would be greatly appreciated as I'm about to bring up another node and 
really need access to network storage. I'm currently using only local storage 
attached to the host. 

-Walt 

Re: [ovirt-users] hosted exchange failed to install

2017-11-14 Thread Rudi Ahlers
Thanx Alan.

On Tue, Nov 14, 2017 at 10:11 PM, Alan Griffiths 
wrote:

> Mount the Gluster volume and delete everything on it.  Should be a
> directory with a UUID name and a file called __DIRECT_IO_TEST__
>
> On 14 November 2017 at 18:54, Rudi Ahlers  wrote:
> > Hi
> >
> > Please be a bit more specific. What exactly do I need to delete? It's on
> > GlusterFS
> >
> > On Tue, Nov 14, 2017 at 7:47 PM, Alan Griffiths  >
> > wrote:
> >>
> >> Looks like you need to clean out your storage domain left over from
> >> the previous install attempt. What are you using, gluster, NFS?
> >>
> >> On 14 November 2017 at 14:35, Rudi Ahlers  wrote:
> >> > Hi,
> >> >
> >> > Can someone please help?
> >> >
> >> > I installed hosted exchange but specified the wrong interface, and
> thus
> >> > couldn't access it. So I removed it (yum install) and reinstalled it,
> >> > and
> >> > re-ran the deploy but got the following error:
> >> >
> >> >  Please confirm installation settings (Yes, No)[Yes]:
> >> > [ INFO  ] Stage: Transaction setup
> >> > [ INFO  ] Stage: Misc configuration
> >> > [ INFO  ] Stage: Package installation
> >> > [ INFO  ] Stage: Misc configuration
> >> > [ INFO  ] Configuring libvirt
> >> > [ INFO  ] Configuring VDSM
> >> > [WARNING] VDSM configuration file not found: creating a new
> >> > configuration
> >> > file
> >> > [ INFO  ] Starting vdsmd
> >> > [ INFO  ] Creating Storage Domain
> >> > [ ERROR ] Failed to execute stage 'Misc configuration': Storage domain
> >> > is
> >> > not empty - requires cleaning: (u'srv1:/engine',)
> >> > [ INFO  ] Yum Performing yum transaction rollback
> >> > [ INFO  ] Stage: Clean up
> >> > [ INFO  ] Generating answer file
> >> > '/var/lib/ovirt-hosted-engine-setup/answers/answers-
> 20171114162130.conf'
> >> > [ INFO  ] Stage: Pre-termination
> >> > [ INFO  ] Stage: Termination
> >> > [ ERROR ] Hosted Engine deployment failed: this system is not
> reliable,
> >> > please check the issue,fix and redeploy
> >> >   Log file is located at
> >> >
> >> > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-
> setup-20171114161520-km3qok.log
> >> >
> >> >
> >> >
> >> > I am honestly not sure why it would think "this system is not
> reliable".
> >> > How
> >> > do I check what is actually wrong?
> >> >
> >> > The log file shows the same error:
> >> >
> >> >
> >> > tail -f
> >> >
> >> > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-
> setup-20171114161520-km3qok.log
> >> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:134
> >> > condition
> >> > False
> >> > 2017-11-14 16:21:30 INFO otopi.context context.runSequence:687 Stage:
> >> > Termination
> >> > 2017-11-14 16:21:30 DEBUG otopi.context context.runSequence:691 STAGE
> >> > terminate
> >> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128
> Stage
> >> > terminate METHOD otopi.plugins.gr_he_common.
> core.misc.Plugin._terminate
> >> > 2017-11-14 16:21:30 ERROR otopi.plugins.gr_he_common.core.misc
> >> > misc._terminate:178 Hosted Engine deployment failed: this system is
> not
> >> > reliable, please check the issue,fix and redeploy
> >> > 2017-11-14 16:21:30 DEBUG otopi.plugins.otopi.dialog.human
> >> > dialog.__logString:204 DIALOG:SEND Log file is located
> >> > at
> >> >
> >> > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-
> setup-20171114161520-km3qok.log
> >> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128
> Stage
> >> > terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
> >> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128
> Stage
> >> > terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
> >> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:134
> >> > condition
> >> > False
> >> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128
> Stage
> >> > terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
> >> >
> >> >
> >> > --
> >> > Kind Regards
> >> > Rudi Ahlers
> >> > Website: http://www.rudiahlers.co.za
> >> >
> >
> >
> >
> >
> > --
> > Kind Regards
> > Rudi Ahlers
> > Website: http://www.rudiahlers.co.za
>



-- 
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted exchange failed to install

2017-11-14 Thread Alan Griffiths
Mount the Gluster volume and delete everything on it.  Should be a
directory with a UUID name and a file called __DIRECT_IO_TEST__
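
A minimal sketch of that cleanup, assuming the volume name from the setup log
(srv1:/engine) and /mnt as a temporary mountpoint; the rm is destructive, so
double-check the mount first:

# mount -t glusterfs srv1:/engine /mnt
# ls -la /mnt        # expect a UUID-named directory and __DIRECT_IO_TEST__
# rm -rf /mnt/*
# umount /mnt
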

On 14 November 2017 at 18:54, Rudi Ahlers  wrote:
> Hi
>
> Please be a bit more specific. What exactly do I need to delete? It's on
> GlusterFS
>
> On Tue, Nov 14, 2017 at 7:47 PM, Alan Griffiths 
> wrote:
>>
>> Looks like you need to clean out your storage domain left over from
>> the previous install attempt. What are you using, gluster, NFS?
>>
>> On 14 November 2017 at 14:35, Rudi Ahlers  wrote:
>> > Hi,
>> >
>> > Can someone please help?
>> >
>> > I installed hosted exchange but specified the wrong interface, and thus
>> > couldn't access it. So I removed it (yum install) and reinstalled it,
>> > and
>> > re-ran the deploy but got the following error:
>> >
>> >  Please confirm installation settings (Yes, No)[Yes]:
>> > [ INFO  ] Stage: Transaction setup
>> > [ INFO  ] Stage: Misc configuration
>> > [ INFO  ] Stage: Package installation
>> > [ INFO  ] Stage: Misc configuration
>> > [ INFO  ] Configuring libvirt
>> > [ INFO  ] Configuring VDSM
>> > [WARNING] VDSM configuration file not found: creating a new
>> > configuration
>> > file
>> > [ INFO  ] Starting vdsmd
>> > [ INFO  ] Creating Storage Domain
>> > [ ERROR ] Failed to execute stage 'Misc configuration': Storage domain
>> > is
>> > not empty - requires cleaning: (u'srv1:/engine',)
>> > [ INFO  ] Yum Performing yum transaction rollback
>> > [ INFO  ] Stage: Clean up
>> > [ INFO  ] Generating answer file
>> > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20171114162130.conf'
>> > [ INFO  ] Stage: Pre-termination
>> > [ INFO  ] Stage: Termination
>> > [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
>> > please check the issue,fix and redeploy
>> >   Log file is located at
>> >
>> > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log
>> >
>> >
>> >
>> > I am honestly not sure why it would think "this system is not reliable".
>> > How
>> > do I check what is actually wrong?
>> >
>> > The log file shows the same error:
>> >
>> >
>> > tail -f
>> >
>> > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log
>> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:134
>> > condition
>> > False
>> > 2017-11-14 16:21:30 INFO otopi.context context.runSequence:687 Stage:
>> > Termination
>> > 2017-11-14 16:21:30 DEBUG otopi.context context.runSequence:691 STAGE
>> > terminate
>> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
>> > terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
>> > 2017-11-14 16:21:30 ERROR otopi.plugins.gr_he_common.core.misc
>> > misc._terminate:178 Hosted Engine deployment failed: this system is not
>> > reliable, please check the issue,fix and redeploy
>> > 2017-11-14 16:21:30 DEBUG otopi.plugins.otopi.dialog.human
>> > dialog.__logString:204 DIALOG:SEND Log file is located
>> > at
>> >
>> > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log
>> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
>> > terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
>> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
>> > terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
>> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:134
>> > condition
>> > False
>> > 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
>> > terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
>> >
>> >
>> > --
>> > Kind Regards
>> > Rudi Ahlers
>> > Website: http://www.rudiahlers.co.za
>> >
>
>
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Non-responsive host, VM's are still running - how to resolve?

2017-11-14 Thread Piotr Kliczewski
On Tue, Nov 14, 2017 at 7:09 PM, Artem Tambovskiy
 wrote:
> Thanks, Darrell!
>
> Restarted vdsmd but it didn't help.
> systemctl status vdsmd -l showing following:
>
> ● vdsmd.service - Virtual Desktop Server Manager
>Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
> preset: enabled)
>Active: active (running) since Tue 2017-11-14 21:01:31 MSK; 4min 53s ago
>   Process: 54674 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh
> --post-stop (code=exited, status=0/SUCCESS)
>   Process: 54677 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
> --pre-start (code=exited, status=0/SUCCESS)
>  Main PID: 54971 (vdsm)
>CGroup: /system.slice/vdsmd.service
>├─54971 /usr/bin/python2 /usr/share/vdsm/vdsm
>└─55099 /usr/libexec/ioprocess --read-pipe-fd 84 --write-pipe-fd
> 83 --max-threads 10 --max-queued-requests 10
>
> Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
> ignoring event u'|virt|VM_status|e0970bbf-11d8-4517-acff-0f8dccbb10a9'
> args={u'e0970bbf-11d8-4517-acff-0f8dccbb10a9': {'status': 'Up',
> 'displayInfo': [{'tlsPort': '5901', 'ipAddress': '80.239.162.106', 'type':
> u'spice', 'port': '-1'}], 'hash': '-6982259661244130819', 'displayIp':
> '80.239.162.106', 'displayPort': '-1', 'displaySecurePort': '5901',
> 'timeOffset': u'0', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser':
> '0.00', 'monitorResponse': '0', 'elapsedTime': '370019', 'displayType':
> 'qxl', 'cpuSys': '0.00', 'clientIp': '172.16.11.6', 'vcpuPeriod': 10L}}
> Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
> ignoring event u'|virt|VM_status|b366e466-b0ea-4a09-866b-d0248d7523a6'
> args={u'b366e466-b0ea-4a09-866b-d0248d7523a6': {'status': 'Up',
> 'displayInfo': [{'tlsPort': '5900', 'ipAddress': '0', 'type': u'spice',
> 'port': '-1'}], 'hash': '1858968312777883492', 'displayIp': '0',
> 'displayPort': '-1', 'displaySecurePort': '5900', 'timeOffset': '0',
> 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser': '0.00',
> 'monitorResponse': '0', 'elapsedTime': '453444', 'displayType': 'qxl',
> 'cpuSys': '0.00', 'clientIp': '', 'vcpuPeriod': 10L}}
> Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
> ignoring event u'|virt|VM_status|ca2815c5-f815-469d-869d-a8fe1cb8c2e7'
> args={u'ca2815c5-f815-469d-869d-a8fe1cb8c2e7': {'status': 'Up',
> 'displayInfo': [{'tlsPort': '5904', 'ipAddress': '80.239.162.106', 'type':
> u'spice', 'port': '-1'}], 'hash': '1149212890076264321', 'displayIp':
> '80.239.162.106', 'displayPort': '-1', 'displaySecurePort': '5904',
> 'timeOffset': u'0', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser':
> '0.00', 'monitorResponse': '0', 'elapsedTime': '105160', 'displayType':
> 'qxl', 'cpuSys': '0.00', 'clientIp': '172.16.11.6', 'vcpuPeriod': 10L}}
> Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
> ignoring event u'|virt|VM_status|a083da47-3e39-458c-8822-459af3d2d93a'
> args={u'a083da47-3e39-458c-8822-459af3d2d93a': {'status': 'Up',
> 'displayInfo': [{'tlsPort': '5902', 'ipAddress': '80.239.162.106', 'type':
> u'spice', 'port': '-1'}], 'hash': '5529949835126538749', 'displayIp':
> '80.239.162.106', 'displayPort': '-1', 'displaySecurePort': '5902',
> 'timeOffset': u'0', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser':
> '0.00', 'monitorResponse': '0', 'elapsedTime': '365326', 'displayType':
> 'qxl', 'cpuSys': '0.00', 'clientIp': '', 'vcpuPeriod': 10L}}
> Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
> ignoring event u'|virt|VM_status|0b7d02df-0286-4e0e-a50b-1d02915ba81c'
> args={u'0b7d02df-0286-4e0e-a50b-1d02915ba81c': {'status': 'Up',
> 'displayInfo': [{'tlsPort': '5903', 'ipAddress': '80.239.162.106', 'type':
> u'spice', 'port': '-1'}], 'hash': '3267121054607612619', 'displayIp':
> '80.239.162.106', 'displayPort': '-1', 'displaySecurePort': '5903',
> 'timeOffset': '-1', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser':
> '0.00', 'monitorResponse': '0', 'elapsedTime': '275708', 'displayType':
> 'qxl', 'cpuSys': '0.00', 'clientIp': '', 'vcpuPeriod': 10L}}
> Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm throttled WARN MOM not
> available.
> Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm throttled WARN MOM not
> available, KSM stats will be missing.
> Nov 14 21:01:34 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
> ignoring event u'|virt|VM_status|0b7d02df-0286-4e0e-a50b-1d02915ba81c'
> args={u'0b7d02df-0286-4e0e-a50b-1d02915ba81c': {'status': 'Up', 'username':
> 'Unknown', 'memUsage': '36', 'guestFQDN': '', 'memoryStats': {u'swap_out':
> '0', u'majflt': '0', u'swap_usage': '0', u'mem_cached': '548192',
> u'mem_free': '2679664', u'mem_buffers': '231016', u'swap_in': '0',
> u'swap_total': '786428', u'pageflt': '4346', u'mem_total': '3922564',
> u'mem_unused': '1900456'}, 'session': 'Unknown', 'netIfaces': [],
> 'guestCPUCount': -1, 'appsList': (), 'guestIPs': '', 

Re: [ovirt-users] Non-responsive host, VM's are still running - how to resolve?

2017-11-14 Thread Artem Tambovskiy
Thanks, Darrell!

Restarted vdsmd but it didn't help.
systemctl status vdsmd -l showing following:

● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
   Active: active (running) since Tue 2017-11-14 21:01:31 MSK; 4min 53s ago
  Process: 54674 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh
--post-stop (code=exited, status=0/SUCCESS)
  Process: 54677 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
--pre-start (code=exited, status=0/SUCCESS)
 Main PID: 54971 (vdsm)
   CGroup: /system.slice/vdsmd.service
   ├─54971 /usr/bin/python2 /usr/share/vdsm/vdsm
   └─55099 /usr/libexec/ioprocess --read-pipe-fd 84 --write-pipe-fd
83 --max-threads 10 --max-queued-requests 10

Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
ignoring event u'|virt|VM_status|e0970bbf-11d8-4517-acff-0f8dccbb10a9'
args={u'e0970bbf-11d8-4517-acff-0f8dccbb10a9': {'status': 'Up',
'displayInfo': [{'tlsPort': '5901', 'ipAddress': '80.239.162.106', 'type':
u'spice', 'port': '-1'}], 'hash': '-6982259661244130819', 'displayIp':
'80.239.162.106', 'displayPort': '-1', 'displaySecurePort': '5901',
'timeOffset': u'0', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser':
'0.00', 'monitorResponse': '0', 'elapsedTime': '370019', 'displayType':
'qxl', 'cpuSys': '0.00', 'clientIp': '172.16.11.6', 'vcpuPeriod': 10L}}
Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
ignoring event u'|virt|VM_status|b366e466-b0ea-4a09-866b-d0248d7523a6'
args={u'b366e466-b0ea-4a09-866b-d0248d7523a6': {'status': 'Up',
'displayInfo': [{'tlsPort': '5900', 'ipAddress': '0', 'type': u'spice',
'port': '-1'}], 'hash': '1858968312777883492', 'displayIp': '0',
'displayPort': '-1', 'displaySecurePort': '5900', 'timeOffset': '0',
'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser': '0.00',
'monitorResponse': '0', 'elapsedTime': '453444', 'displayType': 'qxl',
'cpuSys': '0.00', 'clientIp': '', 'vcpuPeriod': 10L}}
Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
ignoring event u'|virt|VM_status|ca2815c5-f815-469d-869d-a8fe1cb8c2e7'
args={u'ca2815c5-f815-469d-869d-a8fe1cb8c2e7': {'status': 'Up',
'displayInfo': [{'tlsPort': '5904', 'ipAddress': '80.239.162.106', 'type':
u'spice', 'port': '-1'}], 'hash': '1149212890076264321', 'displayIp':
'80.239.162.106', 'displayPort': '-1', 'displaySecurePort': '5904',
'timeOffset': u'0', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser':
'0.00', 'monitorResponse': '0', 'elapsedTime': '105160', 'displayType':
'qxl', 'cpuSys': '0.00', 'clientIp': '172.16.11.6', 'vcpuPeriod': 10L}}
Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
ignoring event u'|virt|VM_status|a083da47-3e39-458c-8822-459af3d2d93a'
args={u'a083da47-3e39-458c-8822-459af3d2d93a': {'status': 'Up',
'displayInfo': [{'tlsPort': '5902', 'ipAddress': '80.239.162.106', 'type':
u'spice', 'port': '-1'}], 'hash': '5529949835126538749', 'displayIp':
'80.239.162.106', 'displayPort': '-1', 'displaySecurePort': '5902',
'timeOffset': u'0', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser':
'0.00', 'monitorResponse': '0', 'elapsedTime': '365326', 'displayType':
'qxl', 'cpuSys': '0.00', 'clientIp': '', 'vcpuPeriod': 10L}}
Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
ignoring event u'|virt|VM_status|0b7d02df-0286-4e0e-a50b-1d02915ba81c'
args={u'0b7d02df-0286-4e0e-a50b-1d02915ba81c': {'status': 'Up',
'displayInfo': [{'tlsPort': '5903', 'ipAddress': '80.239.162.106', 'type':
u'spice', 'port': '-1'}], 'hash': '3267121054607612619', 'displayIp':
'80.239.162.106', 'displayPort': '-1', 'displaySecurePort': '5903',
'timeOffset': '-1', 'pauseCode': 'NOERR', 'vcpuQuota': '-1', 'cpuUser':
'0.00', 'monitorResponse': '0', 'elapsedTime': '275708', 'displayType':
'qxl', 'cpuSys': '0.00', 'clientIp': '', 'vcpuPeriod': 10L}}
Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm throttled WARN MOM not
available.
Nov 14 21:01:33 ovirt2.telia.ru vdsm[54971]: vdsm throttled WARN MOM not
available, KSM stats will be missing.
Nov 14 21:01:34 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
ignoring event u'|virt|VM_status|0b7d02df-0286-4e0e-a50b-1d02915ba81c'
args={u'0b7d02df-0286-4e0e-a50b-1d02915ba81c': {'status': 'Up', 'username':
'Unknown', 'memUsage': '36', 'guestFQDN': '', 'memoryStats': {u'swap_out':
'0', u'majflt': '0', u'swap_usage': '0', u'mem_cached': '548192',
u'mem_free': '2679664', u'mem_buffers': '231016', u'swap_in': '0',
u'swap_total': '786428', u'pageflt': '4346', u'mem_total': '3922564',
u'mem_unused': '1900456'}, 'session': 'Unknown', 'netIfaces': [],
'guestCPUCount': -1, 'appsList': (), 'guestIPs': '', 'disksUsage': []}}
Nov 14 21:01:34 ovirt2.telia.ru vdsm[54971]: vdsm vds WARN Not ready yet,
ignoring event u'|virt|VM_status|a083da47-3e39-458c-8822-459af3d2d93a'
args={u'a083da47-3e39-458c-8822-459af3d2d93a': {'status': 'Up', 'username':

Re: [ovirt-users] Non-responsive host, VM's are still running - how to resolve?

2017-11-14 Thread Darrell Budic
Try restarting vdsmd from the shell, “systemctl restart vdsmd”.
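
If the restart doesn't help, the No route to host errors in the quoted log
below point at basic reachability, so checking the route and vdsm's port from
the engine VM is a reasonable next step (a sketch; 54321 is vdsm's default
port):

# ping -c3 ovirt2.telia.ru
# nc -zv ovirt2.telia.ru 54321
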


> From: Artem Tambovskiy 
> Subject: [ovirt-users] Non-responsive host, VM's are still running - how to 
> resolve?
> Date: November 14, 2017 at 11:23:32 AM CST
> To: users
> 
> Apparently, I lost the host which was running hosted-engine and another 4
> VMs, exactly during the migration of the second host from bare metal into
> the cluster. For some reason the first host entered the "Non responsive" state.
> The interesting thing is that hosted-engine and all the other VMs are up and
> running, so it's like a communication problem between hosted-engine and the host.
> 
> The engine.log at hosted-engine is full of following messages:
> 
> 2017-11-14 17:06:43,158Z INFO  
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
> Connecting to ovirt2/80.239.162.106 
> 2017-11-14 17:06:43,159Z ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] 
> (DefaultQuartzScheduler9) [50938c3] Command 'GetAllVmStatsVDSCommand(HostName 
> = ovirt2.telia.ru , 
> VdsIdVDSCommandParametersBase:{runAsync='true', 
> hostId='3970247c-69eb-4bd8-b263-9100703a8243'})' execution failed: 
> java.net.NoRouteToHostException: No route to host
> 2017-11-14 17:06:43,159Z INFO  
> [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] 
> (DefaultQuartzScheduler9) [50938c3] Failed to fetch vms info for host 
> 'ovirt2.telia.ru ' - skipping VMs monitoring.
> 2017-11-14 17:06:45,929Z INFO  
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
> Connecting to ovirt2/80.239.162.106 
> 2017-11-14 17:06:45,930Z ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
> (DefaultQuartzScheduler2) [6080f1cc] Command 
> 'GetCapabilitiesVDSCommand(HostName = ovirt2.telia.ru 
> , 
> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', 
> hostId='3970247c-69eb-4bd8-b263-9100703a8243', vds='Host[ovirt2.telia.ru 
> ,3970247c-69eb-4bd8-b263-9100703a8243]'})' execution 
> failed: java.net.NoRouteToHostException: No route to host
> 2017-11-14 17:06:45,930Z ERROR 
> [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] 
> (DefaultQuartzScheduler2) [6080f1cc] Failure to refresh host 'ovirt2.telia.ru 
> ' runtime info: java.net.NoRouteToHostException: No 
> route to host
> 2017-11-14 17:06:48,933Z INFO  
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
> Connecting to ovirt2/80.239.162.106 
> 2017-11-14 17:06:48,934Z ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] 
> (DefaultQuartzScheduler6) [1a64dfea] Command 
> 'GetCapabilitiesVDSCommand(HostName = ovirt2.telia.ru 
> , 
> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', 
> hostId='3970247c-69eb-4bd8-b263-9100703a8243', vds='Host[ovirt2.telia.ru 
> ,3970247c-69eb-4bd8-b263-9100703a8243]'})' execution 
> failed: java.net.NoRouteToHostException: No route to host
> 2017-11-14 17:06:48,934Z ERROR 
> [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] 
> (DefaultQuartzScheduler6) [1a64dfea] Failure to refresh host 'ovirt2.telia.ru 
> ' runtime info: java.net.NoRouteToHostException: No 
> route to host
> 2017-11-14 17:06:50,931Z INFO  
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
> Connecting to ovirt2/80.239.162.106 
> 2017-11-14 17:06:50,932Z ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] 
> (DefaultQuartzScheduler4) [6b19d168] Command 'SpmStatusVDSCommand(HostName = 
> ovirt2.telia.ru , 
> SpmStatusVDSCommandParameters:{runAsync='true', 
> hostId='3970247c-69eb-4bd8-b263-9100703a8243', 
> storagePoolId='5a044257-02ec-0382-0243-01f2'})' execution failed: 
> java.net.NoRouteToHostException: No route to host
> 2017-11-14 17:06:50,939Z INFO  
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
> Connecting to ovirt2/80.239.162.106 
> 2017-11-14 17:06:50,940Z ERROR 
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
> (DefaultQuartzScheduler4) [6b19d168] IrsBroker::Failed::GetStoragePoolInfoVDS
> 2017-11-14 17:06:50,940Z ERROR 
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetStoragePoolInfoVDSCommand] 
> (DefaultQuartzScheduler4) [6b19d168] Command 'GetStoragePoolInfoVDSCommand( 
> GetStoragePoolInfoVDSCommandParameters:{runAsync='true', 
> storagePoolId='5a044257-02ec-0382-0243-01f2', 
> ignoreFailoverLimit='true'})' execution failed: IRSProtocolException: 
> 2017-11-14 17:06:51,937Z INFO  
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] 
> Connecting 

Re: [ovirt-users] hosted exchange failed to install

2017-11-14 Thread Alan Griffiths
Looks like you need to clean out your storage domain left over from
the previous install attempt. What are you using, gluster, NFS?

On 14 November 2017 at 14:35, Rudi Ahlers  wrote:
> Hi,
>
> Can someone please help?
>
> I installed hosted exchange but specified the wrong interface, and thus
> couldn't access it. So I removed it (yum install) and reinstalled it, and
> re-ran the deploy but got the following error:
>
>  Please confirm installation settings (Yes, No)[Yes]:
> [ INFO  ] Stage: Transaction setup
> [ INFO  ] Stage: Misc configuration
> [ INFO  ] Stage: Package installation
> [ INFO  ] Stage: Misc configuration
> [ INFO  ] Configuring libvirt
> [ INFO  ] Configuring VDSM
> [WARNING] VDSM configuration file not found: creating a new configuration
> file
> [ INFO  ] Starting vdsmd
> [ INFO  ] Creating Storage Domain
> [ ERROR ] Failed to execute stage 'Misc configuration': Storage domain is
> not empty - requires cleaning: (u'srv1:/engine',)
> [ INFO  ] Yum Performing yum transaction rollback
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20171114162130.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue,fix and redeploy
>   Log file is located at
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log
>
>
>
> I am honestly not sure why it would think "this system is not reliable". How
> do I check what is actually wrong?
>
> The log file shows the same error:
>
>
> tail -f
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log
> 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:134 condition
> False
> 2017-11-14 16:21:30 INFO otopi.context context.runSequence:687 Stage:
> Termination
> 2017-11-14 16:21:30 DEBUG otopi.context context.runSequence:691 STAGE
> terminate
> 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
> terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
> 2017-11-14 16:21:30 ERROR otopi.plugins.gr_he_common.core.misc
> misc._terminate:178 Hosted Engine deployment failed: this system is not
> reliable, please check the issue,fix and redeploy
> 2017-11-14 16:21:30 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:SEND Log file is located at
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log
> 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
> terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
> 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
> terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
> 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:134 condition
> False
> 2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
> terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Non-responsive host, VM's are still running - how to resolve?

2017-11-14 Thread Artem Tambovskiy
Apparently, I lost the host which was running the hosted-engine and another 4
VMs exactly during the migration of the second host from bare-metal into the
cluster. For some reason the first host entered the "Non responsive" state.
The interesting thing is that the hosted-engine and all the other VMs are up
and running, so it looks like a communication problem between the
hosted-engine and the host.

The engine.log at hosted-engine is full of following messages:

2017-11-14 17:06:43,158Z INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to ovirt2/80.239.162.106
2017-11-14 17:06:43,159Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler9) [50938c3] Command
'GetAllVmStatsVDSCommand(HostName = ovirt2.telia.ru,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='3970247c-69eb-4bd8-b263-9100703a8243'})' execution failed:
java.net.NoRouteToHostException: No route to host
2017-11-14 17:06:43,159Z INFO
[org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher]
(DefaultQuartzScheduler9) [50938c3] Failed to fetch vms info for host '
ovirt2.telia.ru' - skipping VMs monitoring.
2017-11-14 17:06:45,929Z INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to ovirt2/80.239.162.106
2017-11-14 17:06:45,930Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler2) [6080f1cc] Command
'GetCapabilitiesVDSCommand(HostName = ovirt2.telia.ru,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='3970247c-69eb-4bd8-b263-9100703a8243',
vds='Host[ovirt2.telia.ru,3970247c-69eb-4bd8-b263-9100703a8243]'})'
execution failed: java.net.NoRouteToHostException: No route to host
2017-11-14 17:06:45,930Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler2) [6080f1cc] Failure to refresh host '
ovirt2.telia.ru' runtime info: java.net.NoRouteToHostException: No route to
host
2017-11-14 17:06:48,933Z INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to ovirt2/80.239.162.106
2017-11-14 17:06:48,934Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler6) [1a64dfea] Command
'GetCapabilitiesVDSCommand(HostName = ovirt2.telia.ru,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='3970247c-69eb-4bd8-b263-9100703a8243',
vds='Host[ovirt2.telia.ru,3970247c-69eb-4bd8-b263-9100703a8243]'})'
execution failed: java.net.NoRouteToHostException: No route to host
2017-11-14 17:06:48,934Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler6) [1a64dfea] Failure to refresh host '
ovirt2.telia.ru' runtime info: java.net.NoRouteToHostException: No route to
host
2017-11-14 17:06:50,931Z INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to ovirt2/80.239.162.106
2017-11-14 17:06:50,932Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
(DefaultQuartzScheduler4) [6b19d168] Command 'SpmStatusVDSCommand(HostName
= ovirt2.telia.ru, SpmStatusVDSCommandParameters:{runAsync='true',
hostId='3970247c-69eb-4bd8-b263-9100703a8243',
storagePoolId='5a044257-02ec-0382-0243-01f2'})' execution failed:
java.net.NoRouteToHostException: No route to host
2017-11-14 17:06:50,939Z INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to ovirt2/80.239.162.106
2017-11-14 17:06:50,940Z ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler4) [6b19d168]
IrsBroker::Failed::GetStoragePoolInfoVDS
2017-11-14 17:06:50,940Z ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.GetStoragePoolInfoVDSCommand]
(DefaultQuartzScheduler4) [6b19d168] Command 'GetStoragePoolInfoVDSCommand(
GetStoragePoolInfoVDSCommandParameters:{runAsync='true',
storagePoolId='5a044257-02ec-0382-0243-01f2',
ignoreFailoverLimit='true'})' execution failed: IRSProtocolException:
2017-11-14 17:06:51,937Z INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to ovirt2/80.239.162.106
2017-11-14 17:06:51,938Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler7) [7f23a3bd] Command
'GetCapabilitiesVDSCommand(HostName = ovirt2.telia.ru,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='3970247c-69eb-4bd8-b263-9100703a8243',
vds='Host[ovirt2.telia.ru,3970247c-69eb-4bd8-b263-9100703a8243]'})'
execution failed: java.net.NoRouteToHostException: No route to host
2017-11-14 17:06:51,938Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler7) [7f23a3bd] Failure to refresh host '
ovirt2.telia.ru' runtime info: java.net.NoRouteToHostException: No route to
host
2017-11-14 17:06:54,941Z INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to ovirt2/80.239.162.106
2017-11-14 

Re: [ovirt-users] Issue migrating hard drive to new vm store

2017-11-14 Thread Benny Zlotnik
Can you please provide full vdsm logs (only the engine log is attached) and
the versions of the engine, vdsm, gluster?
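
For example (assuming the default locations):

rpm -q vdsm glusterfs                              # on the host
rpm -q ovirt-engine                                # on the engine machine
tar czf vdsm-logs.tar.gz /var/log/vdsm/vdsm.log*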

On Tue, Nov 14, 2017 at 6:16 PM, Bryan Sockel  wrote:

> Having an issue moving a hard disk from one VM data store to a newly
> created gluster data store.  I can shut down the machine and copy the hard
> drive, detach the old hard drive and attach the new hard drive, but I would
> prefer to keep the VM online when moving the disk.
>
> I have attached a portion of the vdsm.log file.
>
>
>
> Thanks
> Bryan
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Issue migrating hard drive to new vm store

2017-11-14 Thread Bryan Sockel
Having an issue moving a hard disk from one VM data store to a newly 
created gluster data store.  I can shut down the machine and copy the hard 
drive, detach the old hard drive and attach the new hard drive, but I would 
prefer to keep the VM online when moving the disk.

I have attached a portion of the vdsm.log file.



Thanks
Bryan 
2017-11-14 09:43:58,824-06 INFO  
[org.ovirt.engine.core.bll.storage.disk.MoveDisksCommand] (default task-61) 
[bae5243a-a7a1-4f13-8d5e-04132f98b35d] Running command: MoveDisksCommand 
internal: false. Entities affected :  ID: 57b69fdf-93dd-444c-977b-8803fa83507b 
Type: DiskAction group CONFIGURE_DISK_STORAGE with role type USER
2017-11-14 09:43:58,911-06 INFO  
[org.ovirt.engine.core.bll.storage.lsm.LiveMigrateVmDisksCommand] (default 
task-61) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[57b69fdf-93dd-444c-977b-8803fa83507b=DISK]', 
sharedLocks='[7c48c7f7-31d4-4627-a26f-6a239bf92f21=VM]'}'
2017-11-14 09:43:59,061-06 INFO  
[org.ovirt.engine.core.bll.storage.lsm.LiveMigrateVmDisksCommand] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
Running command: LiveMigrateVmDisksCommand internal: false. Entities affected : 
 ID: 57b69fdf-93dd-444c-977b-8803fa83507b Type: DiskAction group 
DISK_LIVE_STORAGE_MIGRATION with role type USER
2017-11-14 09:43:59,136-06 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
Running command: CreateAllSnapshotsFromVmCommand internal: true. Entities 
affected :  ID: 7c48c7f7-31d4-4627-a26f-6a239bf92f21 Type: VMAction group 
MANIPULATE_VM_SNAPSHOTS with role type USER
2017-11-14 09:43:59,162-06 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
Running command: CreateSnapshotCommand internal: true. Entities affected :  ID: 
00000000-0000-0000-0000-000000000000 Type: Storage
2017-11-14 09:43:59,187-06 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
START, CreateSnapshotVDSCommand( 
CreateSnapshotVDSCommandParameters:{runAsync='true', 
storagePoolId='02abeab8-fac8-41a9-bc2d-7fb0eaac0e07', 
ignoreFailoverLimit='false', 
storageDomainId='0e201616-a896-4c4a-90e6-791933ea5393', 
imageGroupId='57b69fdf-93dd-444c-977b-8803fa83507b', 
imageSizeInBytes='107374182400', volumeFormat='COW', 
newImageId='0028c34f-1117-4b06-b9d9-83be9bf263cf', imageType='Sparse', 
newImageDescription='', imageInitialSizeInBytes='0', 
imageId='d2c2e7a9-ae5b-4481-9d8c-185ab5cd8c14', 
sourceImageGroupId='57b69fdf-93dd-444c-977b-8803fa83507b'}), log id: 7d8623d3
2017-11-14 09:43:59,187-06 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] -- 
executeIrsBrokerCommand: calling 'createVolume' with two new parameters: 
description and UUID
2017-11-14 09:44:00,274-06 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
FINISH, CreateSnapshotVDSCommand, return: 0028c34f-1117-4b06-b9d9-83be9bf263cf, 
log id: 7d8623d3
2017-11-14 09:44:00,279-06 INFO  
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 
'edece02c-623b-482e-a5fa-7caad2d15a9f'
2017-11-14 09:44:00,280-06 INFO  
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
CommandMultiAsyncTasks::attachTask: Attaching task 
'ef8cc0ca-ef62-4a92-9564-457d0fa82584' to command 
'edece02c-623b-482e-a5fa-7caad2d15a9f'.
2017-11-14 09:44:00,298-06 INFO  
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
Adding task 'ef8cc0ca-ef62-4a92-9564-457d0fa82584' (Parent Command 
'CreateSnapshot', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't 
started yet..
2017-11-14 09:44:00,382-06 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 
EVENT_ID: USER_CREATE_SNAPSHOT(45), Correlation ID: 
bae5243a-a7a1-4f13-8d5e-04132f98b35d, Job ID: 
b2c0c8f8-57d1-4012-ab4c-e0b7544036a7, Call Stack: null, Custom ID: null, Custom 
Event ID: -1, Message: Snapshot 'Auto-generated for Live Storage Migration' 
creation for VM 'ansible.altn.int' was initiated by admin@internal-authz.
2017-11-14 09:44:00,383-06 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(org.ovirt.thread.pool-6-thread-22) [bae5243a-a7a1-4f13-8d5e-04132f98b35d] 

Re: [ovirt-users] 4.1 engine-iso-uploader / root password glitch

2017-11-14 Thread andreil1
Hi,


Here is with —verbose option.
Please note this is a draft test all-in-one system (everything on one PC, not
a self-hosted engine).
Although it is not recommended, somehow it works.


[root@vm-hostengine vmhosts]# engine-iso-uploader --verbose --ssh-user=root 
--iso-domain=iso upload KNOPPIX_V8.1-2017-09-05-EN.iso 
Please provide the REST API password for the admin@internal oVirt Engine user 
(CTRL+D to abort): 
DEBUG: API Vendor(ovirt.org)API Version(4.1.0)
DEBUG: id=0f70d19b-458d-4b39-93ca-5bf2cbdc9aa1 address=vm-hostengine.com 
path=/vmhosts/iso
Uploading, please wait...
INFO: Start uploading KNOPPIX_V8.1-2017-09-05-EN.iso 
DEBUG: file (KNOPPIX_V8.1-2017-09-05-EN.iso)
DEBUG: /usr/bin/ssh -p 22  r...@vm-hostengine.com "/usr/bin/test -e 
/vmhosts/iso/0f70d19b-458d-4b39-93ca-5bf2cbdc9aa1/images/11111111-1111-1111-1111-111111111111/KNOPPIX_V8.1-2017-09-05-EN.iso"
DEBUG: /usr/bin/ssh -p 22  r...@vm-hostengine.com "/usr/bin/test -e 
/vmhosts/iso/0f70d19b-458d-4b39-93ca-5bf2cbdc9aa1/images/11111111-1111-1111-1111-111111111111/KNOPPIX_V8.1-2017-09-05-EN.iso"
DEBUG: _cmds(['/usr/bin/ssh', '-p', '22', 'r...@vm-hostengine.com', 
'/usr/bin/test -e 
/vmhosts/iso/0f70d19b-458d-4b39-93ca-5bf2cbdc9aa1/images/11111111-1111-1111-1111-111111111111/KNOPPIX_V8.1-2017-09-05-EN.iso'])
r...@vm-hostengine.com's password: 
DEBUG: returncode(1)
DEBUG: STDOUT()
DEBUG: STDERR()
DEBUG: exists returning false
DEBUG: Mount point size test command is (/usr/bin/ssh -p 22  
r...@vm-hostengine.com "/usr/bin/python -c 'import os; dir_stat = 
os.statvfs(\"/vmhosts/iso\"); print (dir_stat.f_bavail * dir_stat.f_frsize)'" )
DEBUG: /usr/bin/ssh -p 22  r...@vm-hostengine.com "/usr/bin/python -c 'import 
os; dir_stat = os.statvfs(\"/vmhosts/iso\"); print (dir_stat.f_bavail * 
dir_stat.f_frsize)'" 
DEBUG: _cmds(['/usr/bin/ssh', '-p', '22', 'r...@vm-hostengine.com', 
'/usr/bin/python -c \'import os; dir_stat = os.statvfs("/vmhosts/iso"); print 
(dir_stat.f_bavail * dir_stat.f_frsize)\''])
r...@vm-hostengine.com's password: 
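
In the meantime I may just copy the file manually as suggested below, roughly
like this (the path is taken from the debug output above, ownership as per
the advice quoted below):

cp KNOPPIX_V8.1-2017-09-05-EN.iso /vmhosts/iso/0f70d19b-458d-4b39-93ca-5bf2cbdc9aa1/images/11111111-1111-1111-1111-111111111111/
chown 36:36 /vmhosts/iso/0f70d19b-458d-4b39-93ca-5bf2cbdc9aa1/images/11111111-1111-1111-1111-111111111111/KNOPPIX_V8.1-2017-09-05-EN.iso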



> On 14 Nov 2017, at 09:41, Yedidyah Bar David  wrote:
> 
> On Mon, Nov 13, 2017 at 11:47 PM, andre...@starlett.lv
>  wrote:
>> Hi,
>> 
>> Here are logs:
>> 
>> engine-iso-uploader list
>> Please provide the REST API password for the admin@internal oVirt Engine
>> user (CTRL+D to abort):
>> ISO Storage Domain Name   | ISO Domain Status
>> iso   | ok
>> 
>> engine-iso-uploader --ssh-user=root --iso-domain=iso upload
> 
> Can you please run it with --verbose? Thanks.
> 
>> /vmhosts/virtio-win.iso
>> Please provide the REST API password for the admin@internal oVirt Engine
>> user (CTRL+D to abort):
>> Uploading, please wait...
>> INFO: Start uploading /vmhosts/virtio-win.iso
>> r...@vm-hostengine.com's password:
>> r...@vm-hostengine.com's password: (endless)
>> 
>> iso-uploader log:
>> 2017-11-13 23:38:16::INFO::engine-iso-uploader::1033::root:: Start
>> uploading /vmhosts/virtio-win.iso
>> ... and nothing else
>> 
>> tail -n 5000 messages | grep ssh
>> Gives nothing useful except
>> vm-hostengine ovirt-vmconsole-proxy-sshd:
>> /usr/share/ovirt-vmconsole/ovirt-vmconsole-proxy/ovirt-vmconsole-proxy-sshd/sshd_config
>> line 22: Deprecated option RSAAuthentication
>> which has no relation to the upload.
>> 
>> 
>> On 11/13/2017 09:54 AM, Yedidyah Bar David wrote:
>>> On Mon, Nov 13, 2017 at 8:57 AM, andre...@starlett.lv
>>>  wrote:
 On 11/13/2017 05:36 AM, Yihui Zhao wrote:
 
 can you try the admin password?
 
 
 already did, same result.
 
 On Mon, Nov 13, 2017 at 3:10 AM, andre...@starlett.lv 
 
 wrote:
> Hi,
> 
> I'm trying to upload an iso with this command.
> engine-iso-uploader --ssh-user=root --iso-domain=iso upload suse.iso
> 
> Please provide the REST API password for the admin@internal oVirt Engine
> user (CTRL+D to abort):
> This go OK.
> 
> However, then it asks for the root password; I enter it, then it asks again
> and again. The root password is correct for sure, because I can connect via
> ssh from a terminal.
> 
> How to fix this problem?
>>> Can you please share the log? Thanks.
>>> 
> May be its possible just to copy files manually?
>>> It is. Locate the iso domain on your storage server.
>>> Inside it you'll find a directory whose name is a random
>>> uuid, inside it 'images', and inside it a directory named:
>>> '11111111-1111-1111-1111-111111111111'.
>>> You can put your iso files inside that one. Make sure they
>>> are readable by user:group 36:36.
>>> 
> Thanks in advance.
> Andrei
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
>>> 
>>> 

Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2

2017-11-14 Thread Bryan Sockel
Hrm, not sure what I am doing wrong then; it does not seem to be working for
me. I am not using the hosted engine, but a direct install on a physical
server. I thought I had enabled support for libgfapi with this command:

# engine-config -g LibgfApiSupported
LibgfApiSupported: false version: 3.6
LibgfApiSupported: false version: 4.0
LibgfApiSupported: true version: 4.1
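
(the value itself was set beforehand with something along the lines of
"engine-config -s LibgfApiSupported=true --cver=4.1")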

Then I restarted the engine, shut down the VM completely and started it back
up a short time later.

I am using this command to check:
 ps ax | grep qemu | grep 'file=gluster\|file=/rhev'

Output is 
 file=gluster://10.20.102.181/gl-vm12/

Thanks
Bryan 
-Original Message-
From: Kasturi Narra 
To: Bryan Sockel 
Cc: Alessandro De Salvo , users 

Date: Tue, 14 Nov 2017 12:56:49 +0530
Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2

yes, it  does work with 4.1.7.6 version

On Tue, Nov 14, 2017 at 4:49 AM, Bryan Sockel  wrote:
Is libgfapi supposed to be working in 4.1.7.6?
Bryan 
-Original Message-
From: Alessandro De Salvo 
To: users@ovirt.org
Date: Thu, 9 Nov 2017 09:35:01 +0100
Subject: Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2
 
Hi again,
OK, tried to stop all the vms, except the engine, set engine-config -s 
LibgfApiSupported=true (for 4.2 only) and restarted the engine.
When I tried restarting the VMs they are still not using gfapi, so it does 
not seem to help.
Cheers,

Alessandro


On 09/11/17 09:12, Alessandro De Salvo wrote:

Hi,
where should I enable gfapi via the UI?
The only command I tried was engine-config -s LibgfApiSupported=true but the 
result is what is shown in my output below, so it’s set to true for v4.2. 
Is it enough?
I’ll try restarting the engine. Is it really needed to stop all the VMs 
and restart them all? Of course this is a test setup and I can do it, but 
for production clusters in the future it may be a problem.
Thanks,

   Alessandro

On 09 Nov 2017, at 07:23, Kasturi Narra  wrote:

Hi ,

The procedure to enable gfapi is below.

1) stop all the vms running
2) Enable gfapi via UI or using engine-config command
3) Restart ovirt-engine service
4) start the vms.

Hope you have not missed any !!

Thanks
kasturi 

On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo 
 wrote:
Hi,

I'm using the latest 4.2 beta release and want to try the gfapi access, but 
I'm currently failing to use it.

My test setup has an external glusterfs cluster v3.12, not managed by oVirt.

The compatibility flag is correctly showing gfapi should be enabled with 
4.2:

# engine-config -g LibgfApiSupported
LibgfApiSupported: false version: 3.6
LibgfApiSupported: false version: 4.0
LibgfApiSupported: false version: 4.1
LibgfApiSupported: true version: 4.2

The data center and cluster have the 4.2 compatibility flags as well.

However, when starting a VM with a disk on gluster I can still see the disk 
is mounted via fuse.

Any clue of what I'm still missing?

Thanks,


   Alessandro

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Trouble Connecting ISCSI Storage to Hosted Engine VM

2017-11-14 Thread Kyle Conti
Sending this again in case I sent this prior to being fully setup as an 
ovirt-user subscriber (received confirmation after I sent). If you did receive 
this already, my apologies: 

Hello, 

I'm brand new to oVirt and trying to get my Hosted Engine setup configured
with iSCSI storage. I have ~8TB usable storage available on an LVM partition.
This storage is on the same server that is hosting the oVirt engine virtual
machine. After I use the discovery/sendtargets command via the CentOS 7 engine
VM, it shows the correct IQN. When I use oVirt's storage discovery in the GUI,
I can see the storage IQN just fine as well, but when I try to connect to it,
I get the following:

"Error while executing action: Failed to login to iSCSI node due to 
authorization failure" 

Is NFS recommended instead when trying to connect the storage from server host 
to Ovirt Engine VM? There is nothing in this storage domain yet. This is a 
brand new setup. 

One other thing to note...I have iscsi storage working with a NAS for my ISO 
storage domain. I don't want to use the NAS for the virtual machine storage 
domain. What's so different about the Ovirt Engine vm? 

Any help would be much appreciated. Please let me know If I'm taking the wrong 
approach here, or I'm trying to do something that this system is not meant to 
do. 

Regards, 

KC 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted engine failed to install

2017-11-14 Thread Rudi Ahlers
Hi,

Can someone please help?

I installed hosted engine but specified the wrong interface, and thus
couldn't access it. So I removed it (yum remove) and reinstalled it, and
re-ran the deploy but got the following error:

 Please confirm installation settings (Yes, No)[Yes]:
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[WARNING] VDSM configuration file not found: creating a new configuration
file
[ INFO  ] Starting vdsmd
[ INFO  ] Creating Storage Domain
[ ERROR ] Failed to execute stage 'Misc configuration': Storage domain is
not empty - requires cleaning: (u'srv1:/engine',)
[ INFO  ] Yum Performing yum transaction rollback
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20171114162130.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please check the issue,fix and redeploy
  Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log



I am honestly not sure why it would think "this system is not reliable".
How do I check what is actually wrong?

The log file shows the same error:


tail -f
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log
2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:134
condition False
2017-11-14 16:21:30 INFO otopi.context context.runSequence:687 Stage:
Termination
2017-11-14 16:21:30 DEBUG otopi.context context.runSequence:691 STAGE
terminate
2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
2017-11-14 16:21:30 ERROR otopi.plugins.gr_he_common.core.misc
misc._terminate:178 Hosted Engine deployment failed: this system is not
reliable, please check the issue,fix and redeploy
2017-11-14 16:21:30 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:SEND Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171114161520-km3qok.log
2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:134
condition False
2017-11-14 16:21:30 DEBUG otopi.context context._executeMethod:128 Stage
terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate


-- 
Kind Regards
Rudi Ahlers
Website: http://www.rudiahlers.co.za
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CIFS Share

2017-11-14 Thread Arthur Melo
Great answer. Thanks!

Best regards,
Arthur Melo
Linux User #302250


2017-11-12 6:23 GMT-02:00 Yaniv Kaul :

>
>
> On Thu, Nov 9, 2017 at 8:28 PM, Arthur Melo  wrote:
>
>> Is it possible to mount an export share using CIFS?
>>
>
> We generally support any POSIX compliant file system which also supports
> Direct IO for the data domain - I'm not sure if CIFS in general and the
> specific implementation you use support both.
> If it does, it should work for the data domain - which you can detach and
> attach between environments.
> Export domain functionality is really done with NFS.
>
> You could also upload and download disks via your browser and place the
> disks on a CIFS share.
> Y.
>
>
>>
>> Best regards,
>> Arthur Melo
>> Linux User #302250
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to clean stuck task

2017-11-14 Thread Gianluca Cecchi
In the meantime, as I had to give an answer for the snapshotted VM, I
decided to follow one of the suggestions to run engine-setup and so also to
pass my engine from 4.1.6 to 4.1.7.
And indeed the 2 stale tasks have been cleaned.
The lock symbol has gone away from the apex VM too.

Probably the steps solving the problems were these during engine-setup:

[ INFO  ] Cleaning async tasks and compensations
[ INFO  ] Unlocking existing entities

Does this mean that in general I can also run engine-setup without
upgrading at all? Is the cleanup part also run in that case, or only during
actual upgrades?

I initiated a clone of the taken snapshot on apex VM and it seems to go
correctly and in task pane I see only that task and no more.

In SPM now I have indeed

[root@ov300 ~]# vdsClient -s 0 getAllTasksStatuses
{'status': {'message': 'OK', 'code': 0}, 'allTasksStatus':
{'20fa401f-b6f8-43f5-b0fd-6767d46e2335': {'message': 'running job 1 of 1',
'code': 0, 'taskID': '20fa401f-b6f8-43f5-b0fd-6767d46e2335', 'taskResult':
'', 'taskState': 'running'}}}

[root@ov300 ~]#

It should take about half an hour to complete and I will see.

Anyway, in my opinion it would be nice to have some more in-depth
documentation about how to run taskcleaner.sh, or simply officially say to
leave it to developers if this is the intended case (or to Red Hat support
in case of RHEV usage)

Cheers,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error during SSO authentication Cannot authenticate user 'admin@internal'

2017-11-14 Thread Dominik Holler
Can you connect to http://hostname:8080/ovirt-engine/api/ using these
credentials?
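
For example, something along these lines (hostname and password replaced with
yours):

curl -u 'admin@internal:PASSWORD' http://hostname:8080/ovirt-engine/api/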

Even if the already posted stacktrace looks as expected, maybe you can share
your /etc/ovirt-provider-ovn (without the ovirt-sso-client-secret, which
seems to be correct)?

Thanks,
Dominik

On Tue, 14 Nov 2017 09:51:27 +0100
Martin Perina  wrote:

> On Tue, Nov 14, 2017 at 12:44 AM, Sverker Abrahamsson <
> sver...@abrahamsson.com> wrote:  
> 
> > Since upgrading my test lab to ovirt 4.2 I can't get
> > ovirt-provider-ovn to work. From ovirt-provider-ovn.log:
> >
> > 2017-11-14 00:40:15,795   Request: POST : /v2.0///tokens
> > 2017-11-14 00:40:15,795   Request body:
> > {
> >   "auth" : {
> > "passwordCredentials" : {
> >   "username" : "admin@internal",
> >   "password" : "x"
> > }
> >   }
> > }
> > 2017-11-14 00:40:15,819   Starting new HTTPS connection (1): h2-int
> > 2017-11-14 00:40:20,829   "POST /ovirt-engine/sso/oauth/token
> > HTTP/1.1" 400 118
> > 2017-11-14 00:40:20,830   Error during SSO authentication Cannot
> > authenticate user 'admin@internal': The username or password is
> > incorrect.. : access_deniedNone
> > Traceback (most recent call last):
> >   File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py",
> > line 119, in _handle_request
> > method, path_parts, content)
> >   File
> > "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
> > 177, in handle_request handler, content, parameters
> >   File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line
> > 28, in call_response_handler
> > return response_handler(content, parameters)
> >   File
> > "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
> > line 58, in post_tokens user_password=user_password)
> >   File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line
> > 26, in create_token
> > return auth.core.plugin.create_token(user_at_domain,
> > user_password) File
> > "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
> > 48, in create_token timeout=self._timeout())
> >   File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py",
> > line 62, in create_token
> > username, password, engine_url, ca_file, timeout)
> >   File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py",
> > line 54, in wrapper
> > _check_for_error(response)
> >   File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py",
> > line 168, in _check_for_error
> > result['error'], details))
> > Unauthorized: Error during SSO authentication Cannot authenticate
> > user 'admin@internal': The username or password is incorrect.. :
> > access_deniedNone
> >
> > And in engine.log:
> >
> > 2017-11-14 00:40:20,828+01 ERROR
> > [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-16) []
> > OAuthException access_denied: Cannot authenticate user
> > 'admin@internal': The username or password is incorrect.. 
> 
> Could you please provide full engine logs so we can investigate?
> 
> Thanks
> 
> Martin
> 
> >
> > The password in the request is the same as used to log in to the
> > admin portal and works fine there.
> >
> > /Sverker
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >  
> 
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host Power Management Configuration questions

2017-11-14 Thread Martin Perina
On Tue, Nov 14, 2017 at 10:06 AM, Gianluca Cecchi  wrote:

> On Tue, Nov 14, 2017 at 9:33 AM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Trying to configure power management for a certain host and fence agent
>> always fail when I'm pressing Test button.
>>
>>  At the same time from command line on the same host all looks good:
>>
>> [root@ovirt ~]# fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -v
>> -P
>> Executing: /usr/bin/ipmitool -I lanplus -H 172.16.22.1 -p 623 -U user -P
>> pwd -L ADMINISTRATOR chassis power status
>>
>> 0 Chassis Power is on
>>
>> Status: ON
>> [root@ovirt ~]#
>>
>> What could be the reason?
>>
>> Regards,
>> Artem
>>
>>
>>
>>
> What do you put in the options line of the configuration window for power
> mgmt when you test?
>
> In my case with Dell M610 blades, it works with ipmilan agent and setting
> this
>
> privlvl=operator,lanplus=on
>
> I think in your case you need at least "lanplus=on", which shouldn't be the
> default of the executed command.
>

lanplus=1 is the default for the ipmilan fence agent, no need to specify it
in Options.


>
> I see that your command line seems to expand in "-L ADMINISTRATOR".
> In my case in iDRAC for the blade I have configured my dedicate fencing
> user with privilege capped to operator
>
> HIH,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host Power Management Configuration questions

2017-11-14 Thread Martin Perina
On Tue, Nov 14, 2017 at 10:10 AM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:

> Hi,
>
> In the engine.log appears following:
>
> 2017-11-14 12:04:33,081+03 ERROR 
> [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
> (default task-184) [32fe1ce0-2e25-4e2e-a6bf-59f39a65b2f1] Can not run
> fence action on host 'ovirt.prod.env', no suitable proxy host was found.
>

The message above is the important one: you don't have any other host which
could execute the fence action. You need to have at least one other host in
status Up to be able to perform a fence action.


> 2017-11-14 12:04:36,534+03 INFO  
> [org.ovirt.engine.core.bll.hostdeploy.UpdateVdsCommand]
> (default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Running
> command: UpdateVdsCommand internal: false. Entities affected :  ID:
> a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d Type: VDSAction group
> EDIT_HOST_CONFIGURATION with role type ADMIN
> 2017-11-14 12:04:36,704+03 ERROR 
> [org.ovirt.engine.core.bll.pm.FenceProxyLocator]
> (default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Can not run
> fence action on host 'ovirt.prod.env', no suitable proxy host was found.
> 2017-11-14 12:04:36,705+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogableBase] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
> 'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error: null
> 2017-11-14 12:04:36,705+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogableBase] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
> 'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error: null
> 2017-11-14 12:04:36,705+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogableBase] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
> 'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error null
> 2017-11-14 12:04:36,705+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogableBase] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
> 'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error null
> 2017-11-14 12:04:36,720+03 WARN  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] EVENT_ID:
> VDS_ALERT_PM_HEALTH_CHECK_START_MIGHT_FAIL(9,010), Correlation ID: null,
> Call Stack: null, Custom Event ID: -1, Message: Health check on Host
>  indicates that future attempts to Start this host using
> Power-Management are expected to fail.
> 2017-11-14 12:04:36,720+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogableBase] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
> 'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error: null
> 2017-11-14 12:04:36,720+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogableBase] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
> 'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error: null
> 2017-11-14 12:04:36,720+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogableBase] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
> 'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error null
> 2017-11-14 12:04:36,720+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogableBase] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
> 'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error null
> 2017-11-14 12:04:36,731+03 WARN  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] EVENT_ID:
> VDS_ALERT_PM_HEALTH_CHECK_STOP_MIGHT_FAIL(9,011), Correlation ID: null,
> Call Stack: null, Custom Event ID: -1, Message: Health check on Host
>  indicates that future attempts to Stop this host using
> Power-Management are expected to fail.
> 2017-11-14 12:04:36,765+03 WARN  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] EVENT_ID: 
> KDUMP_DETECTION_NOT_CONFIGURED_ON_VDS(617),
> Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Kdump
> integration is enabled for host ovirt.prod.env, but kdump is not configured
> properly on host.
> 2017-11-14 12:04:36,781+03 INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (default task-186)
> [d83ce46d-ce89-4804-aba1-761103e93e8c] EVENT_ID: USER_UPDATE_VDS(43),
> Correlation ID: d83ce46d-ce89-4804-aba1-761103e93e8c, Call Stack: null,
> Custom Event ID: -1, Message: Host ovirt.prod.env configuration was updated
> by arta00@internal-authz.
>
> Just let me know if more logs are needed.
>
> Regards,
> Artem
>
> On Tue, Nov 14, 2017 at 11:52 AM, Martin Perina 
> wrote:
>
>> Hi,
>>
>> could you please provide engine logs so we can investigate?
>>
>> Thanks
>>
>> Martin
>>
>>
>> On Tue, Nov 14, 2017 at 9:33 AM, Artem 

[ovirt-users] Transfer from one Storage to Other is very slow

2017-11-14 Thread Jon bae
Hello everybody,
I have a node where I installed NFS storage, and I have a second NFS network
storage. My node has a bond (mode 4) with two NICs and the other storage has
a bond with 4 NICs.

My oVirt engine runs as a VM on my network storage.

The bond on the node side is relatively new; before I had this setup, the
speed was good. But at the same time I also moved my oVirt engine from a
third server to the network storage.

My problem is now, when I move a VM disk from the network storage to the
storage on the node, I get very poor speed. A VM with 140GB takes more
than an hour to transfer.

When I make speed tests with iperf3 I get these speeds:
- from oVirt to network storage: 20Gbits/s
- from network storage to node storage: 9.5Gbits/s
- from oVirt to node storage: 9.5Gbits/s
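
(iperf3 run in the usual way, i.e. iperf3 -s on the receiving side and
iperf3 -c <receiver> on the sending side.)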

When I transfer a VM disk, iftop shows almost no traffic.

Do you have an idea what is happening here?

Regards
Jonathan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host Power Management Configuration questions

2017-11-14 Thread Artem Tambovskiy
Hi,

In the engine.log appears following:

2017-11-14 12:04:33,081+03 ERROR
[org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-184)
[32fe1ce0-2e25-4e2e-a6bf-59f39a65b2f1] Can not run fence action on host
'ovirt.prod.env', no suitable proxy host was found.
2017-11-14 12:04:36,534+03 INFO
 [org.ovirt.engine.core.bll.hostdeploy.UpdateVdsCommand] (default task-186)
[d83ce46d-ce89-4804-aba1-761103e93e8c] Running command: UpdateVdsCommand
internal: false. Entities affected :  ID:
a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d Type: VDSAction group
EDIT_HOST_CONFIGURATION with role type ADMIN
2017-11-14 12:04:36,704+03 ERROR
[org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-186)
[d83ce46d-ce89-4804-aba1-761103e93e8c] Can not run fence action on host
'ovirt.prod.env', no suitable proxy host was found.
2017-11-14 12:04:36,705+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error: null
2017-11-14 12:04:36,705+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error: null
2017-11-14 12:04:36,705+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error null
2017-11-14 12:04:36,705+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error null
2017-11-14 12:04:36,720+03 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] EVENT_ID:
VDS_ALERT_PM_HEALTH_CHECK_START_MIGHT_FAIL(9,010), Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: Health check on Host
 indicates that future attempts to Start this host using
Power-Management are expected to fail.
2017-11-14 12:04:36,720+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error: null
2017-11-14 12:04:36,720+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error: null
2017-11-14 12:04:36,720+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error null
2017-11-14 12:04:36,720+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] Failed to get vds
'a9bb1c6f-b9c9-4dc3-a24e-b83b2004552d', error null
2017-11-14 12:04:36,731+03 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] EVENT_ID:
VDS_ALERT_PM_HEALTH_CHECK_STOP_MIGHT_FAIL(9,011), Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: Health check on Host
 indicates that future attempts to Stop this host using
Power-Management are expected to fail.
2017-11-14 12:04:36,765+03 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] EVENT_ID:
KDUMP_DETECTION_NOT_CONFIGURED_ON_VDS(617), Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: Kdump integration is enabled for
host ovirt.prod.env, but kdump is not configured properly on host.
2017-11-14 12:04:36,781+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-186) [d83ce46d-ce89-4804-aba1-761103e93e8c] EVENT_ID:
USER_UPDATE_VDS(43), Correlation ID: d83ce46d-ce89-4804-aba1-761103e93e8c,
Call Stack: null, Custom Event ID: -1, Message: Host ovirt.prod.env
configuration was updated by arta00@internal-authz.

Just let me know if more logs are needed.

Regards,
Artem

On Tue, Nov 14, 2017 at 11:52 AM, Martin Perina  wrote:

> Hi,
>
> could you please provide engine logs so we can investigate?
>
> Thanks
>
> Martin
>
>
> On Tue, Nov 14, 2017 at 9:33 AM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Trying to configure power management for a certain host and fence agent
>> always fail when I'm pressing Test button.
>>
>>  At the same time from command line on the same host all looks good:
>>
>> [root@ovirt ~]# fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -v
>> -P
>> Executing: /usr/bin/ipmitool -I lanplus -H 172.16.22.1 -p 623 -U user -P
>> pwd -L ADMINISTRATOR chassis power status
>>
>> 0 Chassis Power is on

Re: [ovirt-users] Host Power Management Configuration questions

2017-11-14 Thread Gianluca Cecchi
On Tue, Nov 14, 2017 at 9:33 AM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:

> Trying to configure power management for a certain host and fence agent
> always fail when I'm pressing Test button.
>
>  At the same time from command line on the same host all looks good:
>
> [root@ovirt ~]# fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -v
> -P
> Executing: /usr/bin/ipmitool -I lanplus -H 172.16.22.1 -p 623 -U user -P
> pwd -L ADMINISTRATOR chassis power status
>
> 0 Chassis Power is on
>
> Status: ON
> [root@ovirt ~]#
>
> What could be the reason?
>
> Regards,
> Artem
>
>
>
>
What do you put in the options line of the configuration window for power
mgmt when you test?

In my case with Dell M610 blades, it works with ipmilan agent and setting
this

privlvl=operator,lanplus=on

I think in your case you need at least "lanplus=on", which shouldn't be the
default of the executed command.

I see that your command line seems to expand to "-L ADMINISTRATOR".
In my case, in the iDRAC for the blade, I have configured my dedicated
fencing user with privilege capped to operator.
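
For comparison, testing with those options from the shell would be roughly
(address and credentials as in your earlier example):

fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -P -L operator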

HIH,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host Power Management Configuration questions

2017-11-14 Thread Martin Perina
Hi,

could you please provide engine logs so we can investigate?

Thanks

Martin


On Tue, Nov 14, 2017 at 9:33 AM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:

> Trying to configure power management for a certain host and fence agent
> always fail when I'm pressing Test button.
>
>  At the same time from command line on the same host all looks good:
>
> [root@ovirt ~]# fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -v
> -P
> Executing: /usr/bin/ipmitool -I lanplus -H 172.16.22.1 -p 623 -U user -P
> pwd -L ADMINISTRATOR chassis power status
>
> 0 Chassis Power is on
>
> Status: ON
> [root@ovirt ~]#
>
> What could be the reason?
>
> Regards,
> Artem
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Issue with hosted engine restore.

2017-11-14 Thread Krzysztof Wajda
Hi David,

Thanks for the reply. I have an oVirt env which is in very bad shape, and as
a new employee I have to fix it :). I have 5 hosts in the whole env. HA for
the hosted engine is broken, so the engine can only run on host1. I can't add
another host because it shows me that it was deployed with version 3.5 (if
I'm not wrong, that's fixed in 4.0). Also, I can't update/upgrade
ovirt-engine because there is only 500MB of free space (after cleaning up)
and no LVM, so I'm afraid that I'll run out of space during the update.
Because of that I decided to add a completely new server and migrate the
hosted-engine to a fixed HE (with LVM) and properly configured HA on the
new host.

Below short summary:

Hosted engine:

CentOS Linux release 7.2.1511 (Core)

ovirt-engine-3.6.7.5-1.el7.centos.noarch

Host0: runs the Hosted-Engine, which I need to update/upgrade

CentOS Linux release 7.2.1511 (Core)

ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-release36-007-1.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-image-uploader-3.6.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
ovirt-engine-appliance-3.6-20160301.1.el7.centos.noarch

vdsm-jsonrpc-4.17.23.2-0.el7.centos.noarch
vdsm-yajsonrpc-4.17.23.2-0.el7.centos.noarch
vdsm-4.17.23.2-0.el7.centos.noarch
vdsm-python-4.17.23.2-0.el7.centos.noarch
vdsm-infra-4.17.23.2-0.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.23.2-0.el7.centos.noarch
vdsm-xmlrpc-4.17.23.2-0.el7.centos.noarch
vdsm-cli-4.17.23.2-0.el7.centos.noarch

Output from hosted-engine --vm-status from host1

--== Host 1 status ==--

Status up-to-date  : True
Hostname   : dev-ovirtnode0.example.com
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : d7fdf8b6
Host timestamp : 1243846


--== Host 2 status ==--    <- this is garbage because HA is not
installed and configured on host2

Status up-to-date  : False
Hostname   : dev-ovirtnode1.example.com
Host ID: 2
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : fb5f379e
Host timestamp : 563


The remaining hosts 1-4 are updated and configured in the same way (that was
done by me). I had to replace network cards and now there is LACP on 4x10G
cards (before there was only a 1G card).
Because there is CentOS 7.4 I decided to install vdsm version 4.17.43
(from the repo) to fix bugs. I am aware that 3.6 is only supported with
version 7.2, but I want to update the whole env to 3.6.x, then to 4.0, then
to 4.1 to be up to date.

vdsm-jsonrpc-4.17.43-1.el7.centos.noarch
vdsm-xmlrpc-4.17.43-1.el7.centos.noarch
vdsm-4.17.43-1.el7.centos.noarch
vdsm-infra-4.17.43-1.el7.centos.noarch
vdsm-yajsonrpc-4.17.43-1.el7.centos.noarch
vdsm-cli-4.17.43-1.el7.centos.noarch
vdsm-python-4.17.43-1.el7.centos.noarch
vdsm-hook-vmfex-dev-4.17.43-1.el7.centos.noarch

On hosts 1-4 I have around 400 VMs used by developers, and I need to shorten
the downtime as much as possible (the best option is without downtime, but
I'm not sure if that's possible). I decided to restore HE on a completely new
host because I believe that in my case it's the easiest way to update and
then upgrade the whole env :)
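
For the engine itself the plan is the usual engine-backup flow, roughly like
this (the restore side usually needs extra database options, see the docs):

engine-backup --mode=backup --scope=all --file=engine.backup --log=backup.log
engine-backup --mode=restore --file=engine.backup --log=restore.log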

Many thanks for all the advice

Regards

Krzysztof



2017-11-14 8:50 GMT+01:00 Yedidyah Bar David :

> On Mon, Nov 13, 2017 at 11:58 PM, Krzysztof Wajda 
> wrote:
> > Hello,
> >
> > I have to restore Hosted Engine on another host (completely new
> hardware).
> > Based on this
> > https://www.ovirt.org/documentation/self-hosted/
> chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/
> > it is not clear to me if VMs will be rebooted during synchronization of
> > the hosts with the engine?
>
> They should not be rebooted automatically, but you might need to do
> this yourself, see below.
>
> >
> > I have 5 hosts + 1 completely fresh. On host1 I have HE and there is no
> vm's
> > on other 4 (host1-4) there are around 400 vm which can't be rebooted.
> Host5
> > for restore HE.
>
> Please provide more details about your backup/restore flow.
> What died (storage? hosts? data?), what are you going to restore,
> how, etc.
>
> Which hosts are hosted-engine hosts. Do they have running VMs.
>
> We are working on updating the documentation, but it will take some time.
>
> For now, you should assume that the 

Re: [ovirt-users] Error during SSO authentication Cannot authenticate user 'admin@internal'

2017-11-14 Thread Martin Perina
On Tue, Nov 14, 2017 at 12:44 AM, Sverker Abrahamsson <
sver...@abrahamsson.com> wrote:

> Since upgrading my test lab to ovirt 4.2 I can't get ovirt-provider-ovn to
> work. From ovirt-provider-ovn.log:
>
> 2017-11-14 00:40:15,795   Request: POST : /v2.0///tokens
> 2017-11-14 00:40:15,795   Request body:
> {
>   "auth" : {
> "passwordCredentials" : {
>   "username" : "admin@internal",
>   "password" : "x"
> }
>   }
> }
> 2017-11-14 00:40:15,819   Starting new HTTPS connection (1): h2-int
> 2017-11-14 00:40:20,829   "POST /ovirt-engine/sso/oauth/token HTTP/1.1"
> 400 118
> 2017-11-14 00:40:20,830   Error during SSO authentication Cannot
> authenticate user 'admin@internal': The username or password is
> incorrect.. : access_deniedNone
> Traceback (most recent call last):
>   File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line
> 119, in _handle_request
> method, path_parts, content)
>   File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py",
> line 177, in handle_request
> handler, content, parameters
>   File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 28, in
> call_response_handler
> return response_handler(content, parameters)
>   File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
> line 58, in post_tokens
> user_password=user_password)
>   File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
> create_token
> return auth.core.plugin.create_token(user_at_domain, user_password)
>   File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
> 48, in create_token
> timeout=self._timeout())
>   File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
> 62, in create_token
> username, password, engine_url, ca_file, timeout)
>   File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
> 54, in wrapper
> _check_for_error(response)
>   File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
> 168, in _check_for_error
> result['error'], details))
> Unauthorized: Error during SSO authentication Cannot authenticate user
> 'admin@internal': The username or password is incorrect.. :
> access_deniedNone
>
> And in engine.log:
>
> 2017-11-14 00:40:20,828+01 ERROR [org.ovirt.engine.core.sso.utils.SsoUtils]
> (default task-16) [] OAuthException access_denied: Cannot authenticate user
> 'admin@internal': The username or password is incorrect..
>

Could you please provide full engine logs so we can investigate?

Thanks

Martin

>
> The password in the request is the same as used to log in to the admin
> portal and works fine there.
>
> /Sverker
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host Power Management Configuration questions

2017-11-14 Thread Artem Tambovskiy
Trying to configure power management for a certain host and fence agent
always fail when I'm pressing Test button.

 At the same time from command line on the same host all looks good:

[root@ovirt ~]# fence_ipmilan -a 172.16.22.1 -l user -p pwd -o status -v -P
Executing: /usr/bin/ipmitool -I lanplus -H 172.16.22.1 -p 623 -U user -P
pwd -L ADMINISTRATOR chassis power status

0 Chassis Power is on

Status: ON
[root@ovirt ~]#

What could be the reason?

Regards,
Artem
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] recommendations for best performance and reliability

2017-11-14 Thread Sahina Bose
On Mon, Nov 13, 2017 at 3:33 PM, Rudi Ahlers  wrote:

> Hi,
>
> Can someone please give me some pointers, what would be the best setup for
> performance and reliability?
>
> We have the following hardware setup:
>
> 3x Supermicro server with following features per server:
> 128GB RAM
> 4x 8TB SATA HDD
> 2x SSD drives (intel_ssdsc2ba400g4 - 400GB DC S3710)
> 2x 12 core CPU (Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
> Quad port 10GbE Intel NIC
> 2x 10GB Cisco switches (to isolate storage network from LAN)
>
> One of the servers will be in another office, with a 600Mb wireless link
> for Disaster Recovery.
>

All 3 servers need to be on the same network with low latency between
servers. Gluster volume is used in replica 3 configuration in oVirt, and
data will be written to all 3 servers synchronously. If you have one of the
3 servers in a remote location, your writes will be slow and the storage
will seem to be unavailable to oVirt.

For Disaster recovery, you can use gluster's geo-replication feature. This
does need an additional server in the remote location. See
https://www.ovirt.org/develop/release-management/features/gluster/gluster-dr/


>
> What is recommended for the best setup in terms of redundancy and speed?
>
> I am guessing GlusterFS with a Distributed Striped Replicated Volume
> across 3 of the servers.
>


A replica 3 gluster volume with sharding turned on. If you deploy via the
Cockpit option, the volume is created with all the recommended options.

You can also set this manually by enabling the virt profile, like this:
# gluster volume set <volname> group virt
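
The virt group itself is just a list of volume options shipped in
/var/lib/glusterd/groups/virt. A hedged sketch of the core options it
applies, in case you need to set them by hand (the exact list varies between
gluster versions; 'data' is a placeholder volume name):

# gluster volume set data features.shard on
# gluster volume set data cluster.quorum-type auto
# gluster volume set data cluster.server-quorum-type server
# gluster volume set data network.remote-dio enable
# gluster volume set data performance.quick-read off
# gluster volume set data performance.read-ahead off
# gluster volume set data performance.io-cache off
# gluster volume set data performance.stat-prefetch off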


>
> For added performance I want to use the SSD drives, perhaps with dm-cache?
>
>

> Should I combine the 4x HDDs using LVM on each host node?
> What about RAID 6?
>

Gluster can work with either RAID or JBOD (with replica 3, JBOD can be used,
as gluster has the redundancy built in on the other nodes). Users often
choose RAID to aggregate the capacity of multiple disks into a single brick,
and also because the brick rebuild time is offloaded to the RAID layer if one
of the hard disks fails. The answer is: it depends on your needs and the
hardware you have.
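
On the dm-cache question: the usual way to consume dm-cache is through
lvmcache. A minimal sketch, assuming the HDDs already form a volume group
vg_brick holding the brick LV brick1 and /dev/sdf is one of the SSDs (all
names and sizes are placeholders):

# vgextend vg_brick /dev/sdf
# lvcreate -n cache -L 350G vg_brick /dev/sdf
# lvcreate -n cache_meta -L 4G vg_brick /dev/sdf
# lvconvert --type cache-pool --poolmetadata vg_brick/cache_meta vg_brick/cache
# lvconvert --type cache --cachepool vg_brick/cache vg_brick/brick1

The default cache mode, writethrough, is the safer choice for a gluster
brick; writeback is faster but makes the SSD a single point of failure for
the brick.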



>
>
>
> Virtual Machines will then reside on the oVirt Cluster and any one of the
> 3 host nodes can fail, or any single HDD can fail and all should still
> work, right?
>
>
That's correct. With a replica 3 approach, the oVirt environment remains
available even when one of the 3 hosts fails.
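
When a failed host comes back, check that self-heal has caught up before
taking another node down for maintenance; a short sketch, assuming a volume
named 'data':

# gluster volume heal data info
# gluster volume heal data statistics heal-count

Both counts should drop to zero once the returned brick is back in sync.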


>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to redo oVirt cluster?

2017-11-14 Thread Luca 'remix_tj' Lorenzetto
Usually the DB is inside the engine VM; it is a PostgreSQL database. Deleting
it is not a problem, the engine installation tools will recreate it.
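
If in doubt about what is there before cleaning up, the databases can be
listed on the engine VM (a sketch; in a default setup the database and its
owner are both named 'engine'):

# su - postgres -c "psql -l"
# su - postgres -c "psql engine -c '\dt'"

engine-cleanup drops the engine configuration and a later engine-setup
recreates the schema, so there is normally no need to touch PostgreSQL by
hand.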

On 14 Nov 2017 8:05 AM, "Rudi Ahlers"  wrote:

> Hi Luca,
>
> Where is the engine DB stored? Can I simply delete it, or would that
> compromise a reinstallation? Though I can't imagine it would be a problem.
>
> On Tue, Nov 14, 2017 at 8:58 AM, Luca 'remix_tj' Lorenzetto <
> lorenzetto.l...@gmail.com> wrote:
>
>> Hello Rudi,
>>
>> I think that uninstalling ovirt-engine and removing vdsm from the hosts
>> should be enough. Pay attention to cleaning up the engine DB, which
>> contains all the engine data.
>>
>> Luca
>>
>>
>> On 14 Nov 2017 7:20 AM, "Rudi Ahlers"  wrote:
>>
>>> Hi,
>>>
>>> I have set up an oVirt cluster and did some tests. But how do I redo
>>> everything without reinstalling CentOS as well?
>>> Would it be as simple as uninstalling all the oVirt packages? Or do I need
>>> to manually delete some config files and other traces of the install as well?
>>>
>>> --
>>> Kind Regards
>>> Rudi Ahlers
>>> Website: http://www.rudiahlers.co.za
>>>
>>>
>>>
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to redo oVirt cluster?

2017-11-14 Thread Yedidyah Bar David
On Tue, Nov 14, 2017 at 8:19 AM, Rudi Ahlers  wrote:
> Hi,
>
> I have set up an oVirt cluster and did some tests. But how do I redo
> everything without reinstalling CentOS as well?
> Would it be as simple as uninstalling all the oVirt packages? Or do I need
> to manually delete some config files and other traces of the install as well?

On the engine machine, it should be enough to run 'engine-cleanup'.

I do not think we have something similar for hosts.

The ovirt-hosted-engine-setup package has a cleanup script called
ovirt-hosted-engine-cleanup; you might take parts from it.

Regards,
-- 
Didi
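
For ordinary (non-hosted-engine) hosts, a hedged sketch of a manual cleanup
(destructive, and the package set may differ between versions, so double-check
before running):

# systemctl stop vdsmd supervdsmd
# yum remove 'vdsm*'
# rm -rf /etc/vdsm /var/lib/vdsm

On a hosted-engine host the packaged script mentioned above does the
equivalent and more:

# ovirt-hosted-engine-cleanup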
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users