Re: [ovirt-users] Upgrading engine 3.5.5-1.el6 to latest patches issue (and resolved)

2015-12-02 Thread Yedidyah Bar David
On Wed, Dec 2, 2015 at 4:27 PM, Scott Worthington
 wrote:
> Hello,
>
> This is just an informational e-mail for others who are updating their 3.5
> systems on CentOS 6.7 who may experience the same issue that I experienced.
>
> I am upgrading a stand alone 3.5.5-1.el6 engine on CentOS 6.7 (no High
> Availability cruft) to the latest patches just recently released.
>
> The DWH and report engine are also installed on this same stand alone system.
>
> I did a 'yum update' and the following was downloaded & updated:
>
> Dec 02 08:26:26 Updated: postgresql-libs-8.4.20-4.el6_7.x86_64
> Dec 02 08:26:27 Updated: ovirt-engine-lib-3.5.6.2-1.el6.noarch
> Dec 02 08:26:28 Updated: ovirt-engine-setup-base-3.5.6.2-1.el6.noarch
> Dec 02 08:26:29 Updated:
> ovirt-engine-setup-plugin-ovirt-engine-common-3.5.6.2-1.el6.noarch
> Dec 02 08:26:30 Updated:
> ovirt-engine-setup-plugin-websocket-proxy-3.5.6.2-1.el6.noarch
> Dec 02 08:26:31 Updated:
> ovirt-engine-setup-plugin-ovirt-engine-3.5.6.2-1.el6.noarch
> Dec 02 08:26:33 Updated: postgresql-8.4.20-4.el6_7.x86_64
> Dec 02 08:26:35 Updated: postgresql-server-8.4.20-4.el6_7.x86_64
> Dec 02 08:26:35 Updated: ovirt-engine-setup-3.5.6.2-1.el6.noarch
> Dec 02 08:26:36 Updated: ovirt-engine-websocket-proxy-3.5.6.2-1.el6.noarch
> Dec 02 08:26:36 Updated: ovirt-engine-extensions-api-impl-3.5.6.2-1.el6.noarch
> Dec 02 08:26:37 Updated: ovirt-engine-sdk-python-3.5.6.0-1.el6.noarch
> Dec 02 08:26:41 Updated: ovirt-guest-tools-iso-3.5-8.fc22.noarch
> Dec 02 08:26:50 Updated: 1:java-1.6.0-openjdk-1.6.0.37-1.13.9.4.el6_7.x86_64
>
> Here is where I ran into an issue..
>
> Running an 'engine-setup' to load the latest patches caused this:
>
> [ INFO  ] Cleaning async tasks and compensations
> [ INFO  ] Checking the Engine database consistency
> [ INFO  ] Stage: Transaction setup
> [ INFO  ] Stopping dwh service
> [ INFO  ] Stopping reports service
> [ INFO  ] Stopping engine service
> [ INFO  ] Stopping ovirt-fence-kdump-listener service
> [ INFO  ] Stopping websocket-proxy service
> [ ERROR ] dwhd is currently running. Its hostname is
> tpa-ovirt-engine-01.allianceoneinc.com. Please stop it before running Setup.
> [ ERROR ] Failed to execute stage 'Transaction setup': dwhd is currently 
> running
> [ INFO  ] Yum Performing yum transaction rollback
> [ INFO  ] Stage: Clean up
>   Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20151202083052-q8131e.log
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20151202083138-setup.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Execution of setup failed
>
> I did a "ps auxwww | grep ovirt" and "ps auxwww | grep dwh", but came up with
> nothing.  In fact, there was nothing obviously 'ovirt' related running.
>
> To get the "engine-setup" to work, I thought that perhaps something was stuck
> running.
>
> So I did...
>
> service ovirt-engine stop
> service ovirt-engine-dwhd stop
> service ovirt-engine-reportsd stop
> service ovirt-engine-websocket-proxy stop
>
> Followed by...
>
> engine-setup
>
> ...and still got the error that the DWH was still running.
>
> Not wanting to waste time, I figured that a reboot might clear and/or reset
> some process that I am not aware of.
>
> I did not want anything ovirt engine related starting up, so I examined what
> was running on start up:
>
> chkconfig --list | grep ovirt | grep on
>
> Which returned (sorry for wraps, it's thunderbird):
>
> ovirt-engine                0:off  1:off  2:on  3:on  4:on  5:on  6:off
> ovirt-engine-dwhd           0:off  1:off  2:on  3:on  4:on  5:on  6:off
> ovirt-engine-reportsd       0:off  1:off  2:on  3:on  4:on  5:on  6:off
> ovirt-fence-kdump-listener  0:off  1:off  2:on  3:on  4:on  5:on  6:off
> ovirt-websocket-proxy       0:off  1:off  2:on  3:on  4:on  5:on  6:off
>
> I turned these off on boot:
>
> chkconfig ovirt-engine off
> chkconfig ovirt-engine-dwhd off
> chkconfig ovirt-engine-reportsd off
> chkconfig ovirt-fence-kdump-listener off
> chkconfig ovirt-websocket-proxy off
>
> Rebooted the server, logged back in, and 'engine-setup' ran flawlessly.
>
> After the upgrade, I turned the services back on for the next reboot:
>
> chkconfig ovirt-engine on
> chkconfig ovirt-engine-dwhd on
> chkconfig ovirt-engine-reportsd on
> chkconfig ovirt-fence-kdump-listener on
> chkconfig ovirt-websocket-proxy on
>
> Hope that helps someone else.

This is a known issue [1]. See also [2].

Sorry you had to go through this.

Actually, I'm not sure how what you did above solved this. It's solvable by
restarting dwhd (comment 1 in [1]) or by manually updating the db [2].
Perhaps you did restart dwhd at some point? Other than that, I can't think
of anything that would reset this flag.
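For the archives: the "manually updating the db" route amounts to clearing the
engine-database flag that marks dwhd as running. A sketch, assuming the table
and variable names documented in [2] (verify against that page, and back up the
database first):

```shell
# On the engine host, as root. Clears the marker that makes engine-setup
# believe dwhd is still running. Stop the service first.
service ovirt-engine-dwhd stop
su - postgres -c "psql engine -c \
  \"UPDATE dwh_history_timekeeping SET var_value = '0' \
    WHERE var_name = 'DwhCurrentlyRunning';\""
```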

Thanks for the report, anyway!

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1286441
[2] http://www.ovirt.org/Ovirt-engine-backup#DWH_up_during_backup

Best,
-- 
Didi

Re: [ovirt-users] Failing to add NFS storage domain

2015-12-02 Thread Roel de Rooy
Hi,

When looking inside the mounted NFS share, can you see if the needed folders 
are created (dom_md, etc.)?
We also had the same sort of error after trying to add a new NFS share to our 
development environment.

Adding the NFS storage would fail, but when looking at the host itself, the 
share would be connected.

In our case, we did see that the folders were created with ownership set to 
the root group instead of the kvm group.
After changing the permission settings on the NFS server, the share connected 
perfectly.
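For anyone hitting the same thing: the usual export preparation for oVirt gives
ownership to vdsm:kvm (uid/gid 36). A sketch, with an example path and an
example export line:

```shell
# On the NFS server; /exports/data is an example path.
mkdir -p /exports/data
chown -R 36:36 /exports/data        # vdsm:kvm
chmod 0755 /exports/data
# /etc/exports entry, then re-export:
#   /exports/data *(rw,sync,anonuid=36,anongid=36,all_squash)
exportfs -ra
```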

Kind regards,
Roel de Rooy


Van: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Namens 
Jean-Pierre Ribeauville
Verzonden: 02 December 2015 11:51
Aan: users@ovirt.org
Onderwerp: [ovirt-users] Failing to add NFS storage domain


Hi,



By using oVirt 3.5, I'm trying to add an NFS storage domain to my datacenter.





Then I got the following error:


Error while executing action New NFS Storage Domain: Error creating a storage 
domain





I restarted the sanlock service on the host (it's RHEL 7); no positive effect.



In engine.log I found this:


2015-12-02 11:42:17,767 INFO  
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] Lock Acquired to object EngineLock 
[exclusiveLocks= key: omniserv.lab.dc01.axway.int:/nfs/omnicol value: 
STORAGE_CONNECTION
, sharedLocks= ]
2015-12-02 11:42:17,771 INFO  
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] Running command: 
AddStorageServerConnectionCommand internal: false. Entities affected :  ID: 
aaa0----123456789aaa Type: SystemAction group 
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-12-02 11:42:17,771 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] START, 
ConnectStorageServerVDSCommand(HostName = ldc01omv01, HostId = 
09bb3024-170f-48a1-a78a-951a2c61c680, storagePoolId = 
----, storageType = NFS, connectionList = [{ 
id: null, connection: omniserv.lab.dc01.axway.int:/nfs/omnicol, iqn: null, 
vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, 
nfsTimeo: null };]), log id: 7651babb
2015-12-02 11:42:17,790 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] FINISH, ConnectStorageServerVDSCommand, 
return: {----=0}, log id: 7651babb
2015-12-02 11:42:17,793 INFO  
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] Lock freed to object EngineLock 
[exclusiveLocks= key: omniserv.lab.dc01.axway.int:/nfs/omnicol value: 
STORAGE_CONNECTION
, sharedLocks= ]
2015-12-02 11:42:17,810 INFO  
[org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] Running command: AddNFSStorageDomainCommand 
internal: false. Entities affected :  ID: aaa0----123456789aaa 
Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-12-02 11:42:17,813 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] START, 
ConnectStorageServerVDSCommand(HostName = ldc01omv01, HostId = 
09bb3024-170f-48a1-a78a-951a2c61c680, storagePoolId = 
----, storageType = NFS, connectionList = [{ 
id: ce05a2b0-cee9-44a9-845e-8a35349c7195, connection: 
omniserv.lab.dc01.axway.int:/nfs/omnicol, iqn: null, vfsType: null, 
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), 
log id: 7c9bb0db
2015-12-02 11:42:17,828 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] FINISH, ConnectStorageServerVDSCommand, 
return: {ce05a2b0-cee9-44a9-845e-8a35349c7195=0}, log id: 7c9bb0db
2015-12-02 11:42:17,828 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] START, 
CreateStorageDomainVDSCommand(HostName = ldc01omv01, HostId = 
09bb3024-170f-48a1-a78a-951a2c61c680, 
storageDomain=StorageDomainStatic[test_nfs, 
5c4b7602-47de-47a7-a278-95b1ac7b1f8d], 
args=omniserv.lab.dc01.axway.int:/nfs/omnicol), log id: 62401241
2015-12-02 11:42:17,986 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] Failed in CreateStorageDomainVDS method
2015-12-02 11:42:17,986 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return 
value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=351, mMessage=Error 
creating a storage domain: (u'storageType=1, 
sdUUID=5c4b7602-47de-47a7-a278-95b1ac7b1f8d, domainName=test_nfs, domClass=1, 
typeSpecificArg=omniserv.lab.dc01.axway.int:/nfs/omnicol 

Re: [ovirt-users] Problem with hosted engine setup - vdsmd does not start

2015-12-02 Thread Will Dennis
Ran the configure the first time and got an error...

root@ovirt-node-01 ~> vdsm-tool configure --force

Checking configuration status...

Current revision of multipath.conf detected, preserving
libvirt is not configured for vdsm yet
FAILED: conflicting vdsm and libvirt-qemu tls configuration.
vdsm.conf with ssl=True requires the following changes:
libvirtd.conf: listen_tcp=0, auth_tcp="sasl", listen_tls=1
qemu.conf: spice_tls=1.

Running configure...
Reconfiguration of sebool is done.
Reconfiguration of passwd is done.
Reconfiguration of libvirt is done.

Done configuring modules to VDSM.

So, made the appropriate edits to libvirtd.conf and qemu.conf

root@ovirt-node-01 ~> grep "^[a-z]" /etc/libvirt/libvirtd.conf
listen_tls = 1
listen_tcp = 0
auth_tcp = "sasl"
auth_unix_rw="sasl"
ca_file="/etc/pki/vdsm/certs/cacert.pem"
cert_file="/etc/pki/vdsm/certs/vdsmcert.pem"
host_uuid="2b026fff-bae2-429f-9386-c60ffc5f3f32"
keepalive_interval=-1
key_file="/etc/pki/vdsm/keys/vdsmkey.pem"
unix_sock_group="qemu"
unix_sock_rw_perms="0770"

root@ovirt-node-01 ~> grep "^[a-z]" /etc/libvirt/qemu.conf
spice_tls = 1
auto_dump_path="/var/log/core"
dynamic_ownership=0
lock_manager="sanlock"
remote_display_port_max=6923
remote_display_port_min=5900
save_image_format="lzop"
spice_tls=1
spice_tls_x509_cert_dir="/etc/pki/vdsm/libvirt-spice"

Then re-ran the config...

root@ovirt-node-01 ~> vdsm-tool configure --force

Checking configuration status...

Current revision of multipath.conf detected, preserving
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts

Running configure...
Reconfiguration of sebool is done.
Reconfiguration of libvirt is done.

Done configuring modules to VDSM.

Sadly, vdsmd STILL fails to start...

root@ovirt-node-01 ~> systemctl restart vdsmd
root@ovirt-node-01 ~> systemctl status vdsmd
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: activating (auto-restart) (Result: exit-code) since Wed 2015-12-02 
10:30:47 EST; 1s ago
  Process: 6522 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop 
(code=exited, status=0/SUCCESS)
  Process: 6506 ExecStart=/usr/share/vdsm/daemonAdapter -0 /dev/null -1 
/dev/null -2 /dev/null /usr/share/vdsm/vdsm (code=exited, status=1/FAILURE)
  Process: 6433 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start 
(code=exited, status=0/SUCCESS)
Main PID: 6506 (code=exited, status=1/FAILURE)

Dec 02 10:30:47 ovirt-node-01 systemd[1]: Unit vdsmd.service entered failed 
Dec 02 10:30:47 ovirt-node-01 systemd[1]: vdsmd.service holdoff time over, s
Hint: Some lines were ellipsized, use -l to show in full.

:(
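(For completeness, when vdsmd cycles like this the ellipsized status output
hides the actual failure; these standard commands usually surface it:)

```shell
systemctl status vdsmd -l             # full, unellipsized unit status
journalctl -u vdsmd --no-pager -n 50  # recent journal entries for the unit
tail -n 50 /var/log/vdsm/vdsm.log     # vdsm's own log on an EL7 host
```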

From: Simone Tiraboschi [mailto:stira...@redhat.com]
Sent: Wednesday, December 02, 2015 3:57 AM
To: Will Dennis
Cc: Fabian Deutsch; users
Subject: Re: [ovirt-users] Problem with hosted engine setup - vdsmd does not 
start

[snip]

Can you please now configure it with:
  vdsm-tool configure --force

Then you have to restart it


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA cluster

2015-12-02 Thread Ryan Barry
On Wed, Dec 2, 2015 at 5:05 AM,  wrote:

> On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju  wrote:
>
> > I have installed KVM in a nested environment on ESXi 6.x; is that
> > recommended ?
> >
>
> I often use KVM over KVM in a nested environment, but honestly I never tried
> to run KVM over ESXi; I suspect that all of your issues come from
> there.
>
It should be fine in ESXi as well, as long as the VMX for that VM has
vhv.enabled=true, and the config for the VM is reloaded.

>
> > apart from Hosted engine is there any other alternate way to configure
> > Engine HA cluster ?
> >
>
> Nothing else from the project. You can use two external VMs in a cluster with
> Pacemaker, but it's completely up to you.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrading engine 3.5.5-1.el6 to latest patches issue (and resolved)

2015-12-02 Thread Scott Worthington
Hello,

This is just an informational e-mail for others who are updating their 3.5
systems on CentOS 6.7 who may experience the same issue that I experienced.

I am upgrading a stand alone 3.5.5-1.el6 engine on CentOS 6.7 (no High
Availability cruft) to the latest patches just recently released.

The DWH and report engine are also installed on this same stand alone system.

I did a 'yum update' and the following was downloaded & updated:

Dec 02 08:26:26 Updated: postgresql-libs-8.4.20-4.el6_7.x86_64
Dec 02 08:26:27 Updated: ovirt-engine-lib-3.5.6.2-1.el6.noarch
Dec 02 08:26:28 Updated: ovirt-engine-setup-base-3.5.6.2-1.el6.noarch
Dec 02 08:26:29 Updated:
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.6.2-1.el6.noarch
Dec 02 08:26:30 Updated:
ovirt-engine-setup-plugin-websocket-proxy-3.5.6.2-1.el6.noarch
Dec 02 08:26:31 Updated:
ovirt-engine-setup-plugin-ovirt-engine-3.5.6.2-1.el6.noarch
Dec 02 08:26:33 Updated: postgresql-8.4.20-4.el6_7.x86_64
Dec 02 08:26:35 Updated: postgresql-server-8.4.20-4.el6_7.x86_64
Dec 02 08:26:35 Updated: ovirt-engine-setup-3.5.6.2-1.el6.noarch
Dec 02 08:26:36 Updated: ovirt-engine-websocket-proxy-3.5.6.2-1.el6.noarch
Dec 02 08:26:36 Updated: ovirt-engine-extensions-api-impl-3.5.6.2-1.el6.noarch
Dec 02 08:26:37 Updated: ovirt-engine-sdk-python-3.5.6.0-1.el6.noarch
Dec 02 08:26:41 Updated: ovirt-guest-tools-iso-3.5-8.fc22.noarch
Dec 02 08:26:50 Updated: 1:java-1.6.0-openjdk-1.6.0.37-1.13.9.4.el6_7.x86_64

Here is where I ran into an issue..

Running an 'engine-setup' to load the latest patches caused this:

[ INFO  ] Cleaning async tasks and compensations
[ INFO  ] Checking the Engine database consistency
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stopping dwh service
[ INFO  ] Stopping reports service
[ INFO  ] Stopping engine service
[ INFO  ] Stopping ovirt-fence-kdump-listener service
[ INFO  ] Stopping websocket-proxy service
[ ERROR ] dwhd is currently running. Its hostname is
tpa-ovirt-engine-01.allianceoneinc.com. Please stop it before running Setup.
[ ERROR ] Failed to execute stage 'Transaction setup': dwhd is currently running
[ INFO  ] Yum Performing yum transaction rollback
[ INFO  ] Stage: Clean up
  Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20151202083052-q8131e.log
[ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20151202083138-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed

I did a "ps auxwww | grep ovirt" and "ps auxwww | grep dwh", but came up with
nothing.  In fact, there was nothing obviously 'ovirt' related running.

To get the "engine-setup" to work, I thought that perhaps something was stuck
running.

So I did...

service ovirt-engine stop
service ovirt-engine-dwhd stop
service ovirt-engine-reportsd stop
service ovirt-engine-websocket-proxy stop

Followed by...

engine-setup

...and still got the error that the DWH was still running.

Not wanting to waste time, I figured that a reboot might clear and/or reset
some process that I am not aware of.

I did not want anything ovirt engine related starting up, so I examined what
was running on start up:

chkconfig --list | grep ovirt | grep on

Which returned (sorry for wraps, it's thunderbird):

ovirt-engine                0:off  1:off  2:on  3:on  4:on  5:on  6:off
ovirt-engine-dwhd           0:off  1:off  2:on  3:on  4:on  5:on  6:off
ovirt-engine-reportsd       0:off  1:off  2:on  3:on  4:on  5:on  6:off
ovirt-fence-kdump-listener  0:off  1:off  2:on  3:on  4:on  5:on  6:off
ovirt-websocket-proxy       0:off  1:off  2:on  3:on  4:on  5:on  6:off

I turned these off on boot:

chkconfig ovirt-engine off
chkconfig ovirt-engine-dwhd off
chkconfig ovirt-engine-reportsd off
chkconfig ovirt-fence-kdump-listener off
chkconfig ovirt-websocket-proxy off

Rebooted the server, logged back in, and 'engine-setup' ran flawlessly.

After the upgrade, I turned the services back on for the next reboot:

chkconfig ovirt-engine on
chkconfig ovirt-engine-dwhd on
chkconfig ovirt-engine-reportsd on
chkconfig ovirt-fence-kdump-listener on
chkconfig ovirt-websocket-proxy on

Hope that helps someone else.
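The same on/off toggling can also be scripted in one loop, e.g. to disable
everything before the reboot:

```shell
# Flip "off" to "on" to re-enable the services after the upgrade.
for svc in ovirt-engine ovirt-engine-dwhd ovirt-engine-reportsd \
           ovirt-fence-kdump-listener ovirt-websocket-proxy; do
    chkconfig "$svc" off
done
```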
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] cloud-init and sealing of template

2015-12-02 Thread Gianluca Cecchi
On Wed, Dec 2, 2015 at 9:01 AM, Shahar Havivi  wrote:

>
> You are using a static IP which can be a problem on creating multiple VMs.
> In this case you will need to run cloud-init on each created VM with a new
> static IP or DHCP.
>
>
>
yes, I understand your point.
But my immediate need is to temporarily set up a bunch of "fat" (8 GB of RAM
and 2 cores) clients that have to stress a RAC DB in several ways
(concurrent HammerDB, load balancing and failover tests, etc.).
They have to stay on a test network that is not under my control, where there
is no DHCP server and I'm not allowed to set one up.
So I only have a pool of IP addresses on this network. I will start with
2-3 clients, but then it could be that I also need to instantiate tens of
them.
I don't have the resources and time to set up a more engineered
provisioning system (Foreman, Puppet, ManageIQ, etc.), at least at this
time.
Coming back to the CentOS 7 template, I tried some combinations and finally
found that the best result is obtained with NetworkManager enabled on the
source template from which I create the VM with cloud-init.
It would be nice to have a sort of translation to a user-data (or
similar...) file when I compose my cloud-init settings for a new VM inside
the GUI, so that I can duplicate further similar instantiations from the
oVirt CLI or other scripted commands, bypassing the web GUI.
I'm not so acquainted with cloud-init in general or with its
implementation in oVirt. Any useful link for this kind of task, or a
suggestion on how to pass a file at VM creation, is welcome.

Thanks again anyway,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] cloud-init and sealing of template

2015-12-02 Thread Ekin Meroğlu
Hi,

You can always use the REST API to attach a cloud-init payload to the
machines you create - see the API example in:

http://www.ovirt.org/Features/Cloud-Init_Integration

But the feature was somewhat broken in 3.5.5, recently fixed in 3.5.6:

https://bugzilla.redhat.com/show_bug.cgi?id=1269534

Please note that you should use the new true
tag for 3.5.6.

I have no experience with it in previous versions though...

For the template sealing, I only set the hostname to localhost, remove the
infamous /etc/udev/rules.d/70-* files, delete HW lines from ifcfg-* files,
unregister the machine from Satellite, etc., and power off. I do not use
sys-unconfig (as it suggests), and did not need to run virt-sysprep - plain
old sealing, deleting the necessary info, seemed enough for me...
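A rough script form of that sealing routine (the file names are the usual
EL6/EL7 ones; adjust for your distro - this is a sketch, not a tested recipe):

```shell
# Run inside the template VM right before shutdown.
sed -i 's/^HOSTNAME=.*/HOSTNAME=localhost.localdomain/' /etc/sysconfig/network
rm -f /etc/udev/rules.d/70-persistent-net.rules
sed -i '/^HWADDR=/d;/^UUID=/d' /etc/sysconfig/network-scripts/ifcfg-*
subscription-manager unregister || true   # if registered to Satellite
poweroff
```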

Regards,
--
Ekin.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem with hosted engine setup - vdsmd does not start

2015-12-02 Thread Simone Tiraboschi
On Wed, Dec 2, 2015 at 5:56 AM, Willard Dennis  wrote:

> Did the following:
>
> 1) yum remove vdsm*
> […]
> Removed:
>   vdsm.noarch 0:4.17.10.1-0.el7.centos vdsm-cli.noarch
> 0:4.17.10.1-0.el7.centosvdsm-gluster.noarch 0:4.17.10.1-0.el7.centos
>   vdsm-infra.noarch 0:4.17.10.1-0.el7.centos   vdsm-jsonrpc.noarch
> 0:4.17.10.1-0.el7.centosvdsm-python.noarch 0:4.17.10.1-0.el7.centos
>   vdsm-xmlrpc.noarch 0:4.17.10.1-0.el7.centos  vdsm-yajsonrpc.noarch
> 0:4.17.10.1-0.el7.centos
>
> Dependency Removed:
>   ovirt-hosted-engine-ha.noarch 0:1.3.2.1-1.el7.centos
> ovirt-hosted-engine-setup.noarch 0:1.3.0-1.el7.centos
>
>
> 2) yum install vdsm vdsm-cli vdsm-gluster vdsm-infra vdsm-jsonrpc
> vdsm-python vdsm-xmlrpc vdsm-yajsonrpc
> […]
> Installed:
>   vdsm.noarch 0:4.17.11-0.el7.centos   vdsm-cli.noarch
> 0:4.17.11-0.el7.centos  vdsm-gluster.noarch 0:4.17.11-0.el7.centos
>   vdsm-infra.noarch 0:4.17.11-0.el7.centos vdsm-jsonrpc.noarch
> 0:4.17.11-0.el7.centos  vdsm-python.noarch 0:4.17.11-0.el7.centos
>   vdsm-xmlrpc.noarch 0:4.17.11-0.el7.centosvdsm-yajsonrpc.noarch
> 0:4.17.11-0.el7.centos
>
> 3) yum install ovirt-hosted-engine-setup ovirt-hosted-engine-ha
> […]
> Installed:
>   ovirt-hosted-engine-ha.noarch 0:1.3.3-1.el7.centos
>  ovirt-hosted-engine-setup.noarch 0:1.3.1-1.el7.centos
>
> Dependency Updated:
>   ovirt-host-deploy.noarch 0:1.4.1-1.el7.centos
>
>
> Result:
> [root@ovirt-node-01 ~]# systemctl start vdsmd
> Job for vdsmd.service failed. See 'systemctl status vdsmd.service' and
> 'journalctl -xn' for details.
> [root@ovirt-node-01 ~]# systemctl status vdsmd.service
> vdsmd.service - Virtual Desktop Server Manager
>Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
>Active: failed (Result: start-limit) since Tue 2015-12-01 23:55:05 EST;
> 14s ago
>   Process: 21336 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
> --pre-start (code=exited, status=1/FAILURE)
>  Main PID: 20589 (code=exited, status=1/FAILURE)
>
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: Failed to start Virtual Desktop
> Server Manager.
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: Unit vdsmd.service entered
> failed state.
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: Starting Virtual Desktop Server
> Manager...
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: vdsmd.service holdoff time over,
> scheduling restart.
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: Stopping Virtual Desktop Server
> Manager...
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: Starting Virtual Desktop Server
> Manager...
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: vdsmd.service start request
> repeated too quickly, refusing to start.
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: Failed to start Virtual Desktop
> Server Manager.
> Dec 01 23:55:05 ovirt-node-01 systemd[1]: Unit vdsmd.service entered
> failed state.
>
> Different error now, but vdsmd service still won’t restart correctly...
>

Can you please now configure it with:
  vdsm-tool configure --force

Then you have to restart it


>
>
> On Dec 1, 2015, at 9:25 AM, Simone Tiraboschi  wrote:
>
> On Tue, Dec 1, 2015 at 3:15 PM, Will Dennis  wrote:
>
>> I believe sudo is NOT set up for passwordless use (then again, didn't see that
>> in any pre-req instructions...) Who would be the sudoer in this case?
>>
>
> It's not required to take any manual action.
> Can you please try to reinstall vdsm from its rpms and try it again?
>
>
>>
>> -Original Message-
>> From: Fabian Deutsch [mailto:fdeut...@redhat.com]
>> Sent: Tuesday, December 01, 2015 12:58 AM
>> To: Will Dennis
>> Cc: Simone Tiraboschi; users
>> Subject: Re: [ovirt-users] Problem with hosted engine setup - vdsmd does
>> not start
>>
>> On Tue, Dec 1, 2015 at 4:52 AM, Will Dennis  wrote:
>> > Any clues out of the strace of vdsm?
>>
>> read(9, "sudo: a password is required\n", 4096) = 29
>>
>> Could it be that sudo is not configured to operate passwordless?
>>
>> The start-up can then fail, because sudo requires a tty, but this isn't
>> available during service start.
>>
>> - fabian
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Setting up oVirt for the first time

2015-12-02 Thread Alan Murrell

On 01/12/2015 2:47 PM, Simone Tiraboschi wrote:


What you are asking for is generally called hyper-convergence. We tried
to have it for 3.6 with GlusterFS on each node, but it wasn't evaluated as
stable enough to be released. We are still working on that for the next
release.


In such a setup (GlusterFS running on 3+ nodes, sharing their storage), 
is this also basically what VMware's "vSAN" is?  It seems very similar, 
at least.


Regards,

Alan

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ISO issues

2015-12-02 Thread Budur Nagaraju
Hi,

I mounted a valid ISO image and selected the first boot option as "CDrom" and
the second as "Hard disk".
All of a sudden I am unable to boot; a few days back it was working, but now
I am facing issues.

Any way to resolve the issue?

Thanks,
Nagaraju
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ANN] oVirt 3.6.1 Second Release Candidate is now available for testing

2015-12-02 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability
of the Second Release Candidate of oVirt 3.6.1 for testing, as of December
3rd, 2015.

This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar).

This release supports Hypervisor Hosts running
Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar) and
Fedora 22.
Highly experimental support for Debian 8.2 Jessie has been added too.

This release candidate includes updated packages for:
- ovirt-engine
- ovirt-engine-extension-aaa-jdbc
- ovirt-hosted-engine-ha
- ovirt-hosted-engine-setup
- vdsm

This release of oVirt 3.6.1 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.

Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO will be available soon[2].

Please note that mirrors [3] usually need one day before being
synchronized.

Please refer to the release notes for known issues in this release.

[1] http://www.ovirt.org/OVirt_3.6.1_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Data(Master)

2015-12-02 Thread Greg Padgett

On 12/01/2015 02:39 PM, Sandvik Agustin wrote:

Hi users,

I'm using oVirt 3.4 on RHEL 6.5 and it's working perfectly, but while
setting up another hypervisor and ovirt-engine I accidentally attached my
working storage "Data(Master)" to the second one. My problem is that I'm
unable to re-attach it to the first one. Is it possible to re-attach it
without losing my VM guest disk images? When I accidentally attached it to
the second one, it prompted me that "This operation might be unrecoverable
and destructive". Is there a chance that my 25 VM guest disk images will be
erased? God Bless.



Hi Sandvik,

I'm guessing this was iSCSI storage based on the error message.

If so, unfortunately the LVM metadata for the original storage domain will 
have been overwritten due to attaching the domain to the second setup, so 
it can't simply be re-attached to the first.


However, the images are probably still (mostly?) intact as long as the 
storage has been relatively untouched. If you feel it's worthwhile and are 
comfortable getting your hands a little dirty you may be able to recover 
some of the images by finding them on the LUN.


To do this, I'd check your hypervisors' /etc/lvm/archive directories and 
look for an lvm metadata backup as in [1],[2]. You want to look for a VG 
with the UUID of your old master storage domain. The LV names within would 
correspond to your old VM disk image UUIDs.  If you find this (via the 
backup, `pvck`, or even by scouring the first MB or so of the LUN), you 
might be able to recover it via some hints in [1] or [2] (of course 
maintenance/detach the domain from the second hypervisor first).  You can 
then copy the data onto a new storage domain.
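(Condensed into commands, with placeholder names - the archive file and VG
name will of course differ on your system:)

```shell
# 1. Find an LVM metadata backup that mentions the old domain's VG/UUID:
grep -l '<old-domain-uuid>' /etc/lvm/archive/*.vg
# 2. After detaching the domain from the second setup, restore that metadata:
vgcfgrestore -f /etc/lvm/archive/<vg-name>_00042-1234567890.vg <vg-name>
# 3. The LV names should now match the old disk image UUIDs:
lvs <vg-name>
```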


Hope that helps,
Greg

[1] 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/mdatarecover.html


[2] 
http://serverfault.com/questions/223361/how-to-recover-logical-volume-deleted-with-lvremove




TIA
Sandvik


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Failing to add iSCSI storage domain

2015-12-02 Thread Jean-Pierre Ribeauville
Hi,

By using oVirt 3.5, I'm trying to add an iSCSI storage domain to my datacenter.


When clicking on the discover button, I got the following error:

No new devices were found. This may be due to either: incorrect multipath 
configuration on the Host or wrong address of the iscsi target or a failure to 
authenticate on the target device. Please consult your Storage Administrator.

I restarted the sanlock service on the host (it's RHEL 7); no positive effect.

In engine.log I found this:

...
IQNListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=475, mMessage=Failed 
discovery of iSCSI targets: "portal=IscsiPortal(hostname=u'10.147.60.90', 
port=3260), err=(21, [], ['iscsiadm: No portals found'])"]]
2015-12-02 11:28:31,468 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand] 
(ajp-/127.0.0.1:8702-5) HostName = ldc01omv01
2015-12-02 11:28:31,468 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand] 
(ajp-/127.0.0.1:8702-5) Command DiscoverSendTargetsVDSCommand(HostName = 
ldc01omv01, HostId = 09bb3024-170f-48a1-a78a-951a2c61c680, connection={ id: 
null, connection: 10.147.60.90, iqn: null, vfsType: null, mountOptions: null, 
nfsVersion: null, nfsRetrans: null, nfsTimeo: null };) execution failed. 
Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed in 
vdscommand to DiscoverSendTargetsVDS, error = Failed discovery of iSCSI 
targets: "portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], 
['iscsiadm: No portals found'])"
2015-12-02 11:28:31,468 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand] 
(ajp-/127.0.0.1:8702-5) FINISH, DiscoverSendTargetsVDSCommand, log id: 3023392e
2015-12-02 11:28:31,468 ERROR 
[org.ovirt.engine.core.bll.storage.DiscoverSendTargetsQuery] 
(ajp-/127.0.0.1:8702-5) Query DiscoverSendTargetsQuery failed. Exception 
message is VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed in vdscommand to 
DiscoverSendTargetsVDS, error = Failed discovery of iSCSI targets: 
"portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], 
['iscsiadm: No portals found'])" (Failed with error iSCSIDiscoveryError and 
code 475) : org.ovirt.engine.core.common.errors.VdcBLLException: 
VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed in vdscommand to 
DiscoverSendTargetsVDS, error = Failed discovery of iSCSI targets: 
"portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], 
['iscsiadm: No portals found'])" (Failed with error iSCSIDiscoveryError and 
code 475): org.ovirt.engine.core.common.errors.VdcBLLException: 
VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed in vdscommand to 
DiscoverSendTargetsVDS, error = Failed discovery of iSCSI targets: 
"portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], 
['iscsiadm: No portals found'])" (Failed with error iSCSIDiscoveryError and 
code 475)
at 
org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) 
[bll.jar:]
at 
org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33)
 [bll.jar:]
at 
org.ovirt.engine.core.bll.storage.DiscoverSendTargetsQuery.executeQueryCommand(DiscoverSendTargetsQuery.java:16)
 [bll.jar:]
at 
org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:73)
 [bll.jar:]

...



To cross-check iSCSI discovery another way, I ran this command from the
oVirt manager system command prompt:

iscsiadm -m discovery -t st -p 

I got a correct device list.

Thanks for help.



J.P. Ribeauville


P: +33.(0).1.47.17.20.49
.
Puteaux 3 Etage 5  Bureau 4

jpribeauvi...@axway.com
http://www.axway.com



Please consider the environment before printing.





Re: [ovirt-users] HA cluster

2015-12-02 Thread Simone Tiraboschi
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju  wrote:

> please find the logs at the below-mentioned URL,
>
> http://pastebin.com/ZeKyyFbN
>

OK, the issue is here:

Thread-88::ERROR::2015-12-02
15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
119, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not
available. Check that virtualization is enabled in the host BIOS, and host
configuration is setup to load the kvm modules.
Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
unsupported configuration: Domain requires KVM, but it is not available.
Check that virtualization is enabled in the host BIOS, and host
configuration is setup to load the kvm modules. (code=1)

but it's pretty strange, because hosted-engine-setup already explicitly checks
for virtualization support and simply exits with a clear error if it's missing.
Did you play with the kvm module while hosted-engine-setup was running?

Can you please share the hosted-engine-setup logs?


>
> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan  wrote:
>>
>>> Maybe even makes sense to open a bugzilla ticket already. Better safe
>>> than sorry.
>>>
>>
>> We still need at least one log file to understand what happened.
>>
>>
>>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
>>> wrote:
>>>

 On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
 wrote:

> I do not know what logs you are expecting. The logs I got are pasted
> in the mail; if you need them on pastebin, let me know and I will
> upload them there.
>


 Please run sosreport utility and share the resulting archive where you
 prefer.
 You can follow this guide:
 http://www.linuxtechi.com/how-to-create-sosreport-in-linux/

>
>
> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola  > wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>> wrote:
>>
>>> I got only 10 lines in the vdsm logs; they are below,
>>>
>>>
>> Can you please provide full sos report?
>>
>>
>>
>>>
>>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone 
>>> is
>>> waiting for it.
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
>>> records.
>>> Thread-100::INFO::2015-11-27
>>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> stopMonitoringDomain, Return response: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state 
>>> preparing ->
>>> state finished
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <

[ovirt-users] Failing to add NFS storage domain

2015-12-02 Thread Jean-Pierre Ribeauville
Hi,



Using oVirt 3.5, I'm trying to add an NFS storage domain to my datacenter.





Then I got the following error:


Error while executing action New NFS Storage Domain: Error creating a storage 
domain





I restarted the sanlock service on the host (it's RHEL 7); no positive effect.



In engine.log I found this:


2015-12-02 11:42:17,767 INFO  
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] Lock Acquired to object EngineLock 
[exclusiveLocks= key: omniserv.lab.dc01.axway.int:/nfs/omnicol value: 
STORAGE_CONNECTION
, sharedLocks= ]
2015-12-02 11:42:17,771 INFO  
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] Running command: 
AddStorageServerConnectionCommand internal: false. Entities affected :  ID: 
aaa0----123456789aaa Type: SystemAction group 
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-12-02 11:42:17,771 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] START, 
ConnectStorageServerVDSCommand(HostName = ldc01omv01, HostId = 
09bb3024-170f-48a1-a78a-951a2c61c680, storagePoolId = 
----, storageType = NFS, connectionList = [{ 
id: null, connection: omniserv.lab.dc01.axway.int:/nfs/omnicol, iqn: null, 
vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, 
nfsTimeo: null };]), log id: 7651babb
2015-12-02 11:42:17,790 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] FINISH, ConnectStorageServerVDSCommand, 
return: {----=0}, log id: 7651babb
2015-12-02 11:42:17,793 INFO  
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] 
(ajp-/127.0.0.1:8702-8) [47ec013e] Lock freed to object EngineLock 
[exclusiveLocks= key: omniserv.lab.dc01.axway.int:/nfs/omnicol value: 
STORAGE_CONNECTION
, sharedLocks= ]
2015-12-02 11:42:17,810 INFO  
[org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] Running command: AddNFSStorageDomainCommand 
internal: false. Entities affected :  ID: aaa0----123456789aaa 
Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-12-02 11:42:17,813 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] START, 
ConnectStorageServerVDSCommand(HostName = ldc01omv01, HostId = 
09bb3024-170f-48a1-a78a-951a2c61c680, storagePoolId = 
----, storageType = NFS, connectionList = [{ 
id: ce05a2b0-cee9-44a9-845e-8a35349c7195, connection: 
omniserv.lab.dc01.axway.int:/nfs/omnicol, iqn: null, vfsType: null, 
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), 
log id: 7c9bb0db
2015-12-02 11:42:17,828 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] FINISH, ConnectStorageServerVDSCommand, 
return: {ce05a2b0-cee9-44a9-845e-8a35349c7195=0}, log id: 7c9bb0db
2015-12-02 11:42:17,828 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] START, 
CreateStorageDomainVDSCommand(HostName = ldc01omv01, HostId = 
09bb3024-170f-48a1-a78a-951a2c61c680, 
storageDomain=StorageDomainStatic[test_nfs, 
5c4b7602-47de-47a7-a278-95b1ac7b1f8d], 
args=omniserv.lab.dc01.axway.int:/nfs/omnicol), log id: 62401241
2015-12-02 11:42:17,986 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] Failed in CreateStorageDomainVDS method
2015-12-02 11:42:17,986 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return 
value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=351, mMessage=Error 
creating a storage domain: (u'storageType=1, 
sdUUID=5c4b7602-47de-47a7-a278-95b1ac7b1f8d, domainName=test_nfs, domClass=1, 
typeSpecificArg=omniserv.lab.dc01.axway.int:/nfs/omnicol domVersion=3',)]]
2015-12-02 11:42:17,986 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] HostName = ldc01omv01
2015-12-02 11:42:17,986 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(ajp-/127.0.0.1:8702-6) [66393037] Command 
CreateStorageDomainVDSCommand(HostName = ldc01omv01, HostId = 
09bb3024-170f-48a1-a78a-951a2c61c680, 
storageDomain=StorageDomainStatic[test_nfs, 
5c4b7602-47de-47a7-a278-95b1ac7b1f8d], 
args=omniserv.lab.dc01.axway.int:/nfs/omnicol) execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
CreateStorageDomainVDS, error = Error creating a storage domain: 
(u'storageType=1, 

Re: [ovirt-users] HA cluster

2015-12-02 Thread Budur Nagaraju
please find the logs at the below-mentioned URL,

http://pastebin.com/ZeKyyFbN

On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
wrote:

>
>
> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan  wrote:
>
>> Maybe even makes sense to open a bugzilla ticket already. Better safe
>> than sorry.
>>
>
> We still need at least one log file to understand what happened.
>
>
>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
>> wrote:
>>
>>>
>>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
>>> wrote:
>>>
 I do not know what logs you are expecting. The logs I got are pasted
 in the mail; if you need them on pastebin, let me know and I will
 upload them there.

>>>
>>>
>>> Please run sosreport utility and share the resulting archive where you
>>> prefer.
>>> You can follow this guide:
>>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>>>


 On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola 
 wrote:

>
>
> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
> wrote:
>
>> I got only 10 lines in the vdsm logs; they are below,
>>
>>
> Can you please provide full sos report?
>
>
>
>>
>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is
>> waiting for it.
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
>> records.
>> Thread-100::INFO::2015-11-27
>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>> stopMonitoringDomain, Return response: None
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state preparing 
>> ->
>> state finished
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-100::DEBUG::2015-11-27
>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>>
>>
>>
>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <
>> stira...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
>>> wrote:
>>>



 *Below are the entire logs*


>>> Sorry, by the entire log I mean: can you attach or share the whole
>>> /var/log/vdsm/vdsm.log somewhere? The latest ten lines are not
>>> enough to point out the issue.
>>>
>>>




 *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *

 Detector thread::DEBUG::2015-11-26
 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
 Detected protocol xml from 127.0.0.1:50944
 Detector thread::DEBUG::2015-11-26
 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
 http detected from ('127.0.0.1', 50944)
 Detector thread::DEBUG::2015-11-26
 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
 Adding connection from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
 Connection removed from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
 Detected protocol xml from 127.0.0.1:50945
 Detector thread::DEBUG::2015-11-26
 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml over
 http detected from ('127.0.0.1', 50945)

Re: [ovirt-users] HA cluster

2015-12-02 Thread Sandro Bonazzola
On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju  wrote:

> please find the logs at the below-mentioned URL,
>
> http://pastebin.com/ZeKyyFbN
>

I'm sorry, but without a full sos report we can't help you.



>
>
> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan  wrote:
>>
>>> Maybe even makes sense to open a bugzilla ticket already. Better safe
>>> than sorry.
>>>
>>
>> We still need at least one log file to understand what happened.
>>
>>
>>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
>>> wrote:
>>>

 On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
 wrote:

> I do not know what logs you are expecting. The logs I got are pasted
> in the mail; if you need them on pastebin, let me know and I will
> upload them there.
>


 Please run sosreport utility and share the resulting archive where you
 prefer.
 You can follow this guide:
 http://www.linuxtechi.com/how-to-create-sosreport-in-linux/

>
>
> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola  > wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>> wrote:
>>
>>> I got only 10 lines in the vdsm logs; they are below,
>>>
>>>
>> Can you please provide full sos report?
>>
>>
>>
>>>
>>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
>>> Trying to release resource 'Storage.HsmDomainMonitorLock'
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
>>> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
>>> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone 
>>> is
>>> waiting for it.
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
>>> No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
>>> records.
>>> Thread-100::INFO::2015-11-27
>>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>>> stopMonitoringDomain, Return response: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state 
>>> preparing ->
>>> state finished
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>> Thread-100::DEBUG::2015-11-27
>>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting False
>>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi <
>>> stira...@redhat.com> wrote:
>>>


 On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju  wrote:

>
>
>
> *Below are the entire logs*
>
>
 Sorry, by the entire log I mean: can you attach or share the whole
 /var/log/vdsm/vdsm.log somewhere? The latest ten lines are not enough
 to point out the issue.


>
>
>
>
> *[root@he ~]# tail -f /var/log/vdsm/vdsm.log *
>
> Detector thread::DEBUG::2015-11-26
> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor::(_handle_connection_read)
> Detected protocol xml from 127.0.0.1:50944
> Detector thread::DEBUG::2015-11-26
> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) xml 
> over
> http detected from ('127.0.0.1', 50944)
> Detector thread::DEBUG::2015-11-26
> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> Adding connection from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor::(_remove_connection)
> Connection removed from 127.0.0.1:50945
> Detector thread::DEBUG::2015-11-26
> 

Re: [ovirt-users] . Failing to add iSCSI storage domain Users Digest, Vol 51, Issue 16

2015-12-02 Thread Jean-Pierre Ribeauville
CommandBase.executeCommand(QueriesCommandBase.java:73)
 [bll.jar:]

...



To cross-check iSCSI discovery another way, I ran this command from the
oVirt manager system command prompt:

iscsiadm -m discovery -t st -p 

I got a correct device list.

Thanks for help.



J.P. Ribeauville


P: +33.(0).1.47.17.20.49
.
Puteaux 3 Etage 5  Bureau 4

jpribeauvi...@axway.com
http://www.axway.com



Please consider the environment before printing.




--

Message: 2
Date: Wed, 2 Dec 2015 11:41:02 +0100
From: Simone Tiraboschi <stira...@redhat.com>
To: Budur Nagaraju <nbud...@gmail.com>
Cc: users <users@ovirt.org>
Subject: Re: [ovirt-users] HA cluster
Message-ID:
<can8-onrgt0got3jvgurpq6gam5b-rctjaxj4nd-fnk1f1ue...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbud...@gmail.com> wrote:

> please find the logs at the below-mentioned URL,
>
> http://pastebin.com/ZeKyyFbN
>

OK, the issue is here:

Thread-88::ERROR::2015-12-02
15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed 
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not 
available. Check that virtualization is enabled in the host BIOS, and host 
configuration is setup to load the kvm modules.
Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
unsupported configuration: Domain requires KVM, but it is not available.
Check that virtualization is enabled in the host BIOS, and host configuration 
is setup to load the kvm modules. (code=1)

but it's pretty strange, because hosted-engine-setup already explicitly checks
for virtualization support and simply exits with a clear error if it's missing.
Did you play with the kvm module while hosted-engine-setup was running?

Can you please share the hosted-engine-setup logs?


>
> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
> <stira...@redhat.com>
> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan <kovg...@gmail.com> wrote:
>>
>>> Maybe even makes sense to open a bugzilla ticket already. Better 
>>> safe than sorry.
>>>
>>
>> We still need at least one log file to understand what happened.
>>
>>
>>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stira...@redhat.com>
>>> wrote:
>>>
>>>>
>>>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
>>>> <nbud...@gmail.com>
>>>> wrote:
>>>>
>>>>> I do not know what logs you are expecting. The logs I got are
>>>>> pasted in the mail; if you need them on pastebin, let me know
>>>>> and I will upload them there.
>>>>>
>>>>
>>>>
>>>> Please run sosreport utility and share the resulting archive where 
>>>> you prefer.
>>>> You can follow this guide:
>>>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>>>>
>>>>>
>>>>>
>>>>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola 
>>>>> <sbona...@redhat.com
>>>>> > wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>>>>>> <nbud...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> I got only 10 lines in the vdsm logs; they are below,
>>>>>>>
>>>>>>>
>>>>>> Can you please provide full sos report?
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> [root@he /]# tail -f /var/log/vdsm/vdsm.log
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,36

Re: [ovirt-users] HA cluster

2015-12-02 Thread Budur Nagaraju
I have installed KVM in a nested environment on ESXi 6.x; is that
recommended? Apart from hosted engine, is there any other way to
configure an engine HA cluster?

-Nagaraju


On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi 
wrote:

>
>
> On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju  wrote:
>
>> please find the logs at the below-mentioned URL,
>>
>> http://pastebin.com/ZeKyyFbN
>>
>
> OK, the issue is here:
>
> Thread-88::ERROR::2015-12-02
> 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
> self._run()
>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
> self._connection.createXML(domxml, flags),
>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
> 119, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
> createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self)
> libvirtError: unsupported configuration: Domain requires KVM, but it is
> not available. Check that virtualization is enabled in the host BIOS, and
> host configuration is setup to load the kvm modules.
> Thread-88::DEBUG::2015-12-02
> 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
> unsupported configuration: Domain requires KVM, but it is not available.
> Check that virtualization is enabled in the host BIOS, and host
> configuration is setup to load the kvm modules. (code=1)
>
> but it's pretty strange, because hosted-engine-setup already explicitly checks
> for virtualization support and simply exits with a clear error if it's missing.
> Did you play with the kvm module while hosted-engine-setup was running?
>
> Can you please share the hosted-engine-setup logs?
>
>
>>
>> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan 
>>> wrote:
>>>
 Maybe even makes sense to open a bugzilla ticket already. Better safe
 than sorry.

>>>
>>> We still need at least one log file to understand what happened.
>>>
>>>
 On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
 wrote:

>
> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
> wrote:
>
>> I do not know what logs you are expecting. The logs I got are pasted
>> in the mail; if you need them on pastebin, let me know and I will
>> upload them there.
>>
>
>
> Please run sosreport utility and share the resulting archive where you
> prefer.
> You can follow this guide:
> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>
>>
>>
>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <
>> sbona...@redhat.com> wrote:
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>>> wrote:
>>>
 I got only 10 lines in the vdsm logs; they are below,


>>> Can you please provide full sos report?
>>>
>>>
>>>

 [root@he /]# tail -f /var/log/vdsm/vdsm.log
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
 Trying to release resource 'Storage.HsmDomainMonitorLock'
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
 Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
 Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone 
 is
 waiting for it.
 Thread-100::DEBUG::2015-11-27
 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
 No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing
 records.
 Thread-100::INFO::2015-11-27
 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
 stopMonitoringDomain, Return response: None
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
 Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState)
 Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state 
 preparing ->
 state finished
 Thread-100::DEBUG::2015-11-27
 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
 Owner.releaseAll requests {} resources {}

Re: [ovirt-users] HA cluster

2015-12-02 Thread Simone Tiraboschi
On Wed, Dec 2, 2015 at 12:19 PM, Budur Nagaraju  wrote:

> I have installed KVM in a nested environment on ESXi 6.x; is that
> recommended?
>

I often use KVM over KVM in nested environments, but honestly I have never
tried to run KVM over ESXi; I suspect all of your issues come from there.


> Apart from hosted engine, is there any other way to configure an
> engine HA cluster?
>

Nothing else from the project. You could run the engine on two external VMs
clustered with pacemaker, but that's completely up to you.


>
>
> -Nagaraju
>
>
> On Wed, Dec 2, 2015 at 4:11 PM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju 
>> wrote:
>>
>>> please find the logs at the below-mentioned URL,
>>>
>>> http://pastebin.com/ZeKyyFbN
>>>
>>
>> OK, the issue is here:
>>
>> Thread-88::ERROR::2015-12-02
>> 15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
>> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line
>> 119, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in
>> createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>> libvirtError: unsupported configuration: Domain requires KVM, but it is
>> not available. Check that virtualization is enabled in the host BIOS, and
>> host configuration is setup to load the kvm modules.
>> Thread-88::DEBUG::2015-12-02
>> 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
>> vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
>> unsupported configuration: Domain requires KVM, but it is not available.
>> Check that virtualization is enabled in the host BIOS, and host
>> configuration is setup to load the kvm modules. (code=1)
>>
>> but it's pretty strange, because hosted-engine-setup already explicitly
>> checks for virtualization support and simply exits with a clear error if
>> it's missing.
>> Did you play with the kvm module while hosted-engine-setup was running?
>>
>> Can you please share the hosted-engine-setup logs?
>>
>>
>>>
>>> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
>>> wrote:
>>>


 On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan 
 wrote:

> Maybe even makes sense to open a bugzilla ticket already. Better safe
> than sorry.
>

 We still need at least one log file to understand what happened.


> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" 
> wrote:
>
>>
>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
>> wrote:
>>
>>> I do not know what logs you are expecting. The logs I got are pasted
>>> in the mail; if you need them on pastebin, let me know and I will
>>> upload them there.
>>>
>>
>>
>> Please run the sosreport utility and share the resulting archive
>> wherever you prefer.
>> You can follow this guide:
>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
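For reference, the non-interactive form of the collection step looks
something like this (a sketch; --batch and --name are standard sos options,
but check `sosreport --help` for the version shipped on your host):

```shell
# Collect logs and configuration into a tarball without prompts;
# --name just tags the resulting archive. Run as root on the host.
command -v sosreport >/dev/null \
    && sosreport --batch --name myhost \
    || echo "sosreport not installed - install the sos package first"
```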
>>
>>>
>>>
>>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola <
>>> sbona...@redhat.com> wrote:
>>>


 On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
 wrote:

> I got only 10 lines in the vdsm logs; they are below:
>
>
 Can you please provide a full sos report?



>
> [root@he /]# tail -f /var/log/vdsm/vdsm.log
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(releaseResource)
> Trying to release resource 'Storage.HsmDomainMonitorLock'
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(releaseResource)
> Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(releaseResource)
> Resource 'Storage.HsmDomainMonitorLock' is free, finding out if 
> anyone is
> waiting for it.
> Thread-100::DEBUG::2015-11-27
> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(releaseResource)
> No one is waiting for resource 'Storage.HsmDomainMonitorLock', 
> Clearing
> records.
> Thread-100::INFO::2015-11-27
> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
> stopMonitoringDomain, Return response: None
> Thread-100::DEBUG::2015-11-27
> 

Re: [ovirt-users] Setting up oVirt for the first time

2015-12-02 Thread Alexander Wels
On Tuesday, December 01, 2015 05:59:59 PM Gervais de Montbrun wrote:
> Hi All,
> 
> I've done a lot of reading and lots of comparison of different hypervisors
> and tools and have decided that oVirt would be the best option. My initial
> use case is a not too new server that I will setup to run multiple
> development environments for the devs here, but I wanted something that
> will scale out to production as the vm infrastructure, hardware, etc. grows
> here.
> 
> I'll soon have a second server to run my vm's on and want to setup a
> self-hosted engine. I'm trying to find the most recent version of a how-to
> on the same. I found a presentation showing off how much easier it is to do
> this in oVirt 3.6 but can't find the correct docs. I seem to always end up
> with 3.5 or older versions. Can someone point me to a how-to for how best
> to achieve this?
> 
> Also, while I have you all here... :-)
> Is it possible to set up the hypervisor hosts themselves as NFS servers to
> create Storage (I realize that this will play havoc with the HA). We do
> have an NFS server that we will be upgrading to add storage and faster
> drives, but I was thinking that I may be able to use the internal storage
> of the hypervisors themselves as a short term stopgap and then migrate vm's
> to the upgraded NFS server later. Will that even work, or will it break
> somehow?
> 

I saw Simone gave you an answer about Gluster. Here is my DEVELOPMENT setup, 
which has been running like this for several years now; some of my VMs have 
uptimes of 100+ days. I am NOT doing the hosted engine, since I develop stuff 
for the engine and a hosted engine makes no sense for me. My setup is one 
option; I will also outline some other options which might be more appropriate 
for what you are trying to do:

* Development engine machine: this runs just the engine.
* 2x hosts, which both expose NFS shares for a data domain.
* Each host can see the NFS share of the other host.
* In my engine I have 2 data domains in one data center. One is the master 
domain, the other a secondary data domain.
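For what it's worth, the per-host export in a setup like this is just a
directory owned by vdsm:kvm (uid/gid 36) plus one /etc/exports line; the path
and options below are illustrative placeholders, not a recommendation:

```shell
# Build the /etc/exports entry a host would need; oVirt's vdsm runs as
# uid/gid 36, so squash anonymous access to that pair.
EXPORT_DIR=/exports/data                 # placeholder path
EXPORT_OPTS='rw,sync,no_subtree_check,anonuid=36,anongid=36'
LINE="$EXPORT_DIR *($EXPORT_OPTS)"
echo "$LINE"
# On a real host you would then:
#   mkdir -p "$EXPORT_DIR" && chown 36:36 "$EXPORT_DIR"
#   echo "$LINE" >> /etc/exports && exportfs -ra
```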

Pros of this setup:
* I can use the storage on both my hosts for VMs.
* I can live migrate VMs between hosts.
* I can storage migrate VMs between data domains.
* It's cheap and easy to set up.

Cons of this setup:
* If any host goes down, the entire data center goes down, since both hosts 
need to see all the storage. (Well, technically only the VMs on the storage 
domain that went down will die, plus all the VMs on the host that went down.) 
So this gives many points of failure for the entire thing to come crashing 
down. This is fine for me, since it's my development environment and nothing 
critical runs on it.
* It's not always clear on which data domain you will want to create a VM, but 
again for me it's not much of an issue, as this is a development environment.


Here are some other possible setups, which I am not sure will work with hosted 
engine; IIRC the hosted engine needs some kind of shared storage of its own, 
separate from the normal data domain. So let's assume you have the hosted 
engine up and running and are looking at ways of using the storage on the 
hosts.

Option 1:
Create a local storage data center and add your hosts to that data center.

Pros of this setup:
* VMs are on local storage, so they should have decent IO.
* If one host goes down, it will not affect any of the other hosts.
* It's cheap and easy to set up.

Cons of this setup:
* No live migration
* No storage migration
* Hard to migrate to a different storage solution later.

Option 2:
This is sort of a hybrid between my setup and option 1. Create several shared 
storage data centers, one for each host. Set up NFS on each host. Add one host 
to each data center and use that host's NFS share as the data domain.
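Before adding each host's own share as its data domain, it's worth a quick
sanity check that the export is visible and writable by vdsm (a sketch; the
path is a placeholder):

```shell
# List the host's NFS exports; the data directory should appear here.
showmount -e localhost 2>/dev/null \
    || echo "showmount unavailable or NFS server not running"
# Writability check for vdsm (uid 36) - run on the host itself:
#   su -s /bin/sh vdsm -c 'touch /exports/data/.probe && rm /exports/data/.probe'
```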

Pros of this setup:
* VMs are pretty much on local storage; there is some overhead from the NFS 
share, but it is still basically local.
* If one host goes down it will not affect any of the other hosts.

Cons of this setup:
* No live migration
* No storage migration
* Hard to migrate to a different storage solution later.

Given your situation of getting some fast NFS storage at a later point, here 
is what you can do if you take option 2. Once you get the new storage, you can 
easily add an import/export domain and export all your VMs to that storage. 
Once you have all your VMs on the import/export domain, create a new data 
center, add your hosts, add your shiny new NFS as a data domain, also add the 
same import/export domain, and import all the VMs into the new data domain. 
Note this might also work for option 1, assuming you can attach an 
import/export domain to a local storage data center (I have never tried, so I 
don't know).
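The export step itself can also be scripted against the engine's REST API
rather than clicked through the UI (a sketch against the v3 API; the engine
URL, credentials, VM id, and domain name are all placeholders, and the VM must
be shut down first):

```shell
# Build (and print, rather than send) the export request for one VM;
# drop the leading 'echo' to actually issue it with curl.
ENGINE='https://engine.example.com/api'      # placeholder engine URL
AUTH='admin@internal:secret'                 # placeholder credentials
VM_ID='PUT-VM-UUID-HERE'                     # placeholder VM id
BODY='<action><storage_domain><name>export1</name></storage_domain></action>'
echo curl -k -u "$AUTH" -H 'Content-Type: application/xml' \
     -X POST "$ENGINE/vms/$VM_ID/export" -d "$BODY"
```

Importing on the new data center is the mirror image, driven from the target
data domain.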

In 3.5+ I think you can also simply add a new data center, add your hosts, and 
import the data domains from the other data centers (after you disconnect them 
there), then storage migrate the