Re: [ovirt-users] oVirt 3.5.3.1 - Clone_VM Process deleted Source-VM and the Clone-VM does not contain any disk anymore

2015-08-30 Thread Omer Frenkel
On Fri, Aug 28, 2015 at 3:07 PM, Christian Rebel christian.re...@gmx.at
wrote:

 Dear all,



 I have started a Clone_VM over the GUI, but now the Source-VM has been
 deleted and the Target-VM does not contain any disk!

 The task status shows that “Copying Image” and “Finalize” have
 failed; I hope there is a way to restore the VM somehow – please help me…




This bug was fixed in 3.5.4:
*Bug 1236608* https://bugzilla.redhat.com/show_bug.cgi?id=1236608 - Source
VM is deleted after failed cloning attempt

Unfortunately there is no easy way to recover this VM. First check if the
disks are somehow still in the storage (maybe the copy went through and the
destination disk is there):
disk 1:
src -
/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/6281b597-020d-4ea7-a954-bb798a0ca4f1/

dst -
/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/e64d7be5-7643-4ba1-b347-80c923f130e6


disk 2:
src -
/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/e7e99288-ad83-406e-9cb6-7a5aa443de9b

dst -
/rhev/data-center/0002-0002-0002-0002-0021/937822d9-8a59-490f-95b7-48371ae32253/7f5dd048-048f-49c1-9589-c935fcdccfdd

If the disks are there, it may be possible to recover them manually using
the image uploader.
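If you want to script that check, here is a minimal sketch (assuming the
storage domain paths above are still valid and the domain is mounted on the
host; the image IDs are the ones listed above):

    import os

    base = ("/rhev/data-center/0002-0002-0002-0002-0021/"
            "937822d9-8a59-490f-95b7-48371ae32253")
    images = [
        "6281b597-020d-4ea7-a954-bb798a0ca4f1",  # disk 1 src
        "e64d7be5-7643-4ba1-b347-80c923f130e6",  # disk 1 dst
        "e7e99288-ad83-406e-9cb6-7a5aa443de9b",  # disk 2 src
        "7f5dd048-048f-49c1-9589-c935fcdccfdd",  # disk 2 dst
    ]
    for img in images:
        path = os.path.join(base, img)
        if os.path.isdir(path):
            print "FOUND %s: %s" % (img, os.listdir(path))
        else:
            print "missing %s" % img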


From the Logfile:



 2015-08-28 12:47:20,950 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (DefaultQuartzScheduler_Worker-7) Correlation ID: null, Call Stack: null,
 Custom Event ID: -1, Message: VM Katello is down. Exit message: User shut
 down from within the guest

 2015-08-28 12:47:20,955 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-7) VM Katello
 (9013e3c2-3cd7-4eae-a3e6-f5e83a64db87) is running in db and not running in
 VDS itsatltovirtaio.domain.local

 2015-08-28 12:47:20,957 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
 (DefaultQuartzScheduler_Worker-7) START, FullListVdsCommand(HostName =
 itsatltovirtaio.domain.local, HostId =
 b783a2ee-4a63-46ca-9afc-b3b74f0e10ce,
 vds=Host[itsatltovirtaio.domain.local,b783a2ee-4a63-46ca-9afc-b3b74f0e10ce],
 vmIds=[9013e3c2-3cd7-4eae-a3e6-f5e83a64db87]), log id: 39590448

 2015-08-28 12:47:20,966 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
 (DefaultQuartzScheduler_Worker-7) FINISH, FullListVdsCommand, return: [],
 log id: 39590448

 2015-08-28 12:47:21,046 INFO
 [org.ovirt.engine.core.bll.ProcessDownVmCommand]
 (org.ovirt.thread.pool-8-thread-17) [82bee5d] Running command:
 ProcessDownVmCommand internal: true.

 2015-08-28 12:47:24,589 INFO
 [org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
 (DefaultQuartzScheduler_Worker-3) Polling and updating Async Tasks: 2
 tasks, 2 tasks to poll now

 2015-08-28 12:47:24,600 INFO
 [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
 (DefaultQuartzScheduler_Worker-3) SPMAsyncTask::PollTask: Polling task
 037b2c85-68d2-4159-8310-91c472038b5b (Parent Command
 ProcessOvfUpdateForStorageDomain, Parameters Type
 org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned
 status finished, result 'success'.

 2015-08-28 12:47:24,603 INFO
 [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
 (DefaultQuartzScheduler_Worker-3) BaseAsyncTask::onTaskEndSuccess: Task
 037b2c85-68d2-4159-8310-91c472038b5b (Parent Command
 ProcessOvfUpdateForStorageDomain, Parameters Type
 org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
 successfully.

 2015-08-28 12:47:24,604 INFO
 [org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
 (DefaultQuartzScheduler_Worker-3) Task with DB Task ID
 0e6a6d72-0cea-41aa-8fe9-9262bc53d558 and VDSM Task ID
 cd125365-3344-4f45-b67a-39c2fa5112ab is in state Polling. End action for
 command e9edfed0-915a-4534-b774-c07682bafa59 will proceed when all the
 entitys tasks are completed.

 2015-08-28 12:47:24,605 INFO
 [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
 (DefaultQuartzScheduler_Worker-3) SPMAsyncTask::PollTask: Polling task
 cd125365-3344-4f45-b67a-39c2fa5112ab (Parent Command
 ProcessOvfUpdateForStorageDomain, Parameters Type
 org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned
 status finished, result 'success'.

 2015-08-28 12:47:24,606 INFO
 [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
 (DefaultQuartzScheduler_Worker-3) BaseAsyncTask::onTaskEndSuccess: Task
 cd125365-3344-4f45-b67a-39c2fa5112ab (Parent Command
 ProcessOvfUpdateForStorageDomain, Parameters Type
 org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
 successfully.

 2015-08-28 12:47:24,606 INFO
 [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
 (DefaultQuartzScheduler_Worker-3) CommandAsyncTask::endActionIfNecessary:
 All tasks of command e9edfed0-915a-4534-b774-c07682bafa59 has ended -
 executing endAction

 2015-08-28 12:47:24,607 INFO
 [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
 (DefaultQuartzScheduler_Worker-3) 

Re: [ovirt-users] Couple issues found with oVirt 3.6.0 Third Beta Release

2015-08-30 Thread Yaniv Dary


On Fri, Aug 28, 2015 at 4:19 AM, SULLIVAN, Chris (WGK) 
chris.sulli...@woodgroupkenny.com wrote:

 Hi,

 I recently re-installed my test oVirt environment, upgrading from 3.5.2 to
 3.6.0 beta 3 (Engine 3.6.0-0.0.master.20150819134454.gite6b79a7.el7.centos,
 VDSM 4.17.3-0.el7.centos, GlusterFS 3.7.3-1.el7) in the process. Due to a
 software issue outside of oVirt I had to start with a clean install for
 3.6; however, I kept all the old storage domains. All hosts and the engine
 are running CentOS 7.1 and I'm using hosted-engine.

 During the setup/recovery process I encountered the following issues:

 :: Could not create GlusterFS Data domain due to 'General Exception'
 I attempted to create a Data domain using a FQDN associated with a
 floating IP, so that hosts could still mount the domain when the specific
 GlusterFS host used to define the storage was down. This FQDN was
 resolvable and contactable on each host in the farm. The floating IP is
 shared between two of the four GlusterFS hosts. The logs reported an
 unhandled exception ('x not in list') raised by the below statement (line
 340 in
 https://github.com/oVirt/vdsm/blob/master/vdsm/storage/storageServer.py ):
 def _get_backup_servers_option(self):
     servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
     servers.remove(self._volfileserver)   # <--- Exception thrown here
     if not servers:
         return ""

     return "backup-volfile-servers=" + ":".join(servers)

 My assumption (without looking too deep in the code) was that since I used
 a FQDN that did not have any bricks associated with it,
 'self._volfileserver' would be set to a name that would not appear in
 'servers', resulting in the exception. I patched it as per the following:
 def _get_backup_servers_option(self):
     servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
     if self._volfileserver in servers:
         self.log.warn("Removing current volfileserver %s..." %
                       self._volfileserver)
         servers.remove(self._volfileserver)
     else:
         self.log.warn("Current volfileserver not in servers.")
     if not servers:
         return ""

     return "backup-volfile-servers=" + ":".join(servers)

 Once patched, the Data domain was created successfully and appears to be
 working normally, although I'm not sure if the above change has any
 negative knock-on effects throughout the code or in specific situations.
 I'd suggest that the _get_backup_servers_option method be tweaked to handle
 this configuration gracefully by someone with more knowledge of the code,
 either by allowing the configuration or rejecting it with a suitable error
 message if the configuration is intended to be unsupported.
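
 For illustration only, the 'reject with a suitable error' variant could look
 something like this sketch (the exception type is a placeholder; real vdsm
 code would raise one of its own storage exception classes):

     def _get_backup_servers_option(self):
         servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
         if self._volfileserver not in servers:
             # Placeholder error; vdsm would use a proper StorageException.
             raise ValueError(
                 "volfile server %r is not a brick host of this volume: %r"
                 % (self._volfileserver, servers))
         servers.remove(self._volfileserver)
         if not servers:
             return ""
         return "backup-volfile-servers=" + ":".join(servers)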

 :: Could not import VMs from old Data domain due to unsupported video type
 (VGA)
 Once the new data center was up and running, I attached the old Data
 domain and attempted to import the VMs/templates. Template import worked
 fine, however VM import failed with an error stating the video device
 (which came up as VGA) was not supported. I attempted to fix this by
 specifically defining the video type as 'qxl' in the .ovf file for the VM
 in the OVF_STORE for the old storage; however, the VM would always come up
 with video type VGA in the import dialog, and the import dialog does not
 permit the value to be changed.

 The workaround was to add 'vnc/vga' to the supported protocols list in a
 .properties file in the engine OSinfo folder, e.g.:
 os.other.devices.display.protocols.value =
 spice/qxl,vnc/cirrus,vnc/qxl,vnc/vga
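
 For reference, a hedged sketch of where such an override typically lives
 (assuming the standard 3.x osinfo mechanism; the file name is arbitrary as
 long as it sorts after the defaults):

     # /etc/ovirt-engine/osinfo.conf.d/90-custom.properties
     os.other.devices.display.protocols.value = spice/qxl,vnc/cirrus,vnc/qxl,vnc/vga

 followed by a restart of the ovirt-engine service.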

 Once the engine was restarted the VM import process worked fine, and there
 have been no issues starting the VM with a VGA device or accessing the VM's
 console. To resolve the issue I'd suggest that either:
  - 'vnc/vga' be added to the default supported protocols list; or
 - the video type defined in the .ovf file for the VM to be imported is
 recognized/honoured by the import dialog; or
 - if the import dialog defaults to a particular video device, that it
 default to one that is supported by the engine for the OS defined in the
 VM's .ovf file.

 I can create Bugzilla entries for the above if required.



Please do.
Thanks!



 Cheers,

 Chris





[ovirt-users] Automatically start VM after host reboot

2015-08-30 Thread gregor
Hi,

I installed oVirt all-in-one. Is there a way to automatically start
certain VMs after the host reboots?

cheers
gregor


Re: [ovirt-users] Moving HostedEngine from one Cluster to another

2015-08-30 Thread Yaniv Dary
Can you open a bug on this please?



On Tue, Aug 25, 2015 at 5:14 PM, Groten, Ryan ryan.gro...@stantec.com
wrote:

 My HostedEngine exists in the Default cluster, but since I’m upgrading my
 hosts to RHEL7 I created a new Cluster and migrated all the VMs to it
  (including HostedEngine).  However, in the GUI VM tab HostedEngine still
  appears to be in the Default cluster.  Because of this I can’t remove that
  cluster (it thinks there’s a VM in it) even though there are no more hosts
  or VMs in it.



 I also can’t change the cluster of HostedEngine, it says “Cannot edit VM
 Cluster.  This VM is not managed by the engine”.



 Thanks,

 Ryan



Re: [ovirt-users] Trying hosted-engine on ovirt-3.6 beta

2015-08-30 Thread Elad Ben Aharon
Seems like an issue with the configuration image saved on the shared
storage. Simone, can you take a look?
Thanks



2015-08-28 17:18:30 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.heconf heconflib.create_heconfimage:230 stderr: dd: failed to open ‘/rhev/data-center/mnt/lich**FILTERED**:_nfs_ovirt-he_data/03eb6ca0-b532-4949-a0dd-085520bc54eb/images/bad49156-7aa8-448f-b8b7-174854821668/04bd6885-d1e4-4943-9450-638541234339’: Permission denied

2015-08-28 17:18:30 DEBUG otopi.context context._executeMethod:155 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 145, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/heconf.py", line 150, in _closeup_create_tar
    dest
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/heconflib.py", line 233, in create_heconfimage
    raise RuntimeError('Unable to write HEConfImage')
RuntimeError: Unable to write HEConfImage
2015-08-28 17:18:30 ERROR otopi.context context._executeMethod:164 Failed to execute stage 'Closing up': Unable to write HEConfImage
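
One thing worth checking in the meantime (an assumption, not a confirmed
diagnosis): "dd: ... Permission denied" on hosted-engine storage is usually
an ownership problem, since vdsm writes as uid/gid 36 (vdsm:kvm). A small
sketch to inspect the mountpoint from the host; the path is a placeholder
because the real one is filtered in the log above:

    import os
    import stat

    # Placeholder path; substitute the real mountpoint from the log above.
    p = "/rhev/data-center/mnt/SERVER:_nfs_ovirt-he_data"
    st = os.stat(p)
    print "uid=%d gid=%d mode=%o" % (st.st_uid, st.st_gid,
                                     stat.S_IMODE(st.st_mode))
    # Expected: uid 36, gid 36. If not, on the NFS server something like
    #   chown -R 36:36 /nfs/ovirt-he/data
    # and exporting with anonuid=36,anongid=36 is the usual documented setup.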

On Fri, Aug 28, 2015 at 6:33 PM, Joop jvdw...@xs4all.nl wrote:

 Hi All,

 I have been trying the above and keep getting an error at the end about
 unable to write to HEConfImage, see attached log.

 Host is Fedora22 (clean system), engine is Centos-7.1, followed the
 readme from the 3.6beta release notes but in short:
 - setup a nfs server on the fedora22 host
 - exported /nfs/ovirt-he/data
 - installed yum, installed the 3.6 beta repo
 - installed hosted engine
 - ran setup
 - installed centos7.1, ran engine-setup

 Tried with and without selinux/iptables/firewalld.

 Regards,

 Joop






Re: [ovirt-users] New storage domain on nfs share

2015-08-30 Thread Aharon Canan
Don't you have an option to create a data NFS domain?

Regards, 
__ 
Aharon Canan 

- Original Message -

 From: gregor gregor_fo...@catrix.at
 To: users@ovirt.org
 Sent: Friday, August 28, 2015 8:37:59 PM
 Subject: [ovirt-users] New storage domain on nfs share

 Hi,

 what is the right way to create a storage domain on an NFS share to use as
 data storage for virtual machines?

 It is only possible to create an ISO/NFS or Export/NFS storage, where
 I cannot create disks for a virtual machine.

 cheers
 gregor


Re: [ovirt-users] Storage domain / new harddisk overwrites existing harddisk

2015-08-30 Thread Yaniv Dary
Can you please add logs?



On Tue, Aug 25, 2015 at 6:43 PM, Bernhard Krieger b...@noremorze.at wrote:

 Hello,

 after adding a new VM/hard disk, the existing disk of a running VM was
 overwritten. I do not know why this happened, so I need your help.



 * ovirt-engine
 - OS: CentOS 7
 - packages:
 ovirt-engine-sdk-python-3.5.2.1-1.el7.centos.noarch
 ovirt-engine-cli-3.5.0.6-1.el7.noarch
 ovirt-engine-lib-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-jboss-as-7.1.1-1.el7.x86_64
 ovirt-engine-setup-plugin-websocket-proxy-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-tools-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-restapi-3.5.3.1-1.el7.centos.noarch
 ovirt-image-uploader-3.5.1-1.el7.centos.noarch
 ovirt-host-deploy-java-1.3.1-1.el7.noarch
 ovirt-engine-setup-plugin-ovirt-engine-common-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-webadmin-portal-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-userportal-3.5.3.1-1.el7.centos.noarch
 ovirt-iso-uploader-3.5.2-1.el7.centos.noarch
 ovirt-engine-extensions-api-impl-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-dbscripts-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-setup-plugin-ovirt-engine-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-backend-3.5.3.1-1.el7.centos.noarch
 ovirt-host-deploy-1.3.1-1.el7.noarch
 ovirt-engine-setup-base-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-websocket-proxy-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-setup-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-3.5.3.1-1.el7.centos.noarch
 ovirt-engine-extension-aaa-ldap-1.0.2-1.el7.noarch
 ovirt-release35-005-1.noarch


 * 4 ovirt hosts,  risn-ovirt0 to risn-ovirt3

 - OS: Centos 7
 - packages
 ovirt-release35-005-1.noarch
 vdsm-xmlrpc-4.16.20-0.el7.centos.noarch
 vdsm-4.16.20-0.el7.centos.x86_64
 vdsm-jsonrpc-4.16.20-0.el7.centos.noarch
 vdsm-yajsonrpc-4.16.20-0.el7.centos.noarch
 vdsm-cli-4.16.20-0.el7.centos.noarch
 vdsm-python-zombiereaper-4.16.20-0.el7.centos.noarch
 vdsm-python-4.16.20-0.el7.centos.noarch


 * 1 storage domain called space0.dc1 (Data / Fibre Channel)
 This storage domain is attached and accessible to all ovirt hosts.

 Some details of the space0.dc1

 [root@]# multipath -ll
 360060e80101e2500058be2200bb8 dm-2 HITACHI ,DF600F
 size=550G features='0' hwhandler='0' wp=rw
 |-+- policy='service-time 0' prio=1 status=active
 | |- 2:0:1:1 sdh 8:112  active ready  running
 | `- 3:0:1:1 sdt 65:48  active ready  running
 `-+- policy='service-time 0' prio=0 status=enabled
   |- 2:0:0:1 sdd 8:48   active ready  running
   `- 3:0:0:1 sdp 8:240  active ready  running

   --- Volume group ---
   VG Name   bb5b6729-c654-4ebe-ba96-8ddc96154595
   System ID
   Format                lvm2
   Metadata Areas2
   Metadata Sequence No  82
   VG Access read/write
   VG Status resizable
   MAX LV0
   Cur LV14
   Open LV   3
   Max PV0
   Cur PV1
   Act PV1
   VG Size   549.62 GiB
   PE Size   128.00 MiB
   Total PE  4397
   Alloc PE / Size   1401 / 175.12 GiB
   Free  PE / Size   2996 / 374.50 GiB
   VG UUID   cE1V2A-ghtI-PldK-UNKH-d83q-tKg0-BdTZdy

 * vm elvira (existing server)
 OS: Linux
 Disk size: 30GB
 harddisk: space0.dc1
 Image id: c11f91e2-ebec-41ee-b3b4-ceb013a58743

 * vm betriebsserver (new one)
 OS: Windows
 disk size: 300GB
 harddisk: space0.dc1
 image id: e19c6c85-cfa6-4350-9a01-48d007f6f934


 I did the following steps:

 * extended the storage domain to 550GB on our storage system.

 I executed the following commands on every ovirt host:
 -  for letter in {a..z} ; do echo 1 
 /sys/block/sd${letter}/device/rescan; done
 -  multipathd resize map 360060e80101e2500058be2200bb8
 -  pvresize /dev/mapper/360060e80101e2500058be2200bb8

 * After that I created a new server called betriebsserver

 * added a new harddisk with 300GB and attached it to the betriebsserver

 * installed the Windows OS.

 * At 13:39 I rebooted another VM called elvira, but the server would not
 come up because the hard disk was missing.

 Logfile of risn-ovirt3 where elvira was running.
 Thread-449591::ERROR::2015-08-25
 13:39:12,908::task::866::Storage.TaskManager.Task::(_setError)
 Task=`4b84a935-276e-441c-8c0b-3ddb809ce853`::Unexpected error
 Thread-449591::ERROR::2015-08-25
 13:39:12,911::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
 {'message': Volume does not exist:
 ('c11f91e2-ebec-41ee-b3b4-ceb013a58743',), 'code': 201}}

 The server was unable to boot due to the missing hard disk.
 I tried it on every ovirt host, but no success.


 * I checked if the LV exists on risn-ovirt3
 [root@risn-ovirt3 vdsm]# find / -name *c11f91e2-ebec-41ee-b3b4-ceb013a58743*

 

[ovirt-users] Daily online VM backups

2015-08-30 Thread gregor
Hi,

what is the best way to make daily backups of my VMs without shutting
them down?

I found the Backup-Restore API and other stuff, but no ready-to-run
tool/script which I can use. I plan to integrate it into backuppc. Or is
there any best-practice guide for backup? ;-)

In the meantime I integrated engine-backup into backuppc as a pre-script.

cheers
gregor
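
As a starting point, online snapshots are the usual building block for live
backups. A rough, untested sketch with the v3 Python SDK
(ovirt-engine-sdk-python); URL, credentials and VM name are placeholders,
and this is not a complete backup (no export, transfer or snapshot cleanup):

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url="https://engine.example.com/ovirt-engine/api",
              username="admin@internal", password="secret", insecure=True)
    vm = api.vms.get(name="myvm")
    # Create a live snapshot without saving memory state.
    vm.snapshots.add(params.Snapshot(description="nightly-backup",
                                     persist_memorystate=False))
    api.disconnect()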


Re: [ovirt-users] New storage domain on nfs share

2015-08-30 Thread Yaniv Dary
All-in-one is no longer going to be supported; the recommended install for
3.6+ is to use the self-hosted engine. Doing this will allow you to add an
NFS data domain as well.



On Sun, Aug 30, 2015 at 11:13 AM, gregor gregor_fo...@catrix.at wrote:

 No, but only in the all-in-one installation. When I install only the
 engine, where I have to create the cluster by myself, I can create an
 NFS data domain.
 Another solution/workaround is to mount the NFS share in the underlying
 operating system and use this local folder.

 cheers
 gregor

 On 2015-08-30 09:49, Aharon Canan wrote:
  Don't you have an option to create a data NFS domain?
 
 
 
  Regards,
  __
  Aharon Canan
 
  
 
  *From: *gregor gregor_fo...@catrix.at
  *To: *users@ovirt.org
  *Sent: *Friday, August 28, 2015 8:37:59 PM
  *Subject: *[ovirt-users] New storage domain on nfs share
 
  Hi,
 
  what is the right way to create a storage domain on an NFS share to use
  as data storage for virtual machines?
 
  It is only possible to create an ISO/NFS or Export/NFS storage, where
  I cannot create disks for a virtual machine.
 
  cheers
  gregor


Re: [ovirt-users] New storage domain on nfs share

2015-08-30 Thread gregor
No, but only in the all-in-one installation. When I install only the
engine, where I have to create the cluster by myself, I can create an
NFS data domain.
Another solution/workaround is to mount the NFS share in the underlying
operating system and use this local folder.

cheers
gregor

On 2015-08-30 09:49, Aharon Canan wrote:
 Don't you have an option to create a data NFS domain?
 
 
 
 Regards,
 __
 Aharon Canan
 
 
 
 *From: *gregor gregor_fo...@catrix.at
 *To: *users@ovirt.org
 *Sent: *Friday, August 28, 2015 8:37:59 PM
 *Subject: *[ovirt-users] New storage domain on nfs share
 
 Hi,
 
 what is the right way to create a storage domain on an NFS share to use as
 data storage for virtual machines?
 
 It is only possible to create an ISO/NFS or Export/NFS storage, where
 I cannot create disks for a virtual machine.
 
 cheers
 gregor


Re: [ovirt-users] Windows 2012 R2 template deploy unable to login

2015-08-30 Thread Yaniv Dary
We have not yet added Windows 2012 handling, so things might not work as you
encountered.



On Wed, Aug 26, 2015 at 1:24 PM, Ian Fraser ian.fra...@asm.org.uk wrote:

 Hi,
  I am failing to get a Windows Server 2012 R2 template to deploy
 successfully. It seems to go through the motions, but I cannot log in with
 the password given in the initial run. It seems that something is failing
 in the sysprep part. The ovirt docs only mention Win 7/2008. Has anyone had
 any success in doing this?

 Best regards

 Ian Fraser

 Systems Administrator | Agency Sector Management (UK) Limited |
 www.asm.org.uk
 [t] +44 (0)1784 242200 | [f] +44 (0)1784 242012 | [e]
 ian.fra...@asm.org.uk
 Ashford House  41-45 Church Road  Ashford  Middx  TW15 2TQ
 Follow us on twitter @asmukltd

 



Re: [ovirt-users] Automatically start VM after host reboot

2015-08-30 Thread Eli Mesika


- Original Message -
 From: gregor gregor_fo...@catrix.at
 To: Users@ovirt.org
 Sent: Sunday, August 30, 2015 2:30:00 PM
 Subject: [ovirt-users] Automatically start VM after host reboot
 
 Hi,
 
  I installed oVirt all-in-one. Is there a way to automatically start
  certain VMs after the host reboots?

See
http://lists.ovirt.org/pipermail/users/2014-November/029424.html
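
One way to script it (a hedged sketch, not necessarily what the linked post
suggests) is the v3 Python SDK, run from cron or a systemd unit after boot;
URL, credentials and VM names are placeholders, untested:

    from ovirtsdk.api import API

    api = API(url="https://engine.example.com/ovirt-engine/api",
              username="admin@internal", password="secret", insecure=True)
    for name in ("vm1", "vm2"):  # the VMs you want auto-started
        vm = api.vms.get(name=name)
        if vm is not None and vm.status.state == "down":
            vm.start()
    api.disconnect()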

 
 cheers
 gregor


Re: [ovirt-users] Some VMs in status not responding in oVirt interface

2015-08-30 Thread Christian Hailer
Hello Yaniv,

 

here are the engine logs around the time it happened today. I just rebooted the
server and started up the VMs; 15 minutes later the VM “Management” didn’t
respond anymore…

Do you want to see the VM’s logs, meaning the logs of the OS the VM is
running? Or are there any oVirt logs for each VM?

 

Best regards, Christian

 

2015-08-30 10:58:12,913 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-94) VM kube-minion3 69d30e11-4fdc-4129-9e17-36ff37e32bfa moved from WaitForLaunch -- PoweringUp

2015-08-30 10:58:12,914 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-94) START, FullListVdsCommand(HostName = ovirt-engine, HostId = 5af1efa7-5ddd-4df3-ac64-faa101bb505b, vds=Host[ovirt-engine,5af1efa7-5ddd-4df3-ac64-faa101bb505b], vmIds=[69d30e11-4fdc-4129-9e17-36ff37e32bfa]), log id: 70ac29e4

2015-08-30 10:58:12,920 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-94) FINISH, FullListVdsCommand, return: [{acpiEnable=true, emulatedMachine=rhel6.5.0, vmId=69d30e11-4fdc-4129-9e17-36ff37e32bfa, memGuaranteedSize=2048, transparentHugePages=true, displaySecurePort=5917, spiceSslCipherSuite=DEFAULT, cpuType=SandyBridge, smp=2, pauseCode=NOERR, smartcardEnable=false, custom={device_82b2d838-86c3-45fa-a5bb-52294d7d35c3device_a6c48967-806c-4052-bc24-80d098d9330edevice_b92a9519-b2e0-438c-9969-e376edfad414=VmDevice {vmId=69d30e11-4fdc-4129-9e17-36ff37e32bfa, deviceId=b92a9519-b2e0-438c-9969-e376edfad414, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0, type=virtio-serial, port=2}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null, logicalName=null}, device_82b2d838-86c3-45fa-a5bb-52294d7d35c3device_a6c48967-806c-4052-bc24-80d098d9330e=VmDevice {vmId=69d30e11-4fdc-4129-9e17-36ff37e32bfa, deviceId=a6c48967-806c-4052-bc24-80d098d9330e, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0, type=virtio-serial, port=1}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}, snapshotId=null, logicalName=null}, device_82b2d838-86c3-45fa-a5bb-52294d7d35c3=VmDevice {vmId=69d30e11-4fdc-4129-9e17-36ff37e32bfa, deviceId=82b2d838-86c3-45fa-a5bb-52294d7d35c3, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={slot=0x01, bus=0x00, domain=0x, type=pci, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}, snapshotId=null, logicalName=null}, device_82b2d838-86c3-45fa-a5bb-52294d7d35c3device_a6c48967-806c-4052-bc24-80d098d9330edevice_b92a9519-b2e0-438c-9969-e376edfad414device_ad58a63f-0aa4-4c3e-8232-c9e0f8d4ec4b=VmDevice {vmId=69d30e11-4fdc-4129-9e17-36ff37e32bfa, deviceId=ad58a63f-0aa4-4c3e-8232-c9e0f8d4ec4b, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0, type=virtio-serial, port=3}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null, logicalName=null}}, vmType=kvm, memSize=2048, smpCoresPerSocket=2, vmName=kube-minion3, nice=0, status=Up, bootMenuEnable=true, pid=11142, copyPasteEnable=true, displayIp=172.20.1.254, displayPort=-1, guestDiskMapping={}, clientIp=, fileTransferEnable=true, nicModel=rtl8139,pv, keyboardLayout=en-us, kvmEnable=true, pitReinjection=false, displayNetwork=zw2001, devices=[Ljava.lang.Object;@36f27186, timeOffset=7200, maxVCpus=32, spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard, display=qxl}], log id: 70ac29e4

2015-08-30 10:58:12,923 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-94) Received a spice Device without an address when processing VM 69d30e11-4fdc-4129-9e17-36ff37e32bfa devices, skipping device: {device=spice, specParams={displayNetwork=zw2001, spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard, keyMap=en-us, displayIp=172.20.1.254, copyPasteEnable=true}, deviceType=graphics, type=graphics, tlsPort=5917}

2015-08-30 10:58:19,143 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-13) VM kube-minion1 91454162-2bc6-4119-ab5e-2ca77740bb41 moved from WaitForLaunch -- PoweringUp

2015-08-30 10:58:19,144 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-13) START, FullListVdsCommand(HostName = ovirt-engine, HostId = 5af1efa7-5ddd-4df3-ac64-faa101bb505b, vds=Host[ovirt-engine,5af1efa7-5ddd-4df3-ac64-faa101bb505b], vmIds=[91454162-2bc6-4119-ab5e-2ca77740bb41]), log id: 30059eb4

2015-08-30 10:58:19,149 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-13) FINISH, FullListVdsCommand, return:

Re: [ovirt-users] Couple issues found with oVirt 3.6.0 Third Beta Release

2015-08-30 Thread Maor Lipchuk




- Original Message -
 From: Yaniv Dary yd...@redhat.com
 To: Chris SULLIVAN (WGK) chris.sulli...@woodgroupkenny.com
 Cc: users@ovirt.org
 Sent: Sunday, August 30, 2015 2:28:16 PM
 Subject: Re: [ovirt-users] Couple issues found with oVirt 3.6.0 Third Beta
 Release
 
 
 
 
 On Fri, Aug 28, 2015 at 4:19 AM, SULLIVAN, Chris (WGK) 
 chris.sulli...@woodgroupkenny.com  wrote:
 
 
 Hi,
 
 I recently re-installed my test oVirt environment, upgrading from 3.5.2 to
 3.6.0 beta 3 (Engine 3.6.0-0.0.master.20150819134454.gite6b79a7.el7.centos,
 VDSM 4.17.3-0.el7.centos, GlusterFS 3.7.3-1.el7) in the process. Due to a
 software issue outside of oVirt I had to start with a clean install for 3.6,
 however I kept all the old storage domains. All hosts and the engine are
 running CentOS 7.1 and I'm using hosted-engine.
 
 During the setup/recovery process I encountered the following issues:
 
 :: Could not create GlusterFS Data domain due to 'General Exception'
 I attempted to create a Data domain using a FQDN associated with a floating
 IP, so that hosts could still mount the domain when the specific GlusterFS
 host used to define the storage was down. This FQDN was resolvable and
 contactable on each host in the farm. The floating IP is shared between two
 of the four GlusterFS hosts. The logs reported an unhandled exception ('x
 not in list') raised by the below statement (line 340 in
 https://github.com/oVirt/vdsm/blob/master/vdsm/storage/storageServer.py ):
 def _get_backup_servers_option(self):
     servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
     servers.remove(self._volfileserver)  # <--- Exception thrown here
     if not servers:
         return ""

     return "backup-volfile-servers=" + ":".join(servers)
 
 My assumption (without looking too deep in the code) was that since I used a
 FQDN that did not have any bricks associated with it, 'self._volfileserver'
 would be set to a name that would not appear in 'servers', resulting in the
 exception. I patched it as per the following:
 def _get_backup_servers_option(self):
     servers = [brick.split(":")[0] for brick in self.volinfo['bricks']]
     if self._volfileserver in servers:
         self.log.warn("Removing current volfileserver %s..." % self._volfileserver)
         servers.remove(self._volfileserver)
     else:
         self.log.warn("Current volfileserver not in servers.")
     if not servers:
         return ""

     return "backup-volfile-servers=" + ":".join(servers)
 
 Once patched, the Data domain was created successfully and appears to be working
 normally, although I'm not sure if the above change has any negative
 knock-on effects throughout the code or in specific situations. I'd suggest
 that the _get_backup_servers_option method be tweaked to handle this
 configuration gracefully by someone with more knowledge of the code, either
 by allowing the configuration or rejecting it with a suitable error message
 if the configuration is intended to be unsupported.
 
 :: Could not import VMs from old Data domain due to unsupported video type
 (VGA)
 Once the new data center was up and running, I attached the old Data domain
 and attempted to import the VMs/templates. Template import worked fine,
 however VM import failed with an error stating the video device (which came
 up as VGA) was not supported. I attempted to fix this by specifically
 defining the video type as 'qxl' in the .ovf file for the VM in the
 OVF_STORE for the old storage however the VM would always come up with video
 type VGA in the import dialog, and the import dialog does not permit the
 value to be changed.
 
 The workaround was to add 'vnc/vga' to the supported protocols list in a
 .properties file in the engine OSinfo folder, e.g.:
 os.other.devices.display.protocols.value =
 spice/qxl,vnc/cirrus,vnc/qxl,vnc/vga
 
 Once the engine was restarted the VM import process worked fine, and there
 have been no issues starting the VM with a VGA device or accessing the VM's
 console. To resolve the issue I'd suggest that either:
 - 'vnc/vga' be added to the default supported protocols list; or
 - the video type defined in the .ovf file for the VM to be imported is
 recognized/honoured by the import dialog; or
 - if the import dialog defaults to a particular video device, that it default
 to one that is supported by the engine for the OS defined in the VM's .ovf
 file.


Once the storage domain is attached, all the OVF data from the OVF_STORE disk is
copied to a DB table called unregistered_ovf_of_entities.
This table is used to determine the VM configuration once you register (import)
a VM into the setup.
You probably still saw VGA after changing the OVF file because the data in this
table had not been changed.

 
 I can create Bugzilla entries for the above if required.
 
 
 Please do.
 Thanks!
 
 
 
 Cheers,
 
 Chris
 
 
 
 
 PLEASE 

Re: [ovirt-users] searching vm disks according to vm tags

2015-08-30 Thread Eli Mesika


- Original Message -
 From: Yaniv Dary yd...@redhat.com
 To: Eli Mesika emes...@redhat.com, Liron Kuchlani lkuch...@redhat.com
 Sent: Sunday, August 30, 2015 2:30:55 PM
 Subject: Fwd: [ovirt-users] searching vm disks according to vm tags
 
 
 
 -- Forwarded message --
 From: Jiří Sléžka jiri.sle...@slu.cz
 Date: Thu, Aug 27, 2015 at 11:41 AM
 Subject: [ovirt-users] searching vm disks according to vm tags
 To: users@ovirt.org users@ovirt.org
 
 
 Hello,
 
 Is there a possibility to filter (in the manager) all disks which belong to
 VMs with a certain tag? If not, I think it would be useful. What do you think?

Currently the Disks tab in webadmin does not support searching by tag; you may
open an RFE for that.

 
 I am trying this: I have about 50 VMs with disks on different storages. I
 have tagged some of these VMs with the tag migrate_to_new_storage and then I
 would like to move all the appropriate disks to the new storage. It seems it is
 not possible from the manager (in a simple way - it looks like I have to filter
 VMs by tag, then click on every one of them, choose the disks tab, select them,
 click move,...).
 
 It would also be nice to have the possibility to filter by tags with the
 boolean operator AND and not just OR.

There is already an RFE for that.
For doing so we should add a tab in the webadmin for tags that would enable it.
The challenge is that the result can be mixed entities (hosts, VMs, etc.), which
is not trivial to display in one result list ...
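
In the meantime, the migration itself can be scripted rather than clicked
through. A hedged, untested sketch with the v3 Python SDK; URL, credentials
and names are placeholders, and disk.move() starts an async task, so a real
script should poll each move for completion:

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url="https://engine.example.com/ovirt-engine/api",
              username="admin@internal", password="secret", insecure=True)
    target = params.StorageDomain(name="new_storage")
    for vm in api.vms.list(query="tag=migrate_to_new_storage"):
        for disk in vm.disks.list():
            disk.move(params.Action(storage_domain=target))
    api.disconnect()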


 
 
 Cheers,
 
 Jiri
 
 


Re: [ovirt-users] Cannot add Posix Storage

2015-08-30 Thread Nir Soffer
- Original Message -
 From: Steve Kilduff kild...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, August 25, 2015 6:27:26 PM
 Subject: Re: [ovirt-users] Cannot add Posix Storage
 
 Same on centos6 trying to use a bind mount:
 
 Thread-5189::ERROR::2015-08-25
 15:22:26,318::hsm::2379::Storage.HSM::(connectStorageServer) Could not
 connect to storageServer
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/hsm.py", line 2376, in connectStorageServer
     conObj.connect()
   File "/usr/share/vdsm/storage/storageServer.py", line 220, in connect
     self.getMountObj().getRecord().fs_file)
   File "/usr/share/vdsm/storage/mount.py", line 271, in getRecord
     (self.fs_spec, self.fs_file))
 OSError: [Errno 2] Mount of `/data/test` at
 `/rhev/data-center/mnt/_data_test_` does not exist
 
 And the mount gets mounted on the OS:
 
 /data/test on /rhev/data-center/mnt/_data_test_ type none (rw,bind)
 
 Mount options are:
 Path: /data/test/

Can you try without the trailing slash?

 VFS type: loop
 Options: bind,rw
 
 Cheers,
 Steve
 
 On Tue, Aug 25, 2015 at 3:22 PM, Steve Kilduff  kild...@gmail.com  wrote:
 
 
 
 Hi guys, trying to reply to a previous topic, but I am certain I am not doing
 it correctly; regardless, here we go.
 
 moosefs mounting on ovirt.
 
 I am having a similar problem to the person in the mail "Cannot add Posix
 Storage". This is an ovirt setup attempt with moosefs.
 
 Mount options are:
 Path: mfsmount
 VFS type: fuse
 Mount opt: mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/ovirt,_netdev
 
 I specified a subfolder which may not exist on your mfs storage.
 
 In my setup, ovirt mounts the actual mfs mount without problem; the mount
 remains mounted afterwards, as ovirt does not seem to unmount it
 successfully. I think the problem is the mount detection mechanism in
 Python: it is looking for a pattern to match both the source/destination of
 the mount and failing. I hope this shows what I mean:
 vsdm.log output:
 OSError: [Errno 2] Mount of `mfsmount` at `/rhev/data-center/mnt/mfsmount`
 does not exist
 
 centos7 output of mount command:
 mfsmaster:9421 on /rhev/data-center/mnt/mfsmount type fuse.mfs
 (rw,relatime,user_id=0,group_id=0,allow_other)
 
 Source = 'mfsmount' in the first error line, and source = 'mfsmaster:9421' on
 the OS, and I guess mount.py is trying to pattern-match the mount, doesn't
 compare them correctly, and bails out.

This seems to be the case.

ovirt needs a way to detect if a certain remote path is mounted at a certain
mountpoint, so it compares the source of the mount and the mountpoint.
We don't have such an issue with nfs or glusterfs, which do not change the name
of the remote path.

I would consult with the author of moosefs about this.

We can parse and ignore the :port suffix when comparing remote paths, but
I'm not sure if this behavior is common enough to add support for it.

For example, you can try to do this at the point where we compare the paths:

if ":" in path:
    path = path.rsplit(":", 1)[0]

Before matching the local path to the remote path. If the path did not match
before because of the port suffix, it will match after removing the port suffix.
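
As a standalone illustration of that tweak (a sketch only, not actual vdsm
code): note that a real patch must leave NFS-style "server:/path" specs
alone, so only a numeric suffix is stripped here:

    def _strip_port(path):
        # Drop a numeric ":port" suffix so "mfsmaster:9421" compares
        # equal to "mfsmaster", while "server:/export/data" is untouched.
        head, sep, tail = path.rpartition(":")
        if sep and tail.isdigit():
            return head
        return path

    assert _strip_port("mfsmaster:9421") == "mfsmaster"
    assert _strip_port("server:/export/data") == "server:/export/data"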

 
 I guess normally the source of the mount does not change name, like with the
 specifics I gave to mount mfs.

Can you try different options to avoid the rename of the source?

For example, use the default port.

 
 I am trying to poke around the Python to see if I can simply ignore this
 check, but my Python skills are non-existent.
 
 Also, as a workaround, I tried to first mount mfs without ovirt and then get
 ovirt to bind mount my /mnt/moosefs; however, in centos7 bind mount names are
 strange (I think broken when compared to centos6). The output of mount in
 centos7 when bind mounting /mnt/moosefs to /ovirt:
 mfsmaster:9421 on /mnt/moosfs
 mfsmaster:9421 on /ovirt
 
 where as in centos6 it would be:
 mfsmaster:9421 on /mnt/moosfs
 /mnt/moosfs on /ovirt
 
 Again, I think the pattern matching of mount.py is what fails during bind
 mount also.
 
 I think if bind mounting worked, the user could stick whatever filesystem
 they wanted onto the OS and get ovirt to bind mount it. Then it's the
 user's problem how to manage their FS.
 
 Best,
 Steve Kilduff
 
 
 