Re: [Users] AllInOne installation issue

2013-03-28 Thread Sandro Bonazzola
On 27/03/2013 23:12, Alon Bar-Lev wrote:
> Alex, Sandro,
>
> We should resolve host in all-in-one and reject if loopback... too many
> reports on this one.
>
> Regards,
> Alon


Well, the installer should already have warned that the given hostname
doesn't reverse-resolve, because it's a loopback address and should not
be mapped in DNS.
Do you want to abort only the vdsm configuration in this case, or do you
want to abort the entire setup process?
Can you open a bz about this?

-- 
Sandro
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] AllInOne installation issue

2013-03-28 Thread Alon Bar-Lev


- Original Message -
> From: Sandro Bonazzola <sbona...@redhat.com>
> To: Alon Bar-Lev <alo...@redhat.com>
> Cc: Georg Troxler <georg.trox...@staila.com>, users@ovirt.org, Alex
> Lourie <alou...@redhat.com>
> Sent: Thursday, March 28, 2013 9:16:14 AM
> Subject: Re: [Users] AllInOne installation issue
>
> On 27/03/2013 23:12, Alon Bar-Lev wrote:
> > Alex, Sandro,
> >
> > We should resolve host in all-in-one and reject if loopback... too
> > many reports on this one.
> >
> > Regards,
> > Alon
>
> Well, the installer should already have warned that the given hostname
> doesn't reverse-resolve, because it's a loopback address and should not
> be mapped in DNS.
> Do you want to abort only the vdsm configuration in this case, or do you
> want to abort the entire setup process?

Only if all-in-one is active (i.e. we are going to install vdsm) do we need to
make sure the host name does not resolve to loopback, and we should not just
warn, but fail.
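The check being discussed could look something like this; a minimal sketch,
assuming a standard resolver lookup (the function name and the surrounding
setup flow are illustrative, not the actual installer code):

```python
import socket

def resolves_to_loopback(hostname):
    """Return True if every address the hostname resolves to is a loopback.

    Hypothetical helper -- not the actual all-in-one setup code.
    """
    infos = socket.getaddrinfo(hostname, None)
    addrs = {info[4][0] for info in infos}
    return all(a.startswith("127.") or a == "::1" for a in addrs)

# An all-in-one setup could then fail hard instead of only warning:
# if resolves_to_loopback(host):
#     raise RuntimeError("host resolves to loopback; vdsm needs a real address")
```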

 Can you open a bz about this?

OK. I think it is 3.2.z material.

 
 --
 Sandro
 


[Users] Issue with Export/NFS storage domain

2013-03-28 Thread Georg Troxler
In ovirt 3.2 with the AllInOne plugin there seems to be an error when I
add a new Export/NFS domain:


Thread-34494::DEBUG::2013-03-28 
12:13:04,242::task::568::TaskManager.Task::(_updateState) 
Task=`a52ff57d-9d92-4c96-aaf5-98e3f4df4456`::moving from state init - 
state preparing
Thread-34494::INFO::2013-03-28 
12:13:04,242::logUtils::41::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=1, 
spUUID='----', conList=[{'connection': 
'192.168.10.105:/mnt/datasstore/vm-storage/export', 'iqn': '', 'portal': 
'', 'user': '', 'password': '**', 'id': 
'----', 'port': ''}], options=None)
Thread-34494::DEBUG::2013-03-28 
12:13:04,247::misc::84::Storage.Misc.excCmd::(lambda) '/usr/bin/sudo 
-n /usr/bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6,nfsvers=3 
192.168.10.105:/mnt/datasstore/vm-storage/export 
/rhev/data-center/mnt/192.168.10.105:_mnt_datasstore_vm-storage_export' 
(cwd None)
Thread-34494::ERROR::2013-03-28 
12:13:04,321::hsm::2215::Storage.HSM::(connectStorageServer) Could not 
connect to storageServer

Traceback (most recent call last):
  File /usr/share/vdsm/storage/hsm.py, line 2211, in connectStorageServer
conObj.connect()
  File /usr/share/vdsm/storage/storageServer.py, line 302, in connect
return self._mountCon.connect()
  File /usr/share/vdsm/storage/storageServer.py, line 208, in connect
fileSD.validateDirAccess(self.getMountObj().getRecord().fs_file)
  File /usr/share/vdsm/storage/mount.py, line 260, in getRecord
(self.fs_spec, self.fs_file))
OSError: [Errno 2] Mount of 
`192.168.10.105:/mnt/datasstore/vm-storage/export` at 
`/rhev/data-center/mnt/192.168.10.105:_mnt_datasstore_vm-storage_export` 
does not exist
Thread-34494::INFO::2013-03-28 
12:13:04,323::logUtils::44::dispatcher::(wrapper) Run and protect: 
connectStorageServer, Return response: {'statuslist': [{'status': 100, 
'id': '----'}]}
Thread-34494::DEBUG::2013-03-28 
12:13:04,323::task::1151::TaskManager.Task::(prepare) 
Task=`a52ff57d-9d92-4c96-aaf5-98e3f4df4456`::finished: {'statuslist': 
[{'status': 100, 'id': '----'}]}
Thread-34494::DEBUG::2013-03-28 
12:13:04,324::task::568::TaskManager.Task::(_updateState) 
Task=`a52ff57d-9d92-4c96-aaf5-98e3f4df4456`::moving from state preparing 
- state finished


I used the following parameters:

Name: export-domain
Data Center: local_datacenter
Domain Function / Storage Type: Export / NFS
Use Host: local_host
Export Path: 192.168.10.105:/mnt/datasstore/vm-storage/export

It looks as if the subsystem expects a mount point at
'/rhev/data-center/mnt/192.168.10.105:_mnt_datasstore_vm-storage_export'. If
I create the mount point manually and change its owner and permissions, the
operation still fails.
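The OSError in the traceback above comes from vdsm failing to find a record for
the mount after the mount command returned. A minimal sketch of that lookup
(the function name is illustrative, not vdsm's actual API) shows why creating
the directory manually doesn't help -- what matters is whether the NFS mount
itself succeeded:

```python
def find_mount_record(fs_spec, mounts_file="/proc/mounts"):
    """Return the (device, mountpoint) pair for fs_spec, or None if absent.

    Hypothetical sketch of the record lookup vdsm performs after mounting.
    """
    with open(mounts_file) as f:
        for line in f:
            device, mountpoint = line.split()[:2]
            if device == fs_spec:
                return device, mountpoint
    return None

# vdsm raises OSError(ENOENT) when no record is found, i.e. the NFS mount
# never actually happened -- so the export itself is the thing to check.
```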


Unfortunately I could not find specific information regarding the 
creation of an Export store other than the documentation on how to 
create an NFS store. Maybe I am missing something?





[Users] ldap

2013-03-28 Thread Ryan Wilkinson
I'm able to set up Active Directory authentication if my ovirt engine is
set to use dns that is hosted on the same system as Active Directory.
However, if I use static host entries in my engine hosts file instead of
using dns, I'm getting the error "ldap server for domain not found" when I
issue the command engine-manage-domains -action=add -domain='ovirt.local'
-user='admin' -provider=ActiveDirectory -interactive from the engine. I've
googled to death how to configure static entries on my engine system for
the ldap server and it seems that I need to configure my nsswitch and
ldap.conf files but still no luck... Any ideas??


Re: [Users] ldap

2013-03-28 Thread Oved Ourfalli


- Original Message -
> From: Ryan Wilkinson <ryanw...@gmail.com>
> To: users@ovirt.org
> Sent: Thursday, March 28, 2013 2:42:56 PM
> Subject: [Users] ldap
>
> I'm able to set up Active Directory authentication if my ovirt engine
> is set to use dns that is hosted on the same system as Active
> Directory. However, if I use static host entries in my engine
> hosts file instead of using dns I'm getting the error "ldap server
> for domain not found" when I issue the command:
> engine-manage-domains -action=add -domain='ovirt.local'
> -user='admin' -provider=ActiveDirectory -interactive from the
> engine. I've googled to death how to configure static entries on my
> engine system for the ldap server and it seems that I need to
> configure my nsswitch and ldap.conf files but still no luck... Any
> ideas??
Hi Ryan,

To work with LDAP you currently need to have both LDAP and Kerberos SRV
records in the DNS, as well as a PTR record.
If you would like to work locally, I suggest running dnsmasq (a lightweight
DHCP and caching DNS server) locally, defining these entries there, and
setting /etc/resolv.conf properly, so that the engine uses it.

The configuration is in /etc/dnsmasq.conf (or in /etc/dnsmasq.d/...).
Example for LDAP and Kerberos records:
srv-host=_ldap._tcp.my_domain.com,ad.my_domain.com,389
srv-host=_kerberos._tcp.my_domain.com,ad.my_domain.com,88

And, AFAIK, dnsmasq also reads /etc/hosts and creates PTR records for the
entries there, so it should be enough to add your AD host to /etc/hosts (I
guess you can also add those records manually in dnsmasq).
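Putting those pieces together, a minimal dnsmasq fragment might look like the
following sketch (the host name ad.my_domain.com and the 192.168.1.10 address
are placeholders, not values from this thread):

```
# SRV records for LDAP and Kerberos, per the example above
srv-host=_ldap._tcp.my_domain.com,ad.my_domain.com,389
srv-host=_kerberos._tcp.my_domain.com,ad.my_domain.com,88
# Forward and reverse mapping for the AD host itself,
# instead of relying on an /etc/hosts entry
address=/ad.my_domain.com/192.168.1.10
ptr-record=10.1.168.192.in-addr.arpa,ad.my_domain.com
```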

Let me know if you need further assistance.

Oved



Re: [Users] scsi disks inside VMs?

2013-03-28 Thread Andrew Cathrow

- Original Message - 

> From: Paul Jansen <vla...@yahoo.com.au>
> To: users@ovirt.org
> Sent: Wednesday, March 27, 2013 10:52:17 PM
> Subject: [Users] scsi disks inside VMs?
>
> Hello.
> I'm wondering if it is possible to create VMs with ovirt that have
> scsi disks?
> I've just installed ovirt 3.2.1 on Fedora 18 and attached an ovirt
> node (the current fedora 18 based version).
>
> When adding disks to a VM I can choose from the 'IDE' or 'VirtIO'
> interfaces. I'd like a scsi option also.
> Mainly because when migrating from vsphere VMs this makes things
> simpler.
> Also, my current kickstart installer for various OSes does not yet
> handle 'vd' disks.
> To add to things, I need to install a custom filesystem on the VMs
> that wants a scsi disk. It does a scsi inquiry early on in the
> install phase and will not work on 'vd' disks, i.e. 'sg_inq
> /dev/vda' does not work.
>
> I also know that the libata driver in recent linux distributions
> exposes IDE drives as scsi and allows a scsi inquiry to succeed.
> Unfortunately the use case I have requires Enterprise Linux 5, and in
> this release IDE disks report as 'hd', whereas scsi disks report as
> 'sd'. So I can't just use an IDE disk to get around this problem.
>
> I understand that virt-manager will allow attaching scsi disks to KVM
> based virtual machines, and that this is made possible by recent
> changes in libvirt.
>
> I think we should be encouraging people to use the virtio disks where
> possible, but in cases where this is not straightforward ovirt - and
> RHEV - are missing a trick as far as allowing people that have
> existing vsphere setups to fairly easily move to ovirt.
>
> Is a 'scsi' interface option for adding virtual disks for VMs on the
> roadmap? If not, could it be considered?


There are plans to add support for virtio-scsi: still paravirtualized, but
providing a pv SCSI controller that gives us more features, such as passing
SCSI commands to LUNs, allowing more disks per VM, etc.


> Thanks.


[Users] host non responsive in a 1 host DC with 1 vm

2013-03-28 Thread Neil
Hi guys,

I've got a test oVirt setup with only 1 host and 1 test VM, but I had a
power failure and the host is now non-responsive, despite being able
to log into it via SSH.  Manually fencing results in "Manual fence did
not revoke the selected SPM (ovirthosttest) since the master storage
domain was not active or could not use another host for the fence
operation."  When I try to put the host into maintenance I get "Manual
fence did not revoke the selected SPM (ovirthosttest) since the master
storage domain was not active or could not use another host for the
fence operation."  When I try to activate the data domain I get "Failed
to activate Storage Domain MAIN (Data Center Default) by admin@internal."

My VM was in an unknown state, but considering I'd already had a power
failure, I logged into my oVirt engine postgres command line and ran

  update vm_dynamic set status = 0 where vm_guid = (select vm_guid from
  vm_static where vm_name = 'zimbra');

to try and force my VM into a down state, but that hasn't helped
either.  I'd imagine that if I had another host to add to the cluster,
SPM would hopefully move to the new host and the problem would
disappear, but I don't have one, unfortunately.

Any suggestions?

Centos 6.4

vdsm-python-4.10.3-0.31.20.el6.x86_64
vdsm-4.10.3-0.31.20.el6.x86_64
vdsm-cli-4.10.3-0.31.20.el6.noarch
vdsm-xmlrpc-4.10.3-0.31.20.el6.noarch
libvirt-lock-sanlock-0.10.2-18.el6.x86_64
libvirt-client-0.10.2-18.el6.x86_64
libvisual-0.4.0-9.1.el6.x86_64
libvirt-python-0.10.2-18.el6.x86_64
libvirt-0.10.2-18.el6.x86_64

On my engine it's Centos 6.4 with ovirt-engine-3.2.0-1.39.el6.noarch.
My engine has an NFS shared RAID array which hosts the entire data
domain.

I could can the entire install and start again, but I'd prefer to
avoid doing that if possible as I'm still in the process of migrating
email to my zimbra VM.

Any help is appreciated greatly.

Thanks.

Regards.

Neil Wilson.


Re: [Users] oVirt 3.2.1 Beta EL6 Content is available

2013-03-28 Thread Maor Lipchuk

On 03/27/2013 10:37 AM, Dan Kenigsberg wrote:
 (dropping announce list. they only care about the finished product, not
 the road to get it done)
 
 On Tue, Mar 26, 2013 at 10:24:16AM +0100, Gianluca Cecchi wrote:
 On Thu, Mar 21, 2013 at 2:36 PM, Mike Burns  wrote:



 do they already contain the fix from
 http://gerrit.ovirt.org/#/c/11254/

 that was one of the biggest problems of 3.2?

 Gianluca


 Looking at gerrit, the 3.2 branch fix for that[1] is still in review. That
 would mean that it's not included.

 I'll defer to the engine team on when that will get into a build.

 Mike

 [1] http://gerrit.ovirt.org/#/c/13172/

 Is there a particular reason for it to be still in review?
 Any drawbacks?
 Any way on my side to accelerate the process?
 
 Maor, Daniel: when is this expected to be merged and built?
 
Hi, the patch has been merged; sorry for the delay, I wanted to
re-verify it with the build team first.
Unfortunately, I'm not aware of a specific date for when the next
version of the engine will be built.

Regards,
Maor


[Users] Issues using local storage for gluster shared volume

2013-03-28 Thread Tony Feldmann
I have been trying for a month or so to get a 2 node cluster up and
running.  I have engine installed on the first node, then add each
system as a host to a posix dc.  Both boxes have 4 data disks.  After
adding the hosts I create a distributed-replicate volume using 3 disks from
each host with ext4 filesystems.  I click the 'optimize for virt' option on
the volume.  There is a message in events that says that it can't set a
volume option, then it sets 2 volume options.  Checking the options tab I
see that it added the gid/uid options.  I was unable to find in the logs
which option was not set; I just see a message about usage for 'volume set
volname option'.  The volume starts fine and I am able to create a data
domain on the volume.  Once the domain is created I try to create a vm and
it fails creating the disk.  Error messages are along the lines of 'task
file exists' and 'can't remove task files'.  There are directories under
tasks and when trying to manually remove them I get the 'directory not
empty' error.  Can someone please shed some light on what I am doing wrong
to get this 2 node cluster with local disk as shared storage up and running?

Thanks,

Tony


[Users] Custom columns in VM list (engine admin) ?

2013-03-28 Thread Ernest Beinrohr
Hi, is it possible to add columns to the VM list view in the
ovirt-engine administrator portal? I'd like to add memory
columns (max, current) and also some IO stats.


In RHEL 5 virt-manager show something like this:
http://i.imgur.com/e18DJtf.png

-- 
Ernest Beinrohr, AXON PRO
Ing http://www.beinrohr.sk/ing.php, RHCE
http://www.beinrohr.sk/rhce.php, RHCVA
http://www.beinrohr.sk/rhce.php, LPIC
http://www.beinrohr.sk/lpic.php, +421-2--6241-0360
callto://+421-2--6241-0360, +421-903--482-603 callto://+421-903--482-603
icq:28153343, skype:oernii-work callto://oernii-work,
jabber:oer...@jabber.org

“The bureaucracy is expanding to meet the needs of the expanding
bureaucracy.” ― Oscar Wilde


Re: [Users] Issues using local storage for gluster shared volume

2013-03-28 Thread Kanagaraj Mayilsamy


- Original Message -
> From: Tony Feldmann <trfeldm...@gmail.com>
> To: users@ovirt.org
> Sent: Thursday, March 28, 2013 8:19:17 PM
> Subject: [Users] Issues using local storage for gluster shared volume
>
> I have been trying for a month or so to get a 2 node cluster up and
> running. I have engine installed on the first node, then add each
> system as a host to a posix dc. Both boxes have 4 data disks.
> After adding the hosts I create a distributed-replicate volume using
> 3 disks from each host with ext4 filesystems. I click the 'optimize
> for virt' option on the volume. There is a message in events that
> says that it can't set a volume option, then it sets 2 volume
> options. Checking the options tab I see that it added the gid/uid
> options. I was unable to find in the logs which option was not set; I
> just see a message about usage for 'volume set volname option'.

The gid and uid options are enough to make a gluster volume ready for virt
store.  The third option sets a group (called the 'virt' group) of options on
the volume, mainly related to performance tuning.  To make this option work,
you have to copy the file
https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example to
/var/lib/glusterd/groups/ and name it 'virt'.  Then you can click on 'Optimize
for virt store' again to set the virt group.  Setting this group option is
recommended but not necessary for the gluster volume to be used as a virt
store.
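The manual step described above amounts to copying one file into place on each
node; a small sketch (the function name is illustrative, and copying into
/var/lib/glusterd requires root):

```python
import os
import shutil

def install_virt_group(example_file, groups_dir="/var/lib/glusterd/groups"):
    """Copy glusterfs' group-virt.example into place as the 'virt' group.

    Hypothetical helper mirroring the manual step; run on each gluster node.
    """
    os.makedirs(groups_dir, exist_ok=True)
    dest = os.path.join(groups_dir, "virt")
    shutil.copyfile(example_file, dest)
    return dest
```

After this, clicking 'Optimize for virt store' again should be able to apply
the group.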

I am not sure about the below errors, other people in the list can help you out.

Thanks,
Kanagaraj

> The volume starts fine and I am able to create a data domain on the
> volume. Once the domain is created I try to create a vm and it fails
> creating the disk. Error messages are along the lines of 'task file
> exists' and 'can't remove task files'. There are directories under
> tasks and when trying to manually remove them I get the 'directory
> not empty' error. Can someone please shed some light on what I am
> doing wrong to get this 2 node cluster with local disk as shared
> storage up and running?
>
> Thanks,
>
> Tony


Re: [Users] VM crashes and doesn't recover

2013-03-28 Thread Limor Gavish
Concerning the following error in dmesg:

[ 2235.638814] device-mapper: table: 253:0: multipath: error getting device
[ 2235.638816] device-mapper: ioctl: error adding target to table

I tried to debug it, but multipath gives me some problems:

[wil@bufferoverflow vdsm]$ sudo multipath -l
Mar 28 18:28:19 | multipath.conf +5, invalid keyword: getuid_callout
Mar 28 18:28:19 | multipath.conf +18, invalid keyword: getuid_callout
[wil@bufferoverflow vdsm]$ sudo multipath -F
Mar 28 18:28:30 | multipath.conf +5, invalid keyword: getuid_callout
Mar 28 18:28:30 | multipath.conf +18, invalid keyword: getuid_callout
[wil@bufferoverflow vdsm]$  sudo multipath -v2
Mar 28 18:28:35 | multipath.conf +5, invalid keyword: getuid_callout
Mar 28 18:28:35 | multipath.conf +18, invalid keyword: getuid_callout
Mar 28 18:28:35 | sda: rport id not found
Mar 28 18:28:35 | Corsair_Force_GS_13057914977000C3: ignoring map

Any idea if those multipath errors are related to the storage crash?

Here is the multipath.conf:

[wil@bufferoverflow vdsm]$ sudo cat /etc/multipath.conf
# RHEV REVISION 1.0

defaults {
    polling_interval        5
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}

devices {
    device {
        vendor              "HITACHI"
        product             "DF.*"
        getuid_callout      "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    }
    device {
        vendor              "COMPELNT"
        product             "Compellent Vol"
        no_path_retry       fail
    }
}
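Incidentally, the "invalid keyword: getuid_callout" messages suggest this
multipath build has dropped that directive; on such builds the udev-based
replacement is uid_attribute. A hedged sketch of an equivalent defaults
stanza (not verified against this exact Fedora build):

```
defaults {
    polling_interval        5
    # replaces the removed getuid_callout; ID_SERIAL is the usual udev property
    uid_attribute           ID_SERIAL
    no_path_retry           fail
    user_friendly_names     no
}
```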

Thanks,
Limor G


On Wed, Mar 27, 2013 at 6:08 PM, Yuval M yuva...@gmail.com wrote:

 Still getting crashes with the patch:
 # rpm -q vdsm
 vdsm-4.10.3-0.281.git97db188.fc18.x86_64

 attached excerpts from vdsm.log and from dmesg.

 Yuval


 On Wed, Mar 27, 2013 at 11:02 AM, Dan Kenigsberg dan...@redhat.comwrote:

 On Sun, Mar 24, 2013 at 09:50:02PM +0200, Yuval M wrote:
  I am running vdsm from packages as my interest is in developing for the
  engine and not vdsm.
  I updated the vdsm package in an attempt to solve this, now I have:
  # rpm -q vdsm
  vdsm-4.10.3-10.fc18.x86_64

 I'm afraid that this build still does not have the patch mentioned
 earlier.

 
  I noticed that when the storage domain crashes I can't even do df -h
  (hangs)

 That's expected, since the master domain is still mounted (due to that
 patch missing), but unreachable.

 Would you be kind to try out my little patch, in order to advance a bit
 in the research to solve the bug?


  I'm also getting some errors in /var/log/messages:
 
  Mar 24 19:57:44 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:45 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:46 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:47 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:48 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:49 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:50 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:51 bufferoverflow sanlock[1208]: 2013-03-24 19:57:51+0200
 7412
  [4759]: 1083422e close_task_aio 0 0x7ff3740008c0 busy
  Mar 24 19:57:51 bufferoverflow sanlock[1208]: 2013-03-24 19:57:51+0200
 7412
  [4759]: 1083422e close_task_aio 1 0x7ff374000910 busy
  Mar 24 19:57:51 bufferoverflow sanlock[1208]: 2013-03-24 19:57:51+0200
 7412
  [4759]: 1083422e close_task_aio 2 0x7ff374000960 busy
  Mar 24 19:57:51 bufferoverflow sanlock[1208]: 2013-03-24 19:57:51+0200
 7412
  [4759]: 1083422e close_task_aio 3 0x7ff3740009b0 busy
  Mar 24 19:57:51 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:52 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:53 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:54 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:55 bufferoverflow vdsm SuperVdsmProxy WARNING Connect to
 svdsm
  failed [Errno 2] No such file or directory
  Mar 24 19:57:55 bufferoverflow vdsm Storage.Misc ERROR Panic: Couldn't
  connect to supervdsm
  Mar 24 19:57:55 bufferoverflow respawn: slave 

[Users] forced shutdown with client agent

2013-03-28 Thread Thomas Scofield
I have run into a scenario after installing the client agent.  If a VM is
shutdown, the client agent calls the shutdown command with a 1 minute
timeout.

Dummy-2::INFO::2013-03-28 14:05:21,892::vdsAgentLogic::138::root::Shutting
down (timeout = 30, message = 'System Administrator has initiated shutdown
of this Virtual Machine. Virtual Machine is shutting down.'

Since the shutdown command is called with a time parameter, the VM sets the
/etc/nologin file.  When the VM is forced down, the /etc/nologin file is not
cleared, and when it comes back up only root can log in until the
/etc/nologin file is removed.
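One workaround for the stale file described above is to clear it at boot; a
minimal sketch (this helper is illustrative and not part of the oVirt guest
agent; it would need to run as root, e.g. from a boot script):

```python
import os

def clear_stale_nologin(path="/etc/nologin"):
    """Remove a leftover /etc/nologin so non-root logins work again.

    Returns True if a stale file was removed, False if none was present.
    """
    try:
        os.remove(path)
        return True
    except FileNotFoundError:
        return False
```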

Is there some reason the shutdown time is set to 30 seconds (rounded
up to 1 minute in the code)?  Are there any known issues with setting this
to 0?

Is this the right way to change it to 0?
psql engine postgres -c "update vdc_options set option_value = '0' where
option_name = 'VmGracefulShutdownTimeout';"