Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-25 Thread Itamar Heim

On 09/25/2013 02:10 AM, Gianluca Cecchi wrote:

Hello,
I'm testing GlusterFS on 3.3 with fedora 19 systems.
One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)

Successfully created a distributed-replicated gluster volume composed of
two bricks (one on each vdsm node).

Suggestion:
If page
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
is the reference, perhaps it would be better to explicitly specify
that one has to start the created volume before going to add a storage
domain based on the created volume.
Not knowing Gluster could lead one to think that the start phase is the
responsibility of the storage domain creation itself ...
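For anyone following along, the missing step can be sketched as below. Brick paths are invented for illustration; `replica 2` matches the two-node setup described above:

```shell
# Create a two-brick replicated volume across the two vdsm nodes
# (brick paths are illustrative -- substitute the real ones)
gluster volume create gv01 replica 2 \
    ovnode01:/export/brick1/gv01 \
    ovnode02:/export/brick1/gv01

# The volume must be explicitly started BEFORE adding the oVirt
# storage domain on top of it -- creating it is not enough
gluster volume start gv01

# Confirm "Status: Started" in the output
gluster volume info gv01
```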


it's a wiki - please edit/fix it ;)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-25 Thread Gianluca Cecchi
So it seems the problem is:

file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
Gluster connection failed for server=ovnode01 port=0 volume=gv01
image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
transport=tcp
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
could not open disk image

it is the same in qemu.log of both hosts
On the other one I have:
2013-09-25 05:42:35.454+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name C6 -S -machine
pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp
1,sockets=1,cores=1,threads=1 -uuid
409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-3,serial=421FAF48-83D1-08DC-F2ED-F2894F8BC56D,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2013-09-25T05:42:35,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3
-chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice 
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=67108864 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
Gluster connection failed for server=ovnode01 port=0 volume=gv01
image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
transport=tcp
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
could not open disk image
gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161:
No data available
2013-09-25 05:42:38.620+: shutting down

Currently iptables as set up by the install is this (I checked the option
to set up iptables from the GUI when I added the host).
Do I have to add anything for gluster?

[root@ovnode01 qemu]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source       destination
ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0
ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:54321
ACCEPT     tcp
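To answer the question above: the gluster ports are indeed not in that rule set. A sketch of the additional rules follows; the port numbers are the GlusterFS defaults of this era and should be verified against the installed version:

```shell
# glusterd management port
iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
# brick ports: glusterfs >= 3.4 allocates one per brick from 49152 up
# (older releases used 24009 and up)
iptables -I INPUT -p tcp --dport 49152:49251 -j ACCEPT
# portmapper, only needed if the volume is also accessed over NFS
iptables -I INPUT -p tcp --dport 111 -j ACCEPT
iptables -I INPUT -p udp --dport 111 -j ACCEPT
```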

Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-25 Thread Vijay Bellur

On 09/25/2013 11:36 AM, Gianluca Cecchi wrote:

qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
Gluster connection failed for server=ovnode01 port=0 volume=gv01
image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
transport=tcp
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads:
could not open disk image
gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161:
No data available
2013-09-25 05:42:32.291+: shutting down



Have the following configuration changes been done?

1) gluster volume set <volname> server.allow-insecure on

2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this 
line:

option rpc-auth-allow-insecure on

Post 2), restarting glusterd would be necessary.
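Applied to the volume from this thread, the two steps would look roughly like this (gv01 is the volume name used earlier; the glusterd.vol line is shown as a comment because it belongs inside the existing "volume management" block of that file):

```shell
# 1) once, from any node in the cluster
gluster volume set gv01 server.allow-insecure on

# 2) on EVERY gluster node, add the following line inside the
#    "volume management" block of /etc/glusterfs/glusterd.vol:
#        option rpc-auth-allow-insecure on

# then restart glusterd on each node
service glusterd restart
```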

Regards,
Vijay


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-25 Thread Gianluca Cecchi
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur  wrote:



 Have the following configuration changes been done?

 1) gluster volume set <volname> server.allow-insecure on

 2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this
 line:
 option rpc-auth-allow-insecure on

 Post 2), restarting glusterd would be necessary.

 Regards,
 Vijay


No, because I didn't find this kind of info anywhere... ;-)

Done on both hosts (step 1 only once), and I see that the GUI
detects the change in the volume setting.
Now the VM can start (I see the qemu process on ovnode02) but it seems
to remain stuck on the hourglass state icon.
After 5 minutes it still shows as executing in the tasks pane.

Eventually I'm going to restart the nodes completely


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-25 Thread Gianluca Cecchi
On Wed, Sep 25, 2013 at 8:02 AM, Itamar Heim  wrote:

 Suggestion:
 If page
 http://www.ovirt.org/Features/GlusterFS_Storage_Domain
 is the reference, perhaps it would be better to explicitly specify
 that one has to start the created volume before going to add a storage
 domain based on the created volume.
 Not knowing Gluster could lead to think that the start phase is
 responsibility of storage domain creation itself ...


 it's a wiki - please edit/fix it ;)

I was in doubt because it is not explicitly presented as a wiki but more as infra ...
I'll do it.


Re: [Users] unable to start vm in 3.3 and f19 with gluster

2013-09-25 Thread Vijay Bellur

On 09/25/2013 11:51 AM, Gianluca Cecchi wrote:

On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur  wrote:




Have the following configuration changes been done?

1) gluster volume set <volname> server.allow-insecure on

2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this
line:
 option rpc-auth-allow-insecure on

Post 2), restarting glusterd would be necessary.

Regards,
Vijay



No, because I didn't find this kind of info anywhere... ;-)


The feature page wiki does provide this information but it gets missed 
in the details. Should we highlight it more?




Done on both hosts (step 1 only one time) and I see that the gui
detects the change in volume setting.
Now the VM can start (I see the qemu process on ovnode02) but it seems
to remain in hourglass state icon.
After 5 minutes it still remains in executing phase in tasks



Let us know how this goes.

-Vijay


Re: [Users] vdsm Domain monitor error

2013-09-25 Thread Dafna Ron

If the domain no longer exists, then the problem is not in the host at all.
Your problem is that you have stale db entries; when you try to remove them,
a command is sent to vdsm (since the engine has no idea you removed the
domain) and the removal fails because vdsm returns an error.


What you need to do is destroy the domain in the UI (domains tab -
select the domain - right click - destroy). This should clean all the
db entries related to that domain.



On 09/24/2013 09:27 PM, Eduardo Ramos wrote:

I think I found it, but I don't know how to remove it:

/sbin/lvm vgs --config " devices { preferred_names =
[\"^/dev/mapper/\"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [
\"a%36000eb396eb9c0540033|3600508b1001c80dabd7195030a341559%\",
\"r%.*%\" ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
--noheadings --units b --nosuffix --separator '|' -o tags


In the return, there it is:

MDT_POOL_DOMAINS=0226b818-59a6-41bc-8590-91f520aa7859:Active,c332da29-ba9f-4c94-8fa9-346bb8e04e2a:Active,51eb6183-157d-4015-ae0f-1c7ffb1731c0:Active,0e0be898-6e04-4469-bb32-91f3cf8146d1:Active,MDT__SHA_CKSUM=0ccf56122a8384461c8da7b0eda19e9bdcbd23bf

Any idea to remove it?
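For reference, values like this live as lvm tags on the master storage domain's VG, so they are normally manipulated with vgchange. This is a sketch only, not verified against this vdsm version, and dangerous to do by hand (placeholders are in angle brackets; back up the VG metadata and stop the pool first):

```shell
# show the current tags on the master domain's VG
lvm vgs --noheadings -o tags <master-vg>

# replace the stale MDT_POOL_DOMAINS tag: delete the old value,
# then add it back without the dead domain's UUID
lvm vgchange --deltag "MDT_POOL_DOMAINS=<old-value>" <master-vg>
lvm vgchange --addtag "MDT_POOL_DOMAINS=<value-without-dead-domain>" <master-vg>
```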

On 09/24/2013 02:14 PM, Eduardo Ramos wrote:
This storage domain doesn't exist anymore. There is an entry in 
postgres with:


Domain VMExport was forcibly removed by admin@internal

It was a NFS Export domain.

Is there any chance it is causing problems with iscsi data domain 
operations? Now I can create disks and VMs, but I can't remove them. 
I tried to export a VM, and engine.log returned this:


2013-09-24 14:03:05,180 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-33) [547e3abf] Failed in MoveImageGroupVDS method
2013-09-24 14:03:05,182 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-33) [547e3abf] Error code MoveImageError and error 
message IRSGenericException: IRSErrorException: Failed to 
MoveImageGroupVDS, error = Error moving image: 
('spUUID=9dbc7bb1-c460-4202-8f10-862d2ed3ed9a, 
srcDomUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a, 
dstDomUUID=51eb6183-157d-4015-ae0f-1c7ffb1731c0, 
imgUUID=483d8af2-beb2-45cc-b73e-4597e31a6fc0, vmUUID=, op=1, 
force=false, postZero=false force=false',)
2013-09-24 14:03:05,184 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-3-thread-33) [547e3abf] IrsBroker::Failed::MoveImageGroupVDS 
due to: IRSErrorException: IRSGenericException: IRSErrorException: 
Failed to MoveImageGroupVDS, error = Error moving image: 
('spUUID=9dbc7bb1-c460-4202-8f10-862d2ed3ed9a, 
srcDomUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a, 
dstDomUUID=51eb6183-157d-4015-ae0f-1c7ffb1731c0, 
imgUUID=483d8af2-beb2-45cc-b73e-4597e31a6fc0, vmUUID=, op=1, 
force=false, postZero=false force=false',)




On 09/24/2013 01:09 PM, Dafna Ron wrote:

vdsm cannot find your storage.
Check your storage and your network connection to it.

On 09/24/2013 03:31 PM, Eduardo Ramos wrote:

Hi all!

I'm getting a strange error in on my SPM:

Message from syslogd@darwin at Sep 24 11:19:58 ...
vdsm Storage.DomainMonitorThread ERROR Error while collecting
domain 0226b818-59a6-41bc-8590-91f520aa7859 monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 182, in _monitorDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 97, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 121, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 152, in _findDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'0226b818-59a6-41bc-8590-91f520aa7859',)


I also cannot remove disks. When I try, this immediately appears in the 
'Events' log of webadmin:


Data Center is being initialized, please wait for initialization 
to complete.
User eduardo.ramos failed to initiate removing of disk 
012.167_teste_InfoDoc_Disk1 from domain VMs.


Could someone help me?

Thanks













--
Dafna Ron


Re: [Users] Migration issues with ovirt 3.3

2013-09-25 Thread emi...@gmail.com
No, it doesn't succeed either.

I've also noticed that in the Gluster Swift section everything appears
as Not installed, except memcached, which appears as Down; could this
have something to do with all of this?

The private chat was my fault, I think; at some point I replied only to
you, sorry!

Thanks!



2013/9/25 Dan Kenigsberg dan...@redhat.com

 On Wed, Sep 25, 2013 at 10:40:30AM -0300, emi...@gmail.com wrote:
  I've created a new debian VM. I've restarted the vdsm on the source and
  destination host and after try the migration now I received the following
  message:
 
  Migration failed due to Error: Fatal error during migration (VM:
  debian-test2, Source: ovirt1, Destination: ovirt2).
 
  Do you think this could be solved with a clean installation of the
  host?
 No.

 We have a real bug on re-connecting to VMs on startup.

 But why are we in a private chat? Please report it as a reply to the
 mailing list thread.

 But let me verify: if you start a VM and then migrate (no vdsm restart),
 does migration succeed?

 Dan.




-- 
*Emiliano Tortorella*
+598 98941176
emi...@gmail.com


Re: [Users] Glance with oVirt

2013-09-25 Thread Riccardo Brunetti
On 09/24/2013 08:24 PM, Itamar Heim wrote:
 On 09/24/2013 06:06 PM, Jason Brooks wrote:
 On Tue, 2013-09-24 at 16:15 +0200, Riccardo Brunetti wrote:
 Dear ovirt users.
 I'm trying to setup an oVirt 3.3 installation using an already existing
 OpenStack glance service as an external provider.
 When I define the external provider, I put:

 Openstack Image as Type
 the glance service endpoint as URL (ie. http://xx.xx.xx.xx:9292) I
 used the openstack public url.
 check Requires Authentication
 put the administrator user/password/tenant in the following fields.

 Unfortunately the connection test always fails and the glance provider
 doesn't work in ovirt.

 You also need to run this from the command line (of your engine):

 engine-config --set KeystoneAuthUrl=http://host.fqdn:35357

 maybe open a bug about the missing warning that keystone is not
 configured when trying to use glance/neutron with authentication?


 And then restart the ovirt-engine service.

 Jason


 In the engine.log I can see:

 2013-09-24 15:57:30,665 INFO
 [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
 (ajp--127.0.0.1-8702-3) Running command:
 TestProviderConnectivityCommand
 internal: false. Entities affected :  ID:
 aaa0----123456789aaa Type: System
 2013-09-24 15:57:30,671 ERROR
 [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
 (ajp--127.0.0.1-8702-3) Command
 org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand
 throw
 Vdc Bll exception. With error message VdcBLLException: (Failed with
 VDSM
 error PROVIDER_FAILURE and code 5050)
 2013-09-24 15:57:30,674 ERROR
 [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
 (ajp--127.0.0.1-8702-3) Transaction rolled-back for command:
 org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand.

 The glance URL is reachable from the oVirt engine host, but looking
 with tcpdump on the glance service I noticed that no connections come
 up when I use Requires Authentication, while a connection happens if I
 do not use Requires Authentication (even though the test ultimately
 fails either way).

 My OS is CentOS-6.4 and my packages are the following:

 [root@rhvmgr03 ovirt-engine]# rpm -qa | grep ovi
 ovirt-host-deploy-1.1.1-1.el6.noarch
 ovirt-log-collector-3.3.0-1.el6.noarch
 ovirt-engine-cli-3.3.0.4-1.el6.noarch
 ovirt-engine-webadmin-portal-3.3.0-4.el6.noarch
 ovirt-engine-tools-3.3.0-4.el6.noarch
 ovirt-release-el6-8-1.noarch
 ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
 ovirt-iso-uploader-3.3.0-1.el6.noarch
 ovirt-host-deploy-java-1.1.1-1.el6.noarch
 ovirt-engine-userportal-3.3.0-4.el6.noarch
 ovirt-engine-backend-3.3.0-4.el6.noarch
 ovirt-engine-setup-3.3.0-4.el6.noarch
 ovirt-engine-3.3.0-4.el6.noarch
 ovirt-image-uploader-3.3.0-1.el6.noarch
 ovirt-engine-lib-3.3.0-4.el6.noarch
 ovirt-engine-restapi-3.3.0-4.el6.noarch
 ovirt-engine-dbscripts-3.3.0-4.el6.noarch

 Do you have some suggestion to debug or solve this issue?

 Thanks a lot.
 Best Regards
 R. Brunetti


Dear all.
Unfortunately I only managed to get the first step of the procedure
working.
I can successfully define the glance external provider, and from the
storage tab I can list the images available in glance, but when I try to
import an image (i.e. a Fedora 19 qcow2 image which works inside
OpenStack) the disk is stuck in the Illegal state, and in the logs
(ovirt-engine) I see messages like:

2013-09-25 16:01:37,740 INFO 
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-71) Polling and updating Async Tasks: 2
tasks, 2 tasks to poll now
2013-09-25 16:01:37,775 INFO  [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-71) BaseAsyncTask::OnTaskEndSuccess: Task
32982d1d-543c-4015-8b02-4b0f3f7ac45a (Parent Command ImportRepoImage,
Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
successfully.
2013-09-25 16:01:37,776 INFO 
[org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-71)
CommandAsyncTask::EndActionIfNecessary: All tasks of command
3a9eb626-2ea8-445f-8286-fb0e7c464b3d has ended - executing EndAction
2013-09-25 16:01:37,777 INFO 
[org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-71) CommandAsyncTask::EndAction: Ending
action for 1 tasks (command ID: 3a9eb626-2ea8-445f-8286-fb0e7c464b3d):
calling EndAction .
2013-09-25 16:01:37,778 INFO  [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-71) BaseAsyncTask::OnTaskEndSuccess: Task
e8057bc2-e99f-4db8-ba1d-97b84a1f3db0 (Parent Command ImportRepoImage,
Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
successfully.
2013-09-25 16:01:37,778 INFO 

[Users] Ovirt engine high availability

2013-09-25 Thread Doug Bishop
Hello,

Was wondering if anyone can shed some light on this subject, as I could not find 
any documentation specific to it. I am currently running an ovirt 3.2 cluster 
with 5 hosts with local storage. I have a dedicated box running ovirt engine. I 
would like to have the ability to run an additional box with ovirt engine on it 
for high availability of the portal. I realize this would most likely require 
that I run a separate instance of postgres for the engine; however, I was 
wondering if this is something that is officially supported?

Thanks!

Doug Bishop
Sr. Systems Engineer
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com

NOTICE:  This e-mail message, including any attachments, is for the sole use of 
the intended recipient(s) or entity and may contain sensitive information that 
is confidential, proprietary, privileged or otherwise protected by law. Any 
unauthorized review, use, disclosure, distribution, copying or other use of, or 
taking of any action in reliance upon, is strictly prohibited. If you are not 
the intended recipient, please contact the sender by reply e-mail and destroy 
all copies of this message. This e-mail is and any response to it will be 
unencrypted and, therefore, potentially unsecure.  The recipient should check 
this email and any attachments for the presence of viruses/malware according to 
security best practices.  Thank you.


Re: [Users] Ovirt engine high availability

2013-09-25 Thread Dan Yasny
There are plenty of options:
1. Run the engine as a VM under a local libvirt, and cluster the libvirt VM
as a protected service using RHCS (well tested and documented)
2. Use the self-hosted engine (recently announced, an upcoming tech preview)
3. Use any other clustering technology you like best, an engine VM with a
centrally backed storage is very easy to start on any capable host


On Wed, Sep 25, 2013 at 10:06 AM, Doug Bishop dbis...@controlscan.com wrote:

 Hello,

 Was wondering if anyone can shed a light on this subject as I could not
 find any documentation specific to this. I am currently running an ovirt
 3.2 cluster with 5 hosts with local storage. I have a dedicated box running
 ovirt engine. I would like to have the ability to run an additional box
 with ovirt engine on it for high availability of the portal. I realize this
 would most likely require I run a separate instance of postgres for the
 engine, however I was wondering if this is something that is officially
 supported?

 Thanks!

 Doug Bishop
 Sr. Systems Engineer
 dbis...@controlscan.com

 Controlscan, Inc.
 11475 Great Oaks Way, Suite 300
 Alpharetta, GA 30022
 www.controlscan.com



Re: [Users] Ovirt engine high availability

2013-09-25 Thread Doug Bishop
Those are good options; however, since I am not running centralized storage, all 
my hosts are in their own data centers with local storage only. That's 
why I have two standalone machines to run ovirt on. I'm guessing this is not 
recommended? If not, is there a way to allow both local storage and NFS-based 
storage in a data center?

Doug Bishop
Sr. Systems Engineer
Cell: 678-848-6658
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com


From: Dan Yasny [dya...@gmail.com]
Sent: Wednesday, September 25, 2013 12:14 PM
To: Doug Bishop
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt engine high availability

There are plenty of options:
1. Run the engine as a VM under a local libvirt, and cluster the libvirt VM as 
a protected service using RHCS (well tested and documented)
2. Use the self-hosted engine (recently announced, an upcoming tech preview)
3. Use any other clustering technology you like best, an engine VM with a 
centrally backed storage is very easy to start on any capable host


On Wed, Sep 25, 2013 at 10:06 AM, Doug Bishop 
dbis...@controlscan.com wrote:
Hello,

Was wondering if anyone can shed a light on this subject as I could not find 
any documentation specific to this. I am currently running an ovirt 3.2 cluster 
with 5 hosts with local storage. I have a dedicated box running ovirt engine. I 
would like to have the ability to run an additional box with ovirt engine on it 
for high availability of the portal. I realize this would most likely require I 
run a separate instance of postgres for the engine, however I was wondering if 
this is something that is officially supported?

Thanks!

Doug Bishop
Sr. Systems Engineer
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com



Re: [Users] Ovirt engine high availability

2013-09-25 Thread Doug Bishop
Yeah, that was along the lines of what I was thinking. Just wanted to make sure 
there wasn't an official way to do it as well.

Thanks!

Doug Bishop
Sr. Systems Engineer
Cell: 678-848-6658
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com


From: Dan Yasny [dya...@gmail.com]
Sent: Wednesday, September 25, 2013 4:31 PM
To: Doug Bishop
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt engine high availability

The classic answer here would be to use DRBD to replicate the engine storage to 
a standby host's local storage.
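A sketch of what that could look like; the resource name, device paths, hostnames, and addresses below are all invented for illustration:

```
# /etc/drbd.d/engine.res -- hypothetical resource replicating the
# engine's storage between two standalone hosts
resource engine {
  protocol C;                          # synchronous replication
  device    /dev/drbd0;
  disk      /dev/vg_engine/lv_engine;  # backing LV on each host
  meta-disk internal;
  on engine1 { address 192.168.1.1:7789; }
  on engine2 { address 192.168.1.2:7789; }
}
```

A cluster manager (RHCS, Pacemaker) would then promote the DRBD resource and start the engine service on the surviving host.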


On Wed, Sep 25, 2013 at 2:25 PM, Doug Bishop 
dbis...@controlscan.com wrote:
Those are good options; however, since I am not running centralized storage, all 
my hosts are in their own data centers with local storage only. That's 
why I have two standalone machines to run ovirt on. I'm guessing this is not 
recommended? If not, is there a way to allow both local storage and NFS-based 
storage in a data center?

Doug Bishop
Sr. Systems Engineer
Cell: 678-848-6658
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com


From: Dan Yasny [dya...@gmail.com]
Sent: Wednesday, September 25, 2013 12:14 PM
To: Doug Bishop
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt engine high availability

There are plenty of options:
1. Run the engine as a VM under a local libvirt, and cluster the libvirt VM as 
a protected service using RHCS (well tested and documented)
2. Use the self-hosted engine (recently announced, an upcoming tech preview)
3. Use any other clustering technology you like best, an engine VM with a 
centrally backed storage is very easy to start on any capable host


On Wed, Sep 25, 2013 at 10:06 AM, Doug Bishop 
dbis...@controlscan.com wrote:
Hello,

Was wondering if anyone can shed a light on this subject as I could not find 
any documentation specific to this. I am currently running an ovirt 3.2 cluster 
with 5 hosts with local storage. I have a dedicated box running ovirt engine. I 
would like to have the ability to run an additional box with ovirt engine on it 
for high availability of the portal. I realize this would most likely require I 
run a separate instance of postgres for the engine, however I was wondering if 
this is something that is officially supported?

Thanks!

Doug Bishop
Sr. Systems Engineer
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com

NOTICE:  This e-mail message, including any attachments, is for the sole use of 
the intended recipient(s) or entity and may contain sensitive information that 
is confidential, proprietary, privileged or otherwise protected by law. Any 
unauthorized review, use, disclosure, distribution, copying or other use of, or 
taking of any action in reliance upon, is strictly prohibited. If you are not 
the intended recipient, please contact the sender by reply e-mail and destroy 
all copies of this message. This e-mail is and any response to it will be 
unencrypted and, therefore, potentially unsecure.  The recipient should check 
this email and any attachments for the presence of viruses/malware according to 
security best practices.  Thank you.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [Users] Ovirt engine high availability

2013-09-25 Thread Dan Yasny
The official way is to have a SAN or NFS in place, so you can allocate a
LUN for the engine. Multiple hosts/DCs with local storage are quite the
corner case IMO.


On Wed, Sep 25, 2013 at 2:31 PM, Doug Bishop dbis...@controlscan.com wrote:

 Yeah, that was along the lines of what I was thinking. Just wanted to make
 sure there wasn't an official way to do it as well.

 Thanks!

 Doug Bishop
 Sr. Systems Engineer
 Cell: 678-848-6658
 dbis...@controlscan.com

 Controlscan, Inc.
 11475 Great Oaks Way, Suite 300
 Alpharetta, GA 30022
 www.controlscan.com

 
 From: Dan Yasny [dya...@gmail.com]
 Sent: Wednesday, September 25, 2013 4:31 PM
 To: Doug Bishop
 Cc: users@ovirt.org
 Subject: Re: [Users] Ovirt engine high availability

 The classic answer here would be to use DRBD to replicate the engine
 storage to a standby host's local storage.
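
For readers unfamiliar with DRBD, a minimal resource definition for mirroring the engine's storage between two hosts might look like the sketch below; the resource name, hostnames, device paths, and addresses are placeholders, not taken from this thread:

```
resource engine {
  protocol C;                        # synchronous replication
  device    /dev/drbd0;
  disk      /dev/vg_engine/lv_engine;
  meta-disk internal;
  on host-a {
    address 192.168.100.1:7789;
  }
  on host-b {
    address 192.168.100.2:7789;
  }
}
```

On failover, the standby host promotes the resource to primary and starts the engine VM from the replicated device.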


 On Wed, Sep 25, 2013 at 2:25 PM, Doug Bishop dbis...@controlscan.com
 mailto:dbis...@controlscan.com wrote:
 Those are good options, however since I am not running centralized
 storage, all my hosts are in their own data center with local storage
 options only. That's why I have two standalone machines to run ovirt on. I'm
 guessing this is not recommended? If not, is there a way to allow local
 storage and NFS-based storage in a data center?

 Doug Bishop
 Sr. Systems Engineer
 Cell: 678-848-6658
 dbis...@controlscan.com

 Controlscan, Inc.
 11475 Great Oaks Way, Suite 300
 Alpharetta, GA 30022
 www.controlscan.com

 
 From: Dan Yasny [dya...@gmail.com]
 Sent: Wednesday, September 25, 2013 12:14 PM
 To: Doug Bishop
 Cc: users@ovirt.org
 Subject: Re: [Users] Ovirt engine high availability

 There are plenty of options:
 1. Run the engine as a VM under a local libvirt, and cluster the libvirt
 VM as a protected service using RHCS (well tested and documented)
 2. Use the self hosted engine (recently announced, coming up tech)
 3. Use any other clustering technology you like best, an engine VM with a
 centrally backed storage is very easy to start on any capable host


 On Wed, Sep 25, 2013 at 10:06 AM, Doug Bishop 
 dbis...@controlscan.com wrote:
 Hello,

 Was wondering if anyone can shed some light on this subject as I could not
 find any documentation specific to this. I am currently running an ovirt
 3.2 cluster with 5 hosts with local storage. I have a dedicated box running
 ovirt engine. I would like to have the ability to run an additional box
 with ovirt engine on it for high availability of the portal. I realize this
 would most likely require I run a separate instance of postgres for the
 engine, however I was wondering if this is something that is officially
 supported?

 Thanks!

 Doug Bishop
 Sr. Systems Engineer
 dbis...@controlscan.com

 Controlscan, Inc.
 11475 Great Oaks Way, Suite 300
 Alpharetta, GA 30022
 www.controlscan.com
 

 ___
 Users mailing list
 Users@ovirt.orgmailto:Users@ovirt.orgmailto:Users@ovirt.orgmailto:
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



Re: [Users] Ovirt engine high availability

2013-09-25 Thread Doug Bishop
Yes. However we are using it for a dev / qa environments in house. We plan to 
roll rhev into production soon and will be using backend storage.

Doug Bishop
Sr. Systems Engineer
Cell: 678-848-6658
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com


From: Dan Yasny [dya...@gmail.com]
Sent: Wednesday, September 25, 2013 4:34 PM
To: Doug Bishop
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt engine high availability

The official way is to have a SAN or NFS in place, so you can allocate a LUN 
for the engine. Multiple hosts/DCs with local storage are quite the corner case 
IMO.


On Wed, Sep 25, 2013 at 2:31 PM, Doug Bishop 
dbis...@controlscan.com wrote:
Yeah, that was along the lines of what I was thinking. Just wanted to make sure 
there wasn't an official way to do it as well.

Thanks!

Doug Bishop
Sr. Systems Engineer
Cell: 678-848-6658
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com


From: Dan Yasny [dya...@gmail.com]
Sent: Wednesday, September 25, 2013 4:31 PM
To: Doug Bishop
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt engine high availability

The classic answer here would be to use DRBD to replicate the engine storage to 
a standby host's local storage.


On Wed, Sep 25, 2013 at 2:25 PM, Doug Bishop 
dbis...@controlscan.com wrote:
Those are good options, however since I am not running centralized storage, all 
my hosts are in their own data center with local storage options only. That's 
why I have two standalone machines to run ovirt on. I'm guessing this is not 
recommended? If not, is there a way to allow local storage and NFS-based 
storage in a data center?

Doug Bishop
Sr. Systems Engineer
Cell: 678-848-6658
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com


From: Dan Yasny [dya...@gmail.com]
Sent: Wednesday, September 25, 2013 12:14 PM
To: Doug Bishop
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt engine high availability

There are plenty of options:
1. Run the engine as a VM under a local libvirt, and cluster the libvirt VM as 
a protected service using RHCS (well tested and documented)
2. Use the self hosted engine (recently announced, coming up tech)
3. Use any other clustering technology you like best, an engine VM with a 
centrally backed storage is very easy to start on any capable host


On Wed, Sep 25, 2013 at 10:06 AM, Doug Bishop 
dbis...@controlscan.com wrote:
Hello,

Was wondering if anyone can shed some light on this subject as I could not find 
any documentation specific to this. I am currently running an ovirt 3.2 cluster 
with 5 hosts with local storage. I have a dedicated box running ovirt engine. I 
would like to have the ability to run an additional box with ovirt engine on it 
for high availability of the portal. I realize this would most likely require I 
run a separate instance of postgres for the engine, however I was wondering if 
this is something that is officially supported?

Thanks!

Doug Bishop
Sr. Systems Engineer
dbis...@controlscan.com

Controlscan, Inc.
11475 Great Oaks Way, Suite 300
Alpharetta, GA 30022
www.controlscan.com


Re: [Users] Ovirt engine high availability

2013-09-25 Thread Dan Yasny
Sounds like a good, solid plan to me


On Wed, Sep 25, 2013 at 2:34 PM, Doug Bishop dbis...@controlscan.com wrote:

 Yes. However we are using it for a dev / qa environments in house. We plan
 to roll rhev into production soon and will be using backend storage.

 Doug Bishop
 Sr. Systems Engineer
 Cell: 678-848-6658
 dbis...@controlscan.com

 Controlscan, Inc.
 11475 Great Oaks Way, Suite 300
 Alpharetta, GA 30022
 www.controlscan.com

 
 From: Dan Yasny [dya...@gmail.com]
 Sent: Wednesday, September 25, 2013 4:34 PM
 To: Doug Bishop
 Cc: users@ovirt.org
 Subject: Re: [Users] Ovirt engine high availability

 The official way is to have a SAN or NFS in place, so you can allocate a
 LUN for the engine. Multiple hosts/DCs with local storage are quite the
 corner case IMO.


 On Wed, Sep 25, 2013 at 2:31 PM, Doug Bishop dbis...@controlscan.com
 wrote:
 Yeah, that was along the lines of what I was thinking. Just wanted to make
 sure there wasn't an official way to do it as well.

 Thanks!

 Doug Bishop
 Sr. Systems Engineer
 Cell: 678-848-6658
 dbis...@controlscan.com

 Controlscan, Inc.
 11475 Great Oaks Way, Suite 300
 Alpharetta, GA 30022
 www.controlscan.com

 
 From: Dan Yasny [dya...@gmail.com]
 Sent: Wednesday, September 25, 2013 4:31 PM
 To: Doug Bishop
 Cc: users@ovirt.org
 Subject: Re: [Users] Ovirt engine high availability

 The classic answer here would be to use DRBD to replicate the engine
 storage to a standby host's local storage.


 On Wed, Sep 25, 2013 at 2:25 PM, Doug Bishop 
 dbis...@controlscan.com wrote:
 Those are good options, however since I am not running centralized
 storage, all my hosts are in their own data center with local storage
 options only. That's why I have two standalone machines to run ovirt on. I'm
 guessing this is not recommended? If not, is there a way to allow local
 storage and NFS-based storage in a data center?

 Doug Bishop
 Sr. Systems Engineer
 Cell: 678-848-6658
 dbis...@controlscan.com

 Controlscan, Inc.
 11475 Great Oaks Way, Suite 300
 Alpharetta, GA 30022
 www.controlscan.com
 

 
 From: Dan Yasny [dya...@gmail.com]
 Sent: Wednesday, September 25, 2013 12:14 PM
 To: Doug Bishop
 Cc: users@ovirt.org
 Subject: Re: [Users] Ovirt engine high availability

 There are plenty of options:
 1. Run the engine as a VM under a local libvirt, and cluster the libvirt
 VM as a protected service using RHCS (well tested and documented)
 2. Use the self hosted engine (recently announced, coming up tech)
 3. Use any other clustering technology you like best, an engine VM with a
 centrally backed storage is very easy to start on any capable host


 On Wed, Sep 25, 2013 at 10:06 AM, Doug
 Bishop dbis...@controlscan.com wrote:
 Hello,

 Was wondering if anyone can shed some light on this subject as I could not
 find any documentation specific to this. I am currently running an ovirt
 3.2 cluster with 5 hosts with local storage. I have a dedicated box running
 ovirt engine. I would like to have the ability to run an additional box
 with ovirt engine on it for high availability of the portal. I realize this
 would most likely require I run a separate instance of postgres for the
 engine, however I was wondering if this is something that is officially
 supported?

 Thanks!

 Doug Bishop
 Sr. Systems Engineer
 dbis...@controlscan.com

 Controlscan, Inc.
 11475 Great Oaks Way, Suite 300
 Alpharetta, GA 30022
 www.controlscan.com


[Users] importing VM from ESXI

2013-09-25 Thread emi...@gmail.com
Hi,

I'm not able to import a VM from ESXi:

[root@ovirt-mgmt ~]# LIBGUESTFS_TRACE=1 LIBGUESTFS_DEBUG=1 virt-v2v -ic
esx://10.11.12.123/?no_verify=1 -o rhev -os
10.11.12.222:/var/lib/exports/export
--network ovirtmgt MultiDektec
MultiDektec_MultiDektec: 100%
[==]D
0h36m54s
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: set_backend appliance
libguestfs: trace: set_backend = 0
libguestfs: create: flags = 0, handle = 0x4431d30, program = perl
libguestfs: trace: set_attach_method appliance
libguestfs: trace: set_backend appliance
libguestfs: trace: set_backend = 0
libguestfs: trace: set_attach_method = 0
libguestfs: trace: add_drive
/tmp/KxIoJI50Pc/2872ac3e-7340-4dfa-9801-0a1bd052b3a3/v2v._ApSlRZG/387a5113-bbc2-45a2-9c55-5dc3dade31a9/01c899de-131e-4407-a16c-8c5484ccb8bd
format:raw iface:ide name:sda
libguestfs: trace: add_drive = 0
libguestfs: trace: add_drive /tmp/to3mfS2mtB readonly:true format:raw
iface:ide
libguestfs: trace: add_drive = 0
libguestfs: trace: set_network true
libguestfs: trace: set_network = 0
libguestfs: trace: launch
libguestfs: trace: get_tmpdir
libguestfs: trace: get_tmpdir = /tmp
libguestfs: trace: get_backend
libguestfs: trace: get_backend = direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsbaJ32y
libguestfs: launch: umask=0022
libguestfs: launch: euid=36
libguestfs: command: run: supermin-helper
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ -u 36
libguestfs: command: run: \ -g 36
libguestfs: command: run: \ -f checksum
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ x86_64
supermin helper [8ms] whitelist = (not specified), host_cpu = x86_64,
kernel = (null), initrd = (null), appliance = (null)
supermin helper [9ms] inputs[0] = /usr/lib64/guestfs/supermin.d
checking modpath /lib/modules/3.10.10-200.fc19.x86_64 is a directory
checking modpath /lib/modules/3.11.1-200.fc19.x86_64 is a directory
checking modpath /lib/modules/3.10.11-200.fc19.x86_64 is a directory
picked vmlinuz-3.11.1-200.fc19.x86_64
supermin helper [00010ms] finished creating kernel
supermin helper [00010ms] visiting /usr/lib64/guestfs/supermin.d
supermin helper [00010ms] visiting /usr/lib64/guestfs/supermin.d/base.img
supermin helper [00055ms] visiting /usr/lib64/guestfs/supermin.d/daemon.img
supermin helper [00057ms] visiting /usr/lib64/guestfs/supermin.d/hostfiles
supermin helper [00419ms] visiting /usr/lib64/guestfs/supermin.d/init.img
supermin helper [00440ms] visiting
/usr/lib64/guestfs/supermin.d/udev-rules.img
supermin helper [00468ms] adding kernel modules
supermin helper [00528ms] finished creating appliance
libguestfs: checksum of existing appliance:
096bf8833aed53a33f7c0da2a5e685ad430bba5b50ae20a7e0325bca32cca6d2
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = /var/tmp
libguestfs: [00718ms] begin testing qemu features
libguestfs: command: run: /usr/bin/qemu-kvm
libguestfs: command: run: \ -nographic
libguestfs: command: run: \ -help
libguestfs: command: run: /usr/bin/qemu-kvm
libguestfs: command: run: \ -nographic
libguestfs: command: run: \ -version
libguestfs: qemu version 1.4
libguestfs: command: run: /usr/bin/qemu-kvm
libguestfs: command: run: \ -nographic
libguestfs: command: run: \ -machine accel=kvm:tcg
libguestfs: command: run: \ -device ?
libguestfs: [01607ms] finished testing qemu features
[01608ms] /usr/bin/qemu-kvm \
-global virtio-blk-pci.scsi=off \
-nodefconfig \
-nodefaults \
-nographic \
-machine accel=kvm:tcg \
-m 500 \
-no-reboot \
-no-hpet \
-kernel /var/tmp/.guestfs-36/kernel.52160 \
-initrd /var/tmp/.guestfs-36/initrd.52160 \
-device virtio-scsi-pci,id=scsi \
-drive
file=/tmp/KxIoJI50Pc/2872ac3e-7340-4dfa-9801-0a1bd052b3a3/v2v._ApSlRZG/387a5113-bbc2-45a2-9c55-5dc3dade31a9/01c899de-131e-4407-a16c-8c5484ccb8bd,cache=none,format=raw,id=hd0,if=ide
\
-drive file=/tmp/to3mfS2mtB,snapshot=on,format=raw,id=hd1,if=ide \
-drive
file=/var/tmp/.guestfs-36/root.52160,snapshot=on,id=appliance,if=none,cache=unsafe
\
-device scsi-hd,drive=appliance \
-device virtio-serial \
-serial stdio \
-device sga \
-chardev socket,path=/tmp/libguestfsbaJ32y/guestfsd.sock,id=channel0 \
-device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
-netdev user,id=usernet,net=169.254.0.0/16 \
-device virtio-net-pci,netdev=usernet \
-append 'panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off
printk.time=1 cgroup_disable=memory root=/dev/sdc selinux=0
guestfs_verbose=1 TERM=screen'
qemu-system-x86_64: -drive
file=/tmp/KxIoJI50Pc/2872ac3e-7340-4dfa-9801-0a1bd052b3a3/v2v._ApSlRZG/387a5113-bbc2-45a2-9c55-5dc3dade31a9/01c899de-131e-4407-a16c-8c5484ccb8bd,cache=none,format=raw,id=hd0,if=ide:
could not open 

Re: [Users] vdsm live migration errors in latest master

2013-09-25 Thread Dan Kenigsberg
On Tue, Sep 24, 2013 at 12:04:14PM -0400, Federico Simoncelli wrote:
 - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: Dead Horse deadhorseconsult...@gmail.com
  Cc: users@ovirt.org, vdsm-de...@fedorahosted.org, 
  fsimo...@redhat.com, aba...@redhat.com
  Sent: Tuesday, September 24, 2013 11:44:48 AM
  Subject: Re: [Users] vdsm live migration errors in latest master
  
  On Mon, Sep 23, 2013 at 04:05:34PM -0500, Dead Horse wrote:
   Seeing failed live migrations and these errors in the vdsm logs with 
   latest
   VDSM/Engine master.
   Hosts are EL6.4
  
  Thanks for posting this report.
  
  The log is from the source of migration, right?
  Could you trace the history of the hosts of this VM? Could it be that it
  was started on an older version of vdsm (say ovirt-3.3.0) and then (due
  to migration or vdsm upgrade) got into a host with a much newer vdsm?
  
  Would you share the vmCreate (or vmMigrationCreate) line for this Vm in
  your log? It smells like an unintended regression of
  http://gerrit.ovirt.org/17714
  vm: extend shared property to support locking
  
  solving it may not be trivial, as we should not call
  _normalizeDriveSharedAttribute() automatically on migration destination,
  as it may well still be a part of a 3.3 clusterLevel.
  
  Also, migration from vdsm with extended shared property, to an ovirt 3.3
  vdsm is going to explode (in a different way), since the destination
  does not expect the extended values.
  
  Federico, do we have a choice but to revert that patch, and use
  something like shared3 property instead?
 
 I filed a bug at:
 
 https://bugzilla.redhat.com/show_bug.cgi?id=1011608
 
 A possible fix could be:
 
 http://gerrit.ovirt.org/#/c/19509

Beyond this, we must make sure that on Engine side, the extended shared
values would be used only for clusterLevel 3.4 and above.

Are the extended shared values already used by Engine?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Disk state - Illegal?

2013-09-25 Thread Andrew Lau
I noticed that too; I wasn't sure if it was a bug or just how I had set up
my NFS share.

There were three steps I did to remove the disk images; I'm sure there's a
far easier solution:

I found the easiest way (graphically) was to go to your
https://ovirtengine/api/disks and do a search for the illegal disk. Append
the disk ID (e.g. disk href=/api/disks/lk342-dfsdf...)
to the URL; this'll give you your image ID.

Go to your storage share:
cd /data/storage-id/master/vms/storage-id
grep -ir 'vmname' *
You'll find the image-id reference here too.

Then the image you will want to remove is in the
/data/storage-id/images/image-id
I assume you could safely remove this whole folder if you wanted to delete
the disk.

To remove the illegal state I did it through the API, so again with the URL
above, https://ovirtengine/api/disks/disk-id, send a DELETE using HTTP/curl.

Again, this was a poor mans solution but it worked for me.
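
The DELETE in the last step can also be scripted. A minimal stdlib Python sketch follows; the engine URL, credentials, and disk ID are placeholders, and it assumes the `/api/disks/<id>` REST layout used above:

```python
import base64
import urllib.request

def disk_url(engine_base, disk_id):
    """Build the REST URL for a disk, matching the /api/disks/<id> path above."""
    return "{0}/api/disks/{1}".format(engine_base.rstrip("/"), disk_id)

def delete_request(engine_base, disk_id, user, password):
    # Construct (but do not send) an HTTP DELETE with basic auth,
    # equivalent to the curl invocation described in the thread.
    req = urllib.request.Request(disk_url(engine_base, disk_id), method="DELETE")
    token = base64.b64encode("{0}:{1}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req  # send with urllib.request.urlopen(req) against a live engine

if __name__ == "__main__":
    # Placeholder values -- substitute your engine and the illegal disk's ID.
    req = delete_request("https://ovirtengine", "disk-id", "admin@internal", "secret")
    print(req.get_method(), req.full_url)
```

Sending the request against a self-signed engine certificate may additionally require an SSL context that skips verification.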

On Thu, Sep 26, 2013 at 4:04 AM, Dan Ferris
dfer...@prometheusresearch.com wrote:

 Hi,

 I have another hopefully simple question.

 One VM that I am trying to remove says that its disk state is illegal,
 and when I try to remove the disk it says that it failed to initiate the
 removal of the disk.

 Is there an easy way to get rid of these illegal disk images?

 Dan
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users