Re: [Users] oVirt 3.0 - VM shows paused status only

2013-05-07 Thread Omer Frenkel
- Original Message -

> From: "Sven Knohsalla" 
> To: users@ovirt.org
> Sent: Wednesday, May 8, 2013 8:24:12 AM
> Subject: [Users] oVirt 3.0 - VM shows paused status only

> Hi,

> it seems I didn’t give enough time for one VM to shut down (on a 2.3.0 HV)
> properly, so it ended up as “paused”.

> Unfortunately I don’t have the option to unpause; the commands
> shutdown/power off/start are all failing.

> Makes sense as the VM isn’t running on any hypervisor anymore,

> so it must be a DB entry which has been set by the engine?

> Is there a workaround/command to reset a paused VM on the engine/DB to set
> the status to powered off?

Well, there is, but I would suggest trying the safer way first.
What is the status of your hosts?
What is reported (in the UI) as the host this VM is running on?
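For reference, the DB-level workaround alluded to here usually means forcing the VM's runtime status back to Down in the engine database. The table/column names and the status code below are assumptions based on the 3.x-era engine schema (vm_dynamic holds runtime state, vm_static maps names to GUIDs, status 0 = Down); verify against your installation, stop ovirt-engine, and back up the database before trying anything like this:

```sql
-- Check what the engine currently believes ('MyVM' is a placeholder name):
SELECT vm_guid, status FROM vm_dynamic
 WHERE vm_guid = (SELECT vm_guid FROM vm_static WHERE vm_name = 'MyVM');

-- Force the status back to Down (assumed to be 0):
UPDATE vm_dynamic SET status = 0
 WHERE vm_guid = (SELECT vm_guid FROM vm_static WHERE vm_name = 'MyVM');
```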

> Thanks in advance for your help!

> Best,

> Sven.

> Sven Knohsalla | System Administration

> Office +49 631 68036 433 | Fax +49 631 68036 111 |E-Mail
> s.knohsa...@netbiscuits.com | Skype: netbiscuits.admin

> Netbiscuits GmbH | Europaallee 10 | 67657 | GERMANY

> Register Court: Local Court Kaiserslautern | Commercial Register ID: HR B
> 3604
> Management Board : Guido Moggert, Michael Neidhöfer, Christian Reitz, Martin
> Süß

> This message and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> It may also be privileged or otherwise protected by work product immunity or
> other legal rules. Please notify the sender immediately by e-mail if you
> have received this e-mail by mistake and delete this e-mail from your
> system. If you are not the intended recipient you are notified that
> disclosing, copying, distributing or taking any action in reliance on the
> contents of this information is strictly prohibited.

> Warning: Although Netbiscuits has taken reasonable precautions to ensure no
> viruses are present in this email, the company cannot accept responsibility
> for any loss or damage arising from the use of this email or attachments.

> Please consider the environment before printing

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] v2v error

2013-05-07 Thread Michael Wagenknecht

Hi Jose,
the disk size has to be the same and the "Allocation Policy" has to be 
the right one (see my first answer).


Regards,  Michael


On 07.05.2013 18:51, supo...@logicworks.pt wrote:

Hi,

I did that but it fails; maybe the KVM disk has to be the same size or 
bigger?


Regards
Jose


*From: *"Michael Wagenknecht" 
*To: *users@ovirt.org
*Sent: *Tuesday, 7 May 2013 12:47:52
*Subject: *Re: [Users] v2v error

Hi,
the folder images/ contains the virtual disks. Every disk is in a 
separate folder.

You can ignore the .meta file in the images folder.
The other one is the image file. You can check it with the command:
qemu-img info f89d8ca3-f65c-45a0-8f2e-df2f05014d0b

The folder master/vms/ contains a description file of the vm in 
the .ovf format.
When your VM uses more than one virtual disk, you can find the 
imgIds of the disks there.
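Pulling the disk image IDs out of the exported .ovf can be scripted; the fragment below is a deliberately minimal, made-up illustration (real oVirt export OVFs are larger and their attribute set varies by version; the convention assumed here is that ovf:fileRef holds "imgId/volumeId"):

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative OVF fragment -- not a real export file.
SAMPLE_OVF = """<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/">
  <Section xsi:type="ovf:DiskSection_Type" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <Disk ovf:diskId="f89d8ca3-f65c-45a0-8f2e-df2f05014d0b"
          ovf:fileRef="2a95635c-3e1b-45a6-be8c-fa66317b6475/f89d8ca3-f65c-45a0-8f2e-df2f05014d0b"/>
  </Section>
</ovf:Envelope>"""

def disk_image_ids(ovf_xml):
    """Return the image-directory IDs referenced by the OVF's Disk entries,
    assuming fileRef is formatted as "<imgId>/<volumeId>"."""
    ns = "{http://schemas.dmtf.org/ovf/envelope/1/}"
    root = ET.fromstring(ovf_xml)
    ids = []
    for disk in root.iter("Disk"):
        file_ref = disk.get(ns + "fileRef", "")
        if "/" in file_ref:
            ids.append(file_ref.split("/")[0])
    return ids

print(disk_image_ids(SAMPLE_OVF))
```

Each returned ID names a directory under images/ in the export domain, which is how a VM with several disks can be matched to its files.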


Regards,   Michael


On 07.05.2013 13:24, supo...@logicworks.pt wrote:

Hi Michael,

Thanks. One question: my exported VM has the name Fedora; how can I
identify it on the system?
In the export domain I have
drwxr-xr-x. 2 vdsm kvm 4096 Apr 12 17:07 dom_md
drwxr-xr-x. 3 vdsm kvm 4096 May  7 10:57 images
drwxr-xr-x. 4 vdsm kvm 4096 Apr 12 17:08 master

and in

/mnt/398da79f-53e1-43dd-8836-e58edd4de975/images/2a95635c-3e1b-45a6-be8c-fa66317b6475
I have
-rw-rw. 1 vdsm kvm 1073741824 May  7 10:59
f89d8ca3-f65c-45a0-8f2e-df2f05014d0b
-rw-r--r--. 1 vdsm kvm 269 May  7 10:57
f89d8ca3-f65c-45a0-8f2e-df2f05014d0b.meta

which one is the exported VM?


*From: *"Michael Wagenknecht" 
*To: *users@ovirt.org
*Sent: *Tuesday, 7 May 2013 7:33:49
*Subject: *Re: [Users] v2v error

Hi,
I had the same problem some weeks ago. After many hours of trial and
error I took another approach.
I made a new VM on the oVirt engine with the same parameters as the
KVM VM. Especially the "Allocation Policy" of the virtual disk
is important (Preallocated for raw images and Thin Provision for
qcow images). Then I exported the VM, copied the image from the
KVM server to the oVirt export folder, and overwrote the empty one.
Please check the permissions. Now you can import the VM.
I have done this with a lot of Linux and Windows VMs.

Regards,  Michael


On 07.05.2013 01:52, supo...@logicworks.pt wrote:

This is what I get if I run LIBGUESTFS_DEBUG=1 virt-v2v -i
libvirt -ic qemu+ssh://root@IP-Address/system -o rhev -os
nfs.domain.local:/ovirt/export -of qcow2 -oa sparse -n
ovirtmgmt VM_Name

Fedora14.qcow2: 100%
[===]D 0h59m31s
libguestfs: create: flags = 0, handle = 0x2d8fdd0
libguestfs: launch: attach-method=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfs4jk5Wh
libguestfs: launch: umask=0022
libguestfs: launch: euid=36
libguestfs: libvirt version = 10002 (0.10.2)
libguestfs: [0ms] connect to libvirt
libguestfs: opening libvirt handle: URI = NULL, auth =
virConnectAuthPtrDefault, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x2d7b690
libguestfs: [00164ms] get libvirt capabilities
libguestfs: [05465ms] parsing capabilities XML
libguestfs: [05467ms] build appliance
libguestfs: command: run: febootstrap-supermin-helper
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ -u 36
libguestfs: command: run: \ -g 36
libguestfs: command: run: \ -f checksum
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ x86_64
supermin helper [2ms] whitelist = (not specified),
host_cpu = x86_64, kernel = (null), initrd = (null), appliance
= (null)
supermin helper [2ms] inputs[0] =
/usr/lib64/guestfs/supermin.d
checking modpath /lib/modules/3.8.6-203.fc18.x86_64 is a directory
picked vmlinuz-3.8.6-203.fc18.x86_64 because modpath
/lib/modules/3.8.6-203.fc18.x86_64 exists
checking modpath /lib/modules/3.8.11-200.fc18.x86_64 is a
directory
picked vmlinuz-3.8.11-200.fc18.x86_64 because modpath
/lib/modules/3.8.11-200.fc18.x86_64 exists
checking modpath /lib/modules/3.8.8-202.fc18.x86_64 is a directory
picked vmlinuz-3.8.8-202.fc18.x86_64 because modpath
/lib/modules/3.8.8-202.fc18.x86_64 exists
supermin helper [2ms] finished creating kernel
supermin helper [3ms] visiting /usr/lib64/guestfs/supermin.d
supermin helper [3ms] visiting
/usr/lib64/guestfs/supermin.d/base.img
supermin helper [00010ms] visiting
/usr/lib64/guestfs/supermin.d/daemon.

[Users] oVirt 3.0 - VM shows paused status only

2013-05-07 Thread Sven Knohsalla
Hi,

it seems I didn't give enough time for one VM to shut down (on a 2.3.0 HV) 
properly, so it ended up as "paused".

Unfortunately I don't have the option to unpause; the commands 
shutdown/power off/start are all failing.
Makes sense as the VM isn't running on any hypervisor anymore,
so it must be a DB entry which has been set by the engine?

Is there a workaround/command to reset a paused VM on the engine/DB to set the 
status to powered off?

Thanks in advance for your help!

Best,
Sven.


Sven Knohsalla | System Administration

Office +49 631 68036 433 | Fax +49 631 68036 111  |E-Mail 
s.knohsa...@netbiscuits.com | Skype: netbiscuits.admin
Netbiscuits GmbH | Europaallee 10 | 67657 | GERMANY


Register Court: Local Court Kaiserslautern | Commercial Register ID: HR B 3604
Management Board: Guido Moggert, Michael Neidhöfer, Christian Reitz, Martin Süß


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Guest never gets removed

2013-05-07 Thread Karli Sjöberg
Hi,

yesterday I issued a remove of a Guest, and immediately afterwards engine 
started complaining about sync errors between database and storage metadata. To 
my surprise, both of my two data stores were marked as master in their storage 
metadata files. I had to change "MASTER_VERSION" and "ROLE", then tail the vdsm 
log on a Host and watch it complain that the checksum is wrong and that it is 
${X} but should be ${Y} according to its logic, and copy/paste the correct 
checksum into the storage metadata file as well. This had to be done for both data 
domains, and when done, I have one "Master" and one "Regular" domain that 
works and matches what's in the database.

The problem is that engine now apparently has forgotten about my Guest that it 
was supposed to remove before all of this... The operation is still listed in 
"Tasks", so it's not quite finished yet. Since it's changed state to 
"hourglass" and status "Image locked", I can't click for removal again either...

Is there any way to "remind" engine of a job not done?

--

With kind regards
---
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone:  +46-(0)18-67 15 66
karli.sjob...@slu.se
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Looking for ovirt 3.3 nightly build Installation Instructions

2013-05-07 Thread Alon Bar-Lev
As far as I know you should either localinstall all the dependencies or declare a yum 
repository.

Best is to declare a yum repository:

/etc/yum.repos.d/ovirt-nightly.repo
---
[ovirt-nightly]
name=ovirt-nightly
baseurl=http://resources.ovirt.org/releases/nightly/rpm/Fedora/18/
enabled=1
gpgcheck=0
priority=1
protect=1
---

Then just run:
# yum install ovirt-engine

- Original Message -
> From: "Sherry Yu" 
> To: users@ovirt.org
> Sent: Tuesday, May 7, 2013 11:07:31 PM
> Subject: [Users] Looking for ovirt 3.3 nightly build Installation 
> Instructions
> 
> I am trying to install ovirt-engine 3.3 from the nightly build. However "yum
> localinstall
> ovirt-engine-3.3.0-0.2.master.20130506150450.gitab5c237.fc18.noarch.rpm"
> complains that dependencies are missing, but installing a dependency complains
> that ovirt-engine is missing. For example ovirt-engine requires
> ovirt-engine-backend, but "yum localinstall
> ovirt-engine-backend-3.3.0-0.2.master.20130506150450.gitab5c237.fc18.noarch.rpm"
> says that ovirt-engine is missing.
> 
> This happens with other dependencies.
> 
> How do I break this circular dependency loop? Am I missing anything?
> 
> Thanks!
> Sherry
> 
> --
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Looking for ovirt 3.3 nightly build Installation Instructions

2013-05-07 Thread Sherry Yu
I am trying to install ovirt-engine 3.3 from the nightly build. However "yum 
localinstall 
ovirt-engine-3.3.0-0.2.master.20130506150450.gitab5c237.fc18.noarch.rpm" 
complains that dependencies are missing, but installing a dependency complains that 
ovirt-engine is missing. For example ovirt-engine requires 
ovirt-engine-backend, but "yum localinstall 
ovirt-engine-backend-3.3.0-0.2.master.20130506150450.gitab5c237.fc18.noarch.rpm"
 says that ovirt-engine is missing. 

This happens with other dependencies. 

How do I break this circular dependency loop? Am I missing anything? 

Thanks! 
Sherry 

-- 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] v2v error

2013-05-07 Thread suporte
Hi, 

I did that but it fails; maybe the KVM disk has to be the same size or bigger? 

Regards 
Jose 

- Original Message -

From: "Michael Wagenknecht"  
To: users@ovirt.org 
Sent: Tuesday, 7 May 2013 12:47:52 
Subject: Re: [Users] v2v error 

Hi, 
the folder images/ contains the virtual disks. Every disk is in a separate 
folder. 
You can ignore the .meta file in the images folder. 
The other one is the image file. You can check it with the command: 
qemu-img info f89d8ca3-f65c-45a0-8f2e-df2f05014d0b 

The folder master/vms/ contains a description file of the vm in the .ovf 
format. 
When your VM uses more than one virtual disk, you can find the imgIds of 
the disks there. 

Regards, Michael 


Am 07.05.2013 13:24, schrieb supo...@logicworks.pt : 


Hi Michael, 

Thanks. One question: my exported VM has the name Fedora; how can I identify it 
on the system? 
In the export domain I have 
drwxr-xr-x. 2 vdsm kvm 4096 Apr 12 17:07 dom_md 
drwxr-xr-x. 3 vdsm kvm 4096 May 7 10:57 images 
drwxr-xr-x. 4 vdsm kvm 4096 Apr 12 17:08 master 

and in 
/mnt/398da79f-53e1-43dd-8836-e58edd4de975/images/2a95635c-3e1b-45a6-be8c-fa66317b6475
 
I have 
-rw-rw. 1 vdsm kvm 1073741824 May 7 10:59 
f89d8ca3-f65c-45a0-8f2e-df2f05014d0b 
-rw-r--r--. 1 vdsm kvm 269 May 7 10:57 
f89d8ca3-f65c-45a0-8f2e-df2f05014d0b.meta 

which one is the exported VM? 

- Original Message -

From: "Michael Wagenknecht"  
To: users@ovirt.org 
Sent: Tuesday, 7 May 2013 7:33:49 
Subject: Re: [Users] v2v error 

Hi, 
I had the same problem some weeks ago. After many hours of trial and error I 
took another approach. 
I made a new VM on the oVirt engine with the same parameters as the KVM VM. 
Especially the "Allocation Policy" of the virtual disk is important 
(Preallocated for raw images and Thin Provision for qcow images). Then I 
exported the VM, copied the image from the KVM server to the oVirt export 
folder, and overwrote the empty one. Please check the permissions. Now you can 
import the VM. 
I have done this with a lot of Linux and Windows VMs. 

Regards, Michael 


On 07.05.2013 01:52, supo...@logicworks.pt wrote:



This is what I get if I run LIBGUESTFS_DEBUG=1 virt-v2v -i libvirt -ic 
qemu+ssh://root@IP-Address/system -o rhev -os nfs.domain.local:/ovirt/export 
-of qcow2 -oa sparse -n ovirtmgmt VM_Name 



Fedora14.qcow2: 100% [===]D 
0h59m31s 
libguestfs: create: flags = 0, handle = 0x2d8fdd0 
libguestfs: launch: attach-method=libvirt 
libguestfs: launch: tmpdir=/tmp/libguestfs4jk5Wh 
libguestfs: launch: umask=0022 
libguestfs: launch: euid=36 
libguestfs: libvirt version = 10002 (0.10.2) 
libguestfs: [0ms] connect to libvirt 
libguestfs: opening libvirt handle: URI = NULL, auth = 
virConnectAuthPtrDefault, flags = 0 
libguestfs: successfully opened libvirt handle: conn = 0x2d7b690 
libguestfs: [00164ms] get libvirt capabilities 
libguestfs: [05465ms] parsing capabilities XML 
libguestfs: [05467ms] build appliance 
libguestfs: command: run: febootstrap-supermin-helper 
libguestfs: command: run: \ --verbose 
libguestfs: command: run: \ -u 36 
libguestfs: command: run: \ -g 36 
libguestfs: command: run: \ -f checksum 
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d 
libguestfs: command: run: \ x86_64 
supermin helper [2ms] whitelist = (not specified), host_cpu = x86_64, 
kernel = (null), initrd = (null), appliance = (null) 
supermin helper [2ms] inputs[0] = /usr/lib64/guestfs/supermin.d 
checking modpath /lib/modules/3.8.6-203.fc18.x86_64 is a directory 
picked vmlinuz-3.8.6-203.fc18.x86_64 because modpath 
/lib/modules/3.8.6-203.fc18.x86_64 exists 
checking modpath /lib/modules/3.8.11-200.fc18.x86_64 is a directory 
picked vmlinuz-3.8.11-200.fc18.x86_64 because modpath 
/lib/modules/3.8.11-200.fc18.x86_64 exists 
checking modpath /lib/modules/3.8.8-202.fc18.x86_64 is a directory 
picked vmlinuz-3.8.8-202.fc18.x86_64 because modpath 
/lib/modules/3.8.8-202.fc18.x86_64 exists 
supermin helper [2ms] finished creating kernel 
supermin helper [3ms] visiting /usr/lib64/guestfs/supermin.d 
supermin helper [3ms] visiting /usr/lib64/guestfs/supermin.d/base.img 
supermin helper [00010ms] visiting /usr/lib64/guestfs/supermin.d/daemon.img 
supermin helper [00052ms] visiting /usr/lib64/guestfs/supermin.d/hostfiles 
supermin helper [00748ms] visiting /usr/lib64/guestfs/supermin.d/init.img 
supermin helper [00762ms] visiting /usr/lib64/guestfs/supermin.d/udev-rules.img 
supermin helper [00770ms] adding kernel modules 
supermin helper [00898ms] finished creating appliance 
libguestfs: checksum of existing appliance: 
7705d920c064a722def20ac25b6fb162ceec1019efac541dacfea976a5540326 
libguestfs: [06509ms] begin building supermin appliance 
libguestfs: [06509ms] run supermin-helper 
libguestfs: command: run: febootstrap-supermin-helper 
libguestfs: command: run: \ --verbose 
libguestfs: command: run: \ -u 36 
libguestfs: command: run: \ -g 36 
libgues

Re: [Users] NFS Export Domain

2013-05-07 Thread suporte
Ok thanks, I'll try it. 

- Original Message -

From: "Michael Kublin"  
To: supo...@logicworks.pt 
Cc: "Gianluca Cecchi" , "users"  
Sent: Tuesday, 7 May 2013 17:13:22 
Subject: Re: [Users] NFS Export Doamin 





- Original Message - 
> From: supo...@logicworks.pt 
> To: "Michael Kublin"  
> Cc: "Gianluca Cecchi" , "users"  
> Sent: Sunday, May 5, 2013 10:02:08 PM 
> Subject: Re: [Users] NFS Export Doamin 
> 
> I found these errors in the events: 
> 
> The error message for connection Export_Domain_IP:/nfs_name returned by VDSM 
> was: generalexception 
> Failed to connect Host node2 to the Storage Domains Name_of_Export_Domain. 
> Failed to connect Host node1 to Storage Pool acloudDC 
> 
> I'm running version 3.2.1, is there a way to fix it? 

Hi, first of all you can move this domain to Maintenance, and after that 
activate a host. 
If you have only one host, and the domain is in status Unknown, please do the 
following: 
put the host in maintenance, and after that 
run the following query: update storage_pool_iso_map set status = 6 
where storage_id=... 
(Info about domains is located inside the storage_domain_static table; 6 means 
Maintenance) 
> 
> Jose 
> - Original Message - 
> 
> From: "Michael Kublin"  
> To: "Gianluca Cecchi"  
> Cc: "users"  
> Sent: Sunday, 5 May 2013 11:37:59 
> Subject: Re: [Users] NFS Export Doamin 
> 
> 
> 
> 
> 
> - Original Message - 
> > From: "Gianluca Cecchi"  
> > To: "Allon Mureinik"  
> > Cc: "users"  
> > Sent: Sunday, May 5, 2013 10:31:35 AM 
> > Subject: Re: [Users] NFS Export Doamin 
> > 
> > 
> > 
> > 
> > Il giorno 05/mag/2013 08:21, "Allon Mureinik" < amure...@redhat.com > ha 
> > scritto: 
> > > 
> > > Frankly, the host status shouldn't depend on any domain status, as a 
> > > series 
> > > of recent patches fixed. 
> Host status never depends on the status of export/import/iso domains 
> > > I don't think it got into oVirt 3.2, however. 
> > > 
> > > I'm wondering if this work should be backported to 3.2.1/3.2.2, but it's 
> > > quite a major change, not sure the risk justifies it... 
> > > 
> > >  
> > >> 
> > >> From: supo...@logicworks.pt 
> > >> To: users@ovirt.org 
> > >> Sent: Saturday, May 4, 2013 10:26:19 PM 
> > >> Subject: [Users] NFS Export Doamin 
> > >> 
> > >> 
> > >> Hi, 
> > >> 
> > >> I'm running oVirt 3.2. I had to restart the whole system; when the system 
> > >> (engine, hosts and storage) came up, the NFS export domain was not 
> > >> connected to the switch, and I noticed that the hosts did not come up 
> > >> because of that. This is not a main storage domain; is this normal behavior? 
> > >> 
> No, the behaviour is not normal. 
> The status of a host never depends and should not depend on the status of export 
> (also import/iso) domains. 
> Did you see something like this in the engine logs: 
> "Domain export-domain-name was reported with error code  " 
> > >> Jose 
> > >> 
> > >> -- 
> > >>  
> > >> Jose Ferradeira 
> > >> http://www.logicworks.pt 
> > >> 
> > >> 
> > >> ___ 
> > >> Users mailing list 
> > >> Users@ovirt.org 
> > >> http://lists.ovirt.org/mailman/listinfo/users 
> > > 
> > > 
> > > 
> > > ___ 
> > > Users mailing list 
> > > Users@ovirt.org 
> > > http://lists.ovirt.org/mailman/listinfo/users 
> > > 
> > At least don't let a downed export or ISO domain take the whole 
> > infrastructure down... 
> > 
> > ___ 
> > Users mailing list 
> > Users@ovirt.org 
> > http://lists.ovirt.org/mailman/listinfo/users 
> > 
> ___ 
> Users mailing list 
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 
>  
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NFS Export Domain

2013-05-07 Thread Michael Kublin




- Original Message -
> From: supo...@logicworks.pt
> To: "Michael Kublin" 
> Cc: "Gianluca Cecchi" , "users" 
> Sent: Sunday, May 5, 2013 10:02:08 PM
> Subject: Re: [Users] NFS Export Doamin
> 
> I found these errors in the events:
> 
> The error message for connection Export_Domain_IP:/nfs_name returned by VDSM
> was: generalexception
> Failed to connect Host node2 to the Storage Domains Name_of_Export_Domain.
> Failed to connect Host node1 to Storage Pool acloudDC
> 
> I'm running version 3.2.1, is there a way to fix it?

Hi, first of all you can move this domain to Maintenance, and after that 
activate a host.
If you have only one host, and the domain is in status Unknown, please do the 
following:
put the host in maintenance, and after that
run the following query:  update storage_pool_iso_map set status = 6 
where storage_id=...
(Info about domains is located inside the storage_domain_static table; 6 means 
Maintenance) 
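The query above is run against the engine database with psql. The lookup below is a sketch under the assumption that storage_domain_static maps domain names to the id used as storage_id (true for 3.x-era schemas, but verify, and back up the database first):

```sql
-- Find the domain's UUID (the storage_id used in storage_pool_iso_map):
SELECT id, storage_name FROM storage_domain_static;

-- Force the domain's pool status to Maintenance (6):
UPDATE storage_pool_iso_map SET status = 6
 WHERE storage_id = '<domain-uuid-from-previous-query>';
```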
> 
> Jose
> - Original Message -
> 
> From: "Michael Kublin" 
> To: "Gianluca Cecchi" 
> Cc: "users" 
> Sent: Sunday, 5 May 2013 11:37:59
> Subject: Re: [Users] NFS Export Doamin
> 
> 
> 
> 
> 
> - Original Message -
> > From: "Gianluca Cecchi" 
> > To: "Allon Mureinik" 
> > Cc: "users" 
> > Sent: Sunday, May 5, 2013 10:31:35 AM
> > Subject: Re: [Users] NFS Export Doamin
> > 
> > 
> > 
> > 
> > Il giorno 05/mag/2013 08:21, "Allon Mureinik" < amure...@redhat.com > ha
> > scritto:
> > > 
> > > Frankly, the host status shouldn't depend on any domain status, as a
> > > series
> > > of recent patches fixed.
> Host status never depends on the status of export/import/iso domains
> > > I don't think it got into oVirt 3.2, however.
> > > 
> > > I'm wondering if this work should be backported to 3.2.1/3.2.2, but it's
> > > quite a major change, not sure the risk justifies it...
> > > 
> > > 
> > >> 
> > >> From: supo...@logicworks.pt
> > >> To: users@ovirt.org
> > >> Sent: Saturday, May 4, 2013 10:26:19 PM
> > >> Subject: [Users] NFS Export Doamin
> > >> 
> > >> 
> > >> Hi,
> > >> 
> > >> I'm running oVirt 3.2. I had to restart the whole system; when the system
> > >> (engine, hosts and storage) came up, the NFS export domain was not
> > >> connected to the switch, and I noticed that the hosts did not come up
> > >> because of that. This is not a main storage domain; is this normal behavior?
> > >> 
> No, the behaviour is not normal.
> The status of a host never depends and should not depend on the status of export
> (also import/iso) domains.
> Did you see something like this in the engine logs:
> "Domain export-domain-name was reported with error code  "
> > >> Jose
> > >> 
> > >> --
> > >> 
> > >> Jose Ferradeira
> > >> http://www.logicworks.pt
> > >> 
> > >> 
> > >> ___
> > >> Users mailing list
> > >> Users@ovirt.org
> > >> http://lists.ovirt.org/mailman/listinfo/users
> > > 
> > > 
> > > 
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > > 
> > At least don't let a downed export or ISO domain take the whole
> > infrastructure down...
> > 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> > 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] NFS domain and LVM

2013-05-07 Thread Koch (ovido)
Hi,

I just rebooted my NFS server, which hosts an ISO domain used by
RHEV 3.1 and oVirt 3.2. In both environments the ISO domain became
inactive and I tried to activate it again.
In RHEV this worked fine, but in oVirt it didn't.

Just for your information: both RHEV and oVirt use "Local on host" as
the datacenter storage type.
I normally use a local NFS server and mount the share locally (DC with
storage type NFS), as "Local on host" causes trouble in most of my
setups (in both oVirt and RHEV); the issues are mainly missing
(sometimes lost) volume groups.
As my oVirt 3.2 setup is a testing environment and my test VMs are still
running, I'll keep it in this state while looking for a solution. As
said, I have seen issues like this more than once before, so maybe
this is a bug that needs some attention (I can also open a bug report if
needed).


The engine log is telling me that the storage domain doesn't exist (as 
reported by the vdsm daemon):

2013-05-07 16:44:18,493 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] 
(pool-3-thread-47) [210a0bcb] START, ActivateStorageDomainVDSCommand( 
storagePoolId = 484e62d7-7a01-4b5e-aec8-59d366100281, ignoreFailoverLimit = 
false, compatabilityVersion = null, storageDomainId = 
a4c43175-ce34-49a5-8608-cac573bf7647), log id: 33c54af7
2013-05-07 16:44:20,770 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-47) [210a0bcb] Failed in ActivateStorageDomainVDS method
2013-05-07 16:44:20,781 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-47) [210a0bcb] Error code StorageDomainDoesNotExist and
error message IRSGenericException: IRSErrorException: Failed to
ActivateStorageDomainVDS, error = Storage domain does not exist:
('a4c43175-ce34-49a5-8608-cac573bf7647',)
2013-05-07 16:44:20,791 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-3-thread-47) [210a0bcb]
IrsBroker::Failed::ActivateStorageDomainVDS due to: IRSErrorException:
IRSGenericException: IRSErrorException: Failed to
ActivateStorageDomainVDS, error = Storage domain does not exist:
('a4c43175-ce34-49a5-8608-cac573bf7647',)


More interesting is the behavior of my host (CentOS 6.4) which tries to
find a logical volume and is doing some iSCSI-scans:

Thread-224450::DEBUG::2013-05-07
16:44:19,333::task::957::TaskManager.Task::(_decref)
Task=`ab00697e-32df-43b1-822b-a94bef55909d`::ref 0 aborting False
Thread-1129392::DEBUG::2013-05-07
16:44:20,555::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo
-n /sbin/multipath' (cwd None)
Thread-1129392::DEBUG::2013-05-07
16:44:20,608::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-1129392::DEBUG::2013-05-07
16:44:20,609::lvm::477::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,609::lvm::479::OperationMutex::(_invalidateAllPvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,609::lvm::488::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' got the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::lvm::490::OperationMutex::(_invalidateAllVgs) Operation
'lvm invalidate operation' released the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::lvm::508::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' got the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::lvm::510::OperationMutex::(_invalidateAllLvs) Operation
'lvm invalidate operation' released the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::misc::1064::SamplingMethod::(__call__) Returning last
result
Thread-1129392::DEBUG::2013-05-07
16:44:20,610::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' got the operation mutex
Thread-1129392::DEBUG::2013-05-07
16:44:20,612::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo
-n /sbin/lvm vgs --config " devices { preferred_names = [\
\"^/dev/mapper/\\"
] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 filter = [ \\"r%.*%\\" ] }  global {
locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  back
up {  retain_min = 50  retain_days = 0 } " --noheadings --units b
--nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_fr
ee a4c43175-ce34-49a5-8608-cac573bf7647' (cwd None)
Thread-1129392::DEBUG::2013-05-07
16:44:20,759::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> =
'  Volume group "a4c43175-ce34-49a5-8608-cac573bf7647" not found\n'; <rc> =
 5
Thread-1129392::WARNING::2013-05-07
16:44:20,760::lvm::373::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
['  Volume group "a4c43175-ce34-49a5-8608-cac573bf7647" not found']
Thread-1129392::DEBUG::2013-05-07
16:44:20,760::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
reload operation' released the operation mutex
Thread-1129392::ERROR::2013-05-0

Re: [Users] v2v error

2013-05-07 Thread Michael Wagenknecht

Hi,
the folder images/ contains the virtual disks. Every disk is in a 
separate folder.

You can ignore the .meta file in the images folder.
The other one is the image file. You can check it with the command:
qemu-img info f89d8ca3-f65c-45a0-8f2e-df2f05014d0b

The folder master/vms/ contains a description file of the vm in 
the .ovf format.
When your VM uses more than one virtual disk, you can find the 
imgIds of the disks there.


Regards,   Michael


On 07.05.2013 13:24, supo...@logicworks.pt wrote:

Hi Michael,

Thanks. One question: my exported VM has the name Fedora; how can I 
identify it on the system?

In the export domain I have
drwxr-xr-x. 2 vdsm kvm 4096 Apr 12 17:07 dom_md
drwxr-xr-x. 3 vdsm kvm 4096 May  7 10:57 images
drwxr-xr-x. 4 vdsm kvm 4096 Apr 12 17:08 master

and in
/mnt/398da79f-53e1-43dd-8836-e58edd4de975/images/2a95635c-3e1b-45a6-be8c-fa66317b6475
I have
-rw-rw. 1 vdsm kvm 1073741824 May  7 10:59 
f89d8ca3-f65c-45a0-8f2e-df2f05014d0b
-rw-r--r--. 1 vdsm kvm 269 May  7 10:57 
f89d8ca3-f65c-45a0-8f2e-df2f05014d0b.meta


which one is the exported VM?


*From: *"Michael Wagenknecht" 
*To: *users@ovirt.org
*Sent: *Tuesday, 7 May 2013 7:33:49
*Subject: *Re: [Users] v2v error

Hi,
I had the same problem some weeks ago. After many hours of trial and
error I took another approach.
I made a new VM on the oVirt engine with the same parameters as the
KVM VM. The "Allocation Policy" of the virtual disk is especially
important (Preallocated for raw images and Thin Provision for qcow
images). Then I exported the VM and copied the image from the KVM
server to the oVirt export folder, overwriting the empty one. Please
check the permissions. Now you can import the VM.

I have migrated a lot of Linux and Windows VMs this way.
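The copy-and-permissions step above can be sketched as follows. All paths are placeholders: the temp files here merely stand in for the real source image and the empty exported volume, and uid/gid 36:36 is vdsm:kvm on a stock install (the chown is shown as a comment because it needs root):

```shell
# Stand-ins for the real files:
SRC=$(mktemp)   # e.g. the qcow2 image on the KVM host
DST=$(mktemp)   # e.g. <export>/images/<imgId>/<volId> in the export domain
# Overwrite the empty exported volume with the real image:
cp -f "$SRC" "$DST"
# vdsm must own and be able to read/write the file:
# chown 36:36 "$DST"
chmod 660 "$DST"
stat -c '%a' "$DST"
```

After this, the import in the webadmin UI should pick up the real disk contents instead of the empty placeholder.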

Regards,  Michael


Am 07.05.2013 01:52, schrieb supo...@logicworks.pt:

This is what I get if I run LIBGUESTFS_DEBUG=1 virt-v2v -i libvirt
-ic qemu+ssh://root@IP-Address/system -o rhev -os
nfs.domain.local:/ovirt/export -of qcow2 -oa sparse -n ovirtmgmt
VM_Name

Fedora14.qcow2: 100%
[===]D 0h59m31s
libguestfs: create: flags = 0, handle = 0x2d8fdd0
libguestfs: launch: attach-method=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfs4jk5Wh
libguestfs: launch: umask=0022
libguestfs: launch: euid=36
libguestfs: libvirt version = 10002 (0.10.2)
libguestfs: [0ms] connect to libvirt
libguestfs: opening libvirt handle: URI = NULL, auth =
virConnectAuthPtrDefault, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x2d7b690
libguestfs: [00164ms] get libvirt capabilities
libguestfs: [05465ms] parsing capabilities XML
libguestfs: [05467ms] build appliance
libguestfs: command: run: febootstrap-supermin-helper
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ -u 36
libguestfs: command: run: \ -g 36
libguestfs: command: run: \ -f checksum
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ x86_64
supermin helper [2ms] whitelist = (not specified), host_cpu =
x86_64, kernel = (null), initrd = (null), appliance = (null)
supermin helper [2ms] inputs[0] = /usr/lib64/guestfs/supermin.d
checking modpath /lib/modules/3.8.6-203.fc18.x86_64 is a directory
picked vmlinuz-3.8.6-203.fc18.x86_64 because modpath
/lib/modules/3.8.6-203.fc18.x86_64 exists
checking modpath /lib/modules/3.8.11-200.fc18.x86_64 is a directory
picked vmlinuz-3.8.11-200.fc18.x86_64 because modpath
/lib/modules/3.8.11-200.fc18.x86_64 exists
checking modpath /lib/modules/3.8.8-202.fc18.x86_64 is a directory
picked vmlinuz-3.8.8-202.fc18.x86_64 because modpath
/lib/modules/3.8.8-202.fc18.x86_64 exists
supermin helper [2ms] finished creating kernel
supermin helper [3ms] visiting /usr/lib64/guestfs/supermin.d
supermin helper [3ms] visiting
/usr/lib64/guestfs/supermin.d/base.img
supermin helper [00010ms] visiting
/usr/lib64/guestfs/supermin.d/daemon.img
supermin helper [00052ms] visiting
/usr/lib64/guestfs/supermin.d/hostfiles
supermin helper [00748ms] visiting
/usr/lib64/guestfs/supermin.d/init.img
supermin helper [00762ms] visiting
/usr/lib64/guestfs/supermin.d/udev-rules.img
supermin helper [00770ms] adding kernel modules
supermin helper [00898ms] finished creating appliance
libguestfs: checksum of existing appliance:
7705d920c064a722def20ac25b6fb162ceec1019efac541dacfea976a5540326
libguestfs: [06509ms] begin building supermin appliance
libguestfs: [06509ms] run supermin-helper
libguestfs: command: run: febootstrap-supermin-helper
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ -u 36
libguestfs: command: run: \ -g 36
libguestfs: command: run: \ --copy-kernel
lib

Re: [Users] v2v error

2013-05-07 Thread suporte
Hi Michael, 

Thanks. One question: my exported VM has the name Fedora, so how can I
identify it on the system? 
In the export domain I have 
drwxr-xr-x. 2 vdsm kvm 4096 Apr 12 17:07 dom_md 
drwxr-xr-x. 3 vdsm kvm 4096 May 7 10:57 images 
drwxr-xr-x. 4 vdsm kvm 4096 Apr 12 17:08 master 

and in 
/mnt/398da79f-53e1-43dd-8836-e58edd4de975/images/2a95635c-3e1b-45a6-be8c-fa66317b6475
 
I have 
-rw-rw. 1 vdsm kvm 1073741824 May 7 10:59 
f89d8ca3-f65c-45a0-8f2e-df2f05014d0b 
-rw-r--r--. 1 vdsm kvm 269 May 7 10:57 
f89d8ca3-f65c-45a0-8f2e-df2f05014d0b.meta 

which one is the exported VM? 

- Original Message -

From: "Michael Wagenknecht"  
To: users@ovirt.org 
Sent: Terça-feira, 7 de Maio de 2013 7:33:49 
Subject: Re: [Users] v2v error 

Hi, 
I had the same problem some weeks ago. After many hours of trial and error I
took another approach.
I made a new VM on the oVirt engine with the same parameters as the KVM VM.
The "Allocation Policy" of the virtual disk is especially important
(Preallocated for raw images and Thin Provision for qcow images). Then I
exported the VM and copied the image from the KVM server to the oVirt export
folder, overwriting the empty one. Please check the permissions. Now you can
import the VM.
I have migrated a lot of Linux and Windows VMs this way.

Regards, Michael 


Am 07.05.2013 01:52, schrieb supo...@logicworks.pt : 



This is what I get if I run LIBGUESTFS_DEBUG=1 virt-v2v -i libvirt -ic 
qemu+ssh://root@IP-Address/system -o rhev -os nfs.domain.local:/ovirt/export 
-of qcow2 -oa sparse -n ovirtmgmt VM_Name 



Fedora14.qcow2: 100% [===]D 
0h59m31s 
libguestfs: create: flags = 0, handle = 0x2d8fdd0 
libguestfs: launch: attach-method=libvirt 
libguestfs: launch: tmpdir=/tmp/libguestfs4jk5Wh 
libguestfs: launch: umask=0022 
libguestfs: launch: euid=36 
libguestfs: libvirt version = 10002 (0.10.2) 
libguestfs: [0ms] connect to libvirt 
libguestfs: opening libvirt handle: URI = NULL, auth = 
virConnectAuthPtrDefault, flags = 0 
libguestfs: successfully opened libvirt handle: conn = 0x2d7b690 
libguestfs: [00164ms] get libvirt capabilities 
libguestfs: [05465ms] parsing capabilities XML 
libguestfs: [05467ms] build appliance 
libguestfs: command: run: febootstrap-supermin-helper 
libguestfs: command: run: \ --verbose 
libguestfs: command: run: \ -u 36 
libguestfs: command: run: \ -g 36 
libguestfs: command: run: \ -f checksum 
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d 
libguestfs: command: run: \ x86_64 
supermin helper [2ms] whitelist = (not specified), host_cpu = x86_64, 
kernel = (null), initrd = (null), appliance = (null) 
supermin helper [2ms] inputs[0] = /usr/lib64/guestfs/supermin.d 
checking modpath /lib/modules/3.8.6-203.fc18.x86_64 is a directory 
picked vmlinuz-3.8.6-203.fc18.x86_64 because modpath 
/lib/modules/3.8.6-203.fc18.x86_64 exists 
checking modpath /lib/modules/3.8.11-200.fc18.x86_64 is a directory 
picked vmlinuz-3.8.11-200.fc18.x86_64 because modpath 
/lib/modules/3.8.11-200.fc18.x86_64 exists 
checking modpath /lib/modules/3.8.8-202.fc18.x86_64 is a directory 
picked vmlinuz-3.8.8-202.fc18.x86_64 because modpath 
/lib/modules/3.8.8-202.fc18.x86_64 exists 
supermin helper [2ms] finished creating kernel 
supermin helper [3ms] visiting /usr/lib64/guestfs/supermin.d 
supermin helper [3ms] visiting /usr/lib64/guestfs/supermin.d/base.img 
supermin helper [00010ms] visiting /usr/lib64/guestfs/supermin.d/daemon.img 
supermin helper [00052ms] visiting /usr/lib64/guestfs/supermin.d/hostfiles 
supermin helper [00748ms] visiting /usr/lib64/guestfs/supermin.d/init.img 
supermin helper [00762ms] visiting /usr/lib64/guestfs/supermin.d/udev-rules.img 
supermin helper [00770ms] adding kernel modules 
supermin helper [00898ms] finished creating appliance 
libguestfs: checksum of existing appliance: 
7705d920c064a722def20ac25b6fb162ceec1019efac541dacfea976a5540326 
libguestfs: [06509ms] begin building supermin appliance 
libguestfs: [06509ms] run supermin-helper 
libguestfs: command: run: febootstrap-supermin-helper 
libguestfs: command: run: \ --verbose 
libguestfs: command: run: \ -u 36 
libguestfs: command: run: \ -g 36 
libguestfs: command: run: \ --copy-kernel 
libguestfs: command: run: \ -f ext2 
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d 
libguestfs: command: run: \ x86_64 
libguestfs: command: run: \ /var/tmp/guestfs.uYx5Jb/kernel 
libguestfs: command: run: \ /var/tmp/guestfs.uYx5Jb/initrd 
libguestfs: command: run: \ /var/tmp/guestfs.uYx5Jb/root 
supermin helper [1ms] whitelist = (not specified), host_cpu = x86_64, 
kernel = /var/tmp/guestfs.uYx5Jb/kernel, initrd = 
/var/tmp/guestfs.uYx5Jb/initrd, appliance = /var/tmp/guestfs.uYx5Jb/root 
supermin helper [1ms] inputs[0] = /usr/lib64/guestfs/supermin.d 
checking modpath /lib/modules/3.8.6-203.fc18.x86_64 is a directory 
picked vmlinuz-3.8.6-203.fc18.x86_64 because modpath 
/lib/modules/3.8.6-203.fc18.x86_64 exists

Re: [Users] deploy without engine.conf.defaults?

2013-05-07 Thread Alon Bar-Lev


- Original Message -
> From: "lof yer" 
> To: users@ovirt.org
> Sent: Tuesday, May 7, 2013 1:28:32 PM
> Subject: [Users] deploy without engine.conf.defaults?
> 
> I compiled the latest source yesterday, but I ran into a problem deploying it.
> 
> I used old env-variable:
> ENGINE_DEFAULTS=/home/demo/gittest/virtfan-ovirt/backend/manager/conf/engine.conf.defaults
> and engine.conf.defaults had content like this:
> 
> ENGINE_USR=/usr/share/ovirt-engine
> ENGINE_ETC=/etc/ovirt-engine
> 
> When I ran standalone.sh, it just tells me that I need more variables like
> 
> ENGINE_PKI_ENGINE_STORE, etc.
> What should I add exactly?

Hello,

You should set these as well, to point to the PKI resources; at least
ENGINE_PKI.
Or better, try out the upcoming development environment[1], which sets all of
these for you.
In that environment you set up the product within your home directory; the
product is fully functional, JBoss runs with a proper environment as in
production, and you can use the JBoss debug port to debug the application.
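A minimal sketch of what setting these might look like. Only ENGINE_PKI_ENGINE_STORE (from the error) and ENGINE_PKI (from this reply) are confirmed names; the directory layout and file name below are assumptions, so check your build's engine.conf.defaults for the authoritative list:

```shell
# Assumed layout: a PKI directory alongside the development checkout.
export ENGINE_PKI=/home/demo/gittest/virtfan-ovirt-pki
export ENGINE_PKI_ENGINE_STORE=$ENGINE_PKI/keys/engine.p12
echo "$ENGINE_PKI_ENGINE_STORE"
```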

Regards,
Alon

[1] https://github.com/alonbl/ovirt-engine/blob/otopi/README.developer

> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] deploy without engine.conf.defaults?

2013-05-07 Thread lof yer
I compiled the latest source yesterday, but I ran into a problem deploying it.

I used old env-variable:
ENGINE_DEFAULTS=/home/demo/gittest/virtfan-ovirt/backend/manager/conf/engine.conf.defaults
and engine.conf.defaults had content like this:

ENGINE_USR=/usr/share/ovirt-engine
ENGINE_ETC=/etc/ovirt-engine

When I ran standalone.sh, it just tells me that I need more variables like

ENGINE_PKI_ENGINE_STORE, etc.
What should I add exactly?


[Users] booting an ovirt VW

2013-05-07 Thread Olivier Boulin

Hi,
I'm starting to investigate oVirt and I've got a question.

My goal is to have a classroom full of computers booting remotely into
virtual machines.

Is it possible with oVirt to boot directly into a remote VM, without a local
OS to go through the web portal?

Thanx


[Users] alias to user and admin portal

2013-05-07 Thread Andrej Bagon
Hi,

what is the best practice for making a hostname alias for the user and admin
web portals?
By default the admin portal is at
hostname/webadmin/webadmin/WebAdmin.html and the user portal at
hostname/UserPortal/org.ovirt.engine.ui.userportal.UserPortal/UserPortal.html

What are the best Apache directives to serve the applications at
adminportal.hostname and userportal.hostname?
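One possible shape for such an alias, as a sketch only: the RedirectMatch and ProxyPass directives are standard Apache, but the hostnames are the examples from the question, and whether you proxy or merely redirect (plus any mod_ssl handling) depends on the setup:

```shell
# Write an example vhost that maps the bare alias to the admin portal path.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<VirtualHost *:80>
    ServerName adminportal.hostname
    # Land "/" on the portal entry page:
    RedirectMatch ^/$ /webadmin/webadmin/WebAdmin.html
    # Forward everything else to the engine host (needs mod_proxy loaded):
    ProxyPass        / http://hostname/
    ProxyPassReverse / http://hostname/
</VirtualHost>
EOF
grep -c 'ProxyPass' "$CONF"
```

A second vhost for userportal.hostname would follow the same pattern with the UserPortal path.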

Thank you.

Best Regards,
Andrej