Re: [ovirt-users] Reattach export Domain failed

2014-12-04 Thread Punit Dambiwal
Hi,

I solved the problem by matching the metadata checksum...
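For anyone hitting the same error: the storage domain metadata carries a SHA-1 checksum of its own key/value lines, and "matching the checksum" means recomputing it after a manual metadata edit. The sketch below shows the general idea only; the `_SHA_CKSUM` key name and the exclusion of that line from the digest are assumptions on my part, so verify against the persistentDict code in your VDSM version before touching metadata by hand.

```python
import hashlib

CKSUM_TAG = "_SHA_CKSUM"  # assumed key name used in the domain metadata file

def compute_metadata_cksum(lines):
    """Recompute the SHA-1 over all metadata lines except the checksum line.

    Approximates how VDSM seals storage domain metadata; check your VDSM
    version's persistentDict before editing metadata manually.
    """
    digest = hashlib.sha1()
    for line in lines:
        if line.startswith(CKSUM_TAG):
            continue  # the stored checksum is not part of the digest
        digest.update(line.encode("utf-8"))
    return digest.hexdigest()

# Example: metadata lines as read from the export domain's metadata file
metadata = [
    "CLASS=Backup",
    "ROLE=Regular",
    "VERSION=3",
    "_SHA_CKSUM=02ff046f33562d487e8df3758445c6ed45ecddca",
]
print(compute_metadata_cksum(metadata))  # 40-character hex digest
```

If the recomputed value differs from the stored `cksum`, the metadata was edited without resealing, which is exactly the "Meta Data seal is broken" condition above.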

On Fri, Dec 5, 2014 at 12:05 PM, Punit Dambiwal  wrote:

> Hi,
>
> I tried to reattach the export domain in the datacenter, but it
> failed with the following error:
>
> IrsOperationFailedNoFailoverException: IRSGenericException:
> IRSErrorException: Meta Data seal is broken (checksum mismatch): 'cksum =
> 02ff046f33562d487e8df3758445c6ed45ecddca, computed_cksum =
> 71eb588ec5683be9f22ae8ca90ac599a5d871fda'
>
> I pasted the engine logs here: http://ur1.ca/iznd0
>
> Thanks,
> punit
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Reattach export Domain failed

2014-12-04 Thread Punit Dambiwal
Hi,

I tried to reattach the export domain in the datacenter, but it failed with
the following error:

IrsOperationFailedNoFailoverException: IRSGenericException:
IRSErrorException: Meta Data seal is broken (checksum mismatch): 'cksum =
02ff046f33562d487e8df3758445c6ed45ecddca, computed_cksum =
71eb588ec5683be9f22ae8ca90ac599a5d871fda'

I pasted the engine logs here: http://ur1.ca/iznd0

Thanks,
punit


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread Amador Pahim

On 12/04/2014 02:54 PM, mad Engineer wrote:

Sorry, I was wrong; it's the data domain that stores virtual disk images. I
have no idea how oVirt shares a block device across hosts. Looks like I
need to try that to understand how it's implemented. In the case of
VMware, they use the VMFS file system for sharing a block device among
hosts. I currently have no issues with my NFS shared storage, so I don't
have any plan to use iSCSI, but I'm just curious how the implementation
works.


It's not shared at the lowest level. Each disk corresponds to an LVM
logical volume, which is activated only on the specific hypervisor
running the VM. Other hypervisors cannot access the logical volume
while the VM is not running on them (except for shared disks). To activate
the logical volume (in order to run the VM), the hypervisor runs "lvchange
-ay vol_group/logical_vol".
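To make the mechanics concrete, here is a small illustrative sketch (not actual oVirt/VDSM code) of how a host-side agent could build the activation and deactivation commands for a disk's logical volume; the VG/LV names are hypothetical.

```python
def lv_command(action, vg, lv):
    """Build the lvchange command a host would run to (de)activate a VM disk LV.

    action: "activate" before starting the VM on this host,
            "deactivate" after the VM stops or migrates away.
    """
    flag = {"activate": "-ay", "deactivate": "-an"}[action]
    return ["lvchange", flag, f"{vg}/{lv}"]

# Hypothetical names; in oVirt the VG maps to the storage domain and the
# LV to the disk image.
print(lv_command("activate", "vol_group", "logical_vol"))
# -> ['lvchange', '-ay', 'vol_group/logical_vol']
```

A real host would pass such a command to subprocess with root privileges; the point is only that activation is a per-host, per-LV operation, which is why no cluster filesystem is needed.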



In my existing non-oVirt environment, SAN block devices are exported
and shared using GFS. Is oVirt using any similar method to achieve a
shared block device?



On Thu, Dec 4, 2014 at 9:10 PM, mad Engineer  wrote:

Thanks Gianluca, it says only NFS is supported as an export domain. I
believe the export domain is the one I am currently using for live
migrating VMs across hosts, so iSCSI is not supported; please correct me
if I am wrong. Thanks for your help.

On Thu, Dec 4, 2014 at 9:00 PM, Gianluca Cecchi
 wrote:

On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
wrote:

Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
shareable across multiple hosts. Could you tell me how it is achieved
in oVirt?

Regards


First step the admin guide:
http://www.ovirt.org/OVirt_Administration_Guide

and in particular the Storage chapter and the related iSCSI part:
http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage

Also, for multipath in case you need it:
http://www.ovirt.org/Feature/iSCSI-Multipath

HIH,
Gianluca





Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread Dan Yasny
On Thu, Dec 4, 2014 at 12:54 PM, mad Engineer 
wrote:

> Sorry, I was wrong; it's the data domain that stores virtual disk images.
> I have no idea how oVirt shares a block device across hosts. Looks like I
> need to try that to understand how it's implemented. In the case of
> VMware, they use the VMFS file system for sharing a block device among
> hosts. I currently have no issues with my NFS shared storage, so I don't
> have any plan to use iSCSI, but I'm just curious how the implementation
> works. In my existing non-oVirt environment, SAN block devices are
> exported and shared using GFS. Is oVirt using any similar method to
> achieve a shared block device?
>
>
oVirt uses a different approach altogether (this is why it can scale to
clusters with hundreds of hosts, unlike VMware). The idea is pretty simple:
1. Format the shared block storage with LVM.
2. Every VM disk image is an LV.
3. Manage host access to individual LVs with a simple lvchange.

Of course there's much more to the way this is implemented, but very simply:
no two hosts have write access to the same LV unless it is specified as
a shared disk, so no clustered FS is needed, no SCSI reservations are
needed, and this concept scales, unlike VMFS, GFS, and other such systems.
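As a conceptual illustration only (this is not how the engine actually tracks activations), the exclusivity rule above can be modeled as: an LV may be held by at most one host at a time, unless the disk is marked shared.

```python
class LvAccessModel:
    """Toy model of the rule: at most one host may hold a non-shared LV."""

    def __init__(self, shared_lvs=()):
        self.shared = set(shared_lvs)
        self.holders = {}  # lv name -> set of hosts that have it activated

    def activate(self, host, lv):
        holders = self.holders.setdefault(lv, set())
        if lv not in self.shared and holders and host not in holders:
            # A second host asking for an exclusive LV is refused.
            raise RuntimeError(f"{lv} is already active on {holders}")
        holders.add(host)

    def deactivate(self, host, lv):
        self.holders.get(lv, set()).discard(host)

m = LvAccessModel(shared_lvs={"shared_disk"})
m.activate("host1", "vm_disk")        # exclusive activation before VM start
m.activate("host1", "shared_disk")
m.activate("host2", "shared_disk")    # allowed: shared disks are multi-attach
m.deactivate("host1", "vm_disk")      # VM stopped / migrated away
m.activate("host2", "vm_disk")        # now the other host may take it
```

The engine enforces this ordering during migration: the destination only activates the LV after the source releases it.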




>
>
> On Thu, Dec 4, 2014 at 9:10 PM, mad Engineer 
> wrote:
> > Thanks Gianluca, it says only NFS is supported as an export domain. I
> > believe the export domain is the one I am currently using for live
> > migrating VMs across hosts, so iSCSI is not supported; please correct me
> > if I am wrong. Thanks for your help.
> >
> > On Thu, Dec 4, 2014 at 9:00 PM, Gianluca Cecchi
> >  wrote:
> >> On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
> >> wrote:
> >>>
> >>> Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
> >>> shareable across multiple hosts. Could you tell me how it is achieved
> >>> in oVirt?
> >>>
> >>> Regards
> >>>
> >>
> >> First step the admin guide:
> >> http://www.ovirt.org/OVirt_Administration_Guide
> >>
> >> and in particular the Storage chapter and the related iSCSI part:
> >> http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage
> >>
> >> Also, for multipath in case you need it:
> >> http://www.ovirt.org/Feature/iSCSI-Multipath
> >>
> >> HIH,
> >> Gianluca
>


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread mad Engineer
Sorry, I was wrong; it's the data domain that stores virtual disk images. I
have no idea how oVirt shares a block device across hosts. Looks like I
need to try that to understand how it's implemented. In the case of
VMware, they use the VMFS file system for sharing a block device among
hosts. I currently have no issues with my NFS shared storage, so I don't
have any plan to use iSCSI, but I'm just curious how the implementation
works. In my existing non-oVirt environment, SAN block devices are
exported and shared using GFS. Is oVirt using any similar method to
achieve a shared block device?



On Thu, Dec 4, 2014 at 9:10 PM, mad Engineer  wrote:
> Thanks Gianluca, it says only NFS is supported as an export domain. I
> believe the export domain is the one I am currently using for live
> migrating VMs across hosts, so iSCSI is not supported; please correct me
> if I am wrong. Thanks for your help.
>
> On Thu, Dec 4, 2014 at 9:00 PM, Gianluca Cecchi
>  wrote:
>> On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
>> wrote:
>>>
>>> Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
>>> shareable across multiple hosts. Could you tell me how it is achieved
>>> in oVirt?
>>>
>>> Regards
>>>
>>
>> First step the admin guide:
>> http://www.ovirt.org/OVirt_Administration_Guide
>>
>> and in particular the Storage chapter and the related iSCSI part:
>> http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage
>>
>> Also, for multipath in case you need it:
>> http://www.ovirt.org/Feature/iSCSI-Multipath
>>
>> HIH,
>> Gianluca


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread Amador Pahim

On 12/04/2014 12:40 PM, mad Engineer wrote:

Thanks Gianluca, it says only NFS is supported as an export domain.


That's right.


I believe the export domain is the one I am currently using for live
migrating VMs across hosts.


That's not true. You're not using the Export Domain to support live
migrations. Your VMs are running in a regular Storage Domain (not an Export
Domain), probably NFS, though it could also be iSCSI.



So iSCSI is not supported; please correct me
if I am wrong. Thanks for your help.

On Thu, Dec 4, 2014 at 9:00 PM, Gianluca Cecchi
 wrote:

On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
wrote:

Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
shareable across multiple hosts. Could you tell me how it is achieved
in oVirt?

Regards


First step the admin guide:
http://www.ovirt.org/OVirt_Administration_Guide

and in particular the Storage chapter and the related iSCSI part:
http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage

Also, for multipath in case you need it:
http://www.ovirt.org/Feature/iSCSI-Multipath

HIH,
Gianluca





Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread Nicolas Ecarnot

On 04/12/2014 16:40, mad Engineer wrote:

Thanks Gianluca, it says only NFS is supported as an export domain. I
believe the export domain is the one I am currently using for live
migrating VMs across hosts, so iSCSI is not supported; please correct me
if I am wrong. Thanks for your help.


Sorry I did not read you were speaking about _EXPORT_ domain.

Then indeed, you're right, iSCSI is not supported for export domains.

In our case, we are providing some iSCSI LUNs as export domains via the
managers, which serve them as NFS shares.

Performance is OK.



On Thu, Dec 4, 2014 at 9:00 PM, Gianluca Cecchi
 wrote:

On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
wrote:


Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
shareable across multiple hosts. Could you tell me how it is achieved
in oVirt?

Regards



First step the admin guide:
http://www.ovirt.org/OVirt_Administration_Guide

and in particular the Storage chapter and the related iSCSI part:
http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage

Also, for multipath in case you need it:
http://www.ovirt.org/Feature/iSCSI-Multipath

HIH,
Gianluca





--
Nicolas Ecarnot


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread mad Engineer
Thanks Gianluca, it says only NFS is supported as an export domain. I
believe the export domain is the one I am currently using for live
migrating VMs across hosts, so iSCSI is not supported; please correct me
if I am wrong. Thanks for your help.

On Thu, Dec 4, 2014 at 9:00 PM, Gianluca Cecchi
 wrote:
> On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
> wrote:
>>
>> Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
>> shareable across multiple hosts. Could you tell me how it is achieved
>> in oVirt?
>>
>> Regards
>>
>
> First step the admin guide:
> http://www.ovirt.org/OVirt_Administration_Guide
>
> and in particular the Storage chapter and the related iSCSI part:
> http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage
>
> Also, for multipath in case you need it:
> http://www.ovirt.org/Feature/iSCSI-Multipath
>
> HIH,
> Gianluca


Re: [ovirt-users] Seperate ovirt-user-portal

2014-12-04 Thread Alexander Wels
On Thursday, December 04, 2014 11:49:00 AM Demeter Tibor wrote:
> Hi,
> 
> I would like to grant access to the ovirt-user-portal web UI for external users.
> Could I separate this from the oVirt portal server?
> I was just wondering, maybe a possible way is to create a virtual
> machine for the user portal with an external IP address that the users could
> connect to. I don't want to grant access to the whole portal server
> directly.
> 
> Is it possible?
> 
> Thanks in advance,
> 
> Tibor

Why not simply put a proxy in front for your external users that points to the URL of
the user portal? That should be relatively easy with Apache and mod_proxy, or
your proxy of choice.

I am assuming you are simply trying to prevent users from changing the URL and
seeing the webadmin URL, or the welcome page.
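A minimal sketch of the mod_proxy approach, assuming the default user portal path of an oVirt 3.x engine (`/ovirt-engine/userportal`); the hostnames are hypothetical, and you should verify the portal URL on your own engine and add proper SSL certificate handling:

```apache
# Hypothetical reverse proxy exposing ONLY the user portal to external users.
# engine.internal.example.com and the /ovirt-engine/userportal path are
# assumptions -- check the actual portal URL on your engine first.
<VirtualHost *:443>
    ServerName userportal.example.com
    SSLProxyEngine on
    ProxyPass        /ovirt-engine/userportal https://engine.internal.example.com/ovirt-engine/userportal
    ProxyPassReverse /ovirt-engine/userportal https://engine.internal.example.com/ovirt-engine/userportal
    # No rule for /ovirt-engine/webadmin or the welcome page, so external
    # users cannot reach them through this proxy.
</VirtualHost>
```

The design point is that the proxy whitelists one path instead of exposing the whole engine host.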


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread Gianluca Cecchi
On Thu, Dec 4, 2014 at 4:23 PM, mad Engineer 
wrote:

> Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
> shareable across multiple hosts. Could you tell me how it is achieved
> in oVirt?
>
> Regards
>
>
First step the admin guide:
http://www.ovirt.org/OVirt_Administration_Guide

and in particular the Storage chapter and the related iSCSI part:
http://www.ovirt.org/OVirt_Administration_Guide#.E2.81.A0Storage

Also, for multipath in case you need it:
http://www.ovirt.org/Feature/iSCSI-Multipath

HIH,
Gianluca


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread Nicolas Ecarnot

On 04/12/2014 16:23, mad Engineer wrote:

Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
shareable across multiple hosts. Could you tell me how it is achieved
in oVirt?


When using iSCSI, the disks are not shared.
You have to make sure your storage infrastructure allows multiple
connections to the iSCSI LUN.

But there is nothing to do about sharing at the block level.
When migrating data in an iSCSI environment, from what I've
witnessed, blocks are dd'ed (so this takes time) from the source SD to
the target.


I guess that's where GlusterFS adds some benefit.



Regards

On Thu, Dec 4, 2014 at 5:31 PM, Maor Lipchuk  wrote:



- Original Message -

From: "mad Engineer" 
To: users@ovirt.org
Sent: Thursday, December 4, 2014 11:26:32 AM
Subject: [ovirt-users] shared storage with iscsi

Hello All,
 I am using NFS as shared storage and it is working fine; I am able
to migrate instances across nodes.
Is it possible to use an iSCSI backend to achieve the same, i.e. shared
iSCSI? [I am not able to find a way to do shared iSCSI across hosts.]
Can someone help with shared iSCSI storage for live migration of VMs
across hosts?

Thanks



Hi,

What do you mean by shared iSCSI?
If you created a new iSCSI Storage Domain in the Data Center and it is active,
then all the hosts should see it while it is Active.

Regards,
Maor





--
Nicolas Ecarnot


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread mad Engineer
Thanks Maor, in a non-oVirt KVM setup I use GFS2 to make the block device
shareable across multiple hosts. Could you tell me how it is achieved
in oVirt?

Regards

On Thu, Dec 4, 2014 at 5:31 PM, Maor Lipchuk  wrote:
>
>
> - Original Message -
>> From: "mad Engineer" 
>> To: users@ovirt.org
>> Sent: Thursday, December 4, 2014 11:26:32 AM
>> Subject: [ovirt-users] shared storage with iscsi
>>
>> Hello All,
>> I am using NFS as shared storage and it is working fine; I am able
>> to migrate instances across nodes.
>> Is it possible to use an iSCSI backend to achieve the same, i.e. shared
>> iSCSI? [I am not able to find a way to do shared iSCSI across hosts.]
>> Can someone help with shared iSCSI storage for live migration of VMs
>> across hosts?
>>
>> Thanks
>>
>
> Hi,
>
> What do you mean by shared iSCSI?
> If you created a new iSCSI Storage Domain in the Data Center and it is active,
> then all the hosts should see it while it is Active.
>
> Regards,
> Maor


Re: [ovirt-users] shared storage with iscsi

2014-12-04 Thread Maor Lipchuk


- Original Message -
> From: "mad Engineer" 
> To: users@ovirt.org
> Sent: Thursday, December 4, 2014 11:26:32 AM
> Subject: [ovirt-users] shared storage with iscsi
> 
> Hello All,
> I am using NFS as shared storage and it is working fine; I am able
> to migrate instances across nodes.
> Is it possible to use an iSCSI backend to achieve the same, i.e. shared
> iSCSI? [I am not able to find a way to do shared iSCSI across hosts.]
> Can someone help with shared iSCSI storage for live migration of VMs
> across hosts?
> 
> Thanks
> 

Hi,

What do you mean by shared iSCSI?
If you created a new iSCSI Storage Domain in the Data Center and it is active,
then all the hosts should see it while it is Active.

Regards,
Maor


[ovirt-users] Seperate ovirt-user-portal

2014-12-04 Thread Demeter Tibor
Hi, 

I would like to grant access to the ovirt-user-portal web UI for external users.
Could I separate this from the oVirt portal server?
I was just wondering, maybe a possible way is to create a virtual machine
for the user portal with an external IP address that the users could connect to.
I don't want to grant access to the whole portal server directly.

Is it possible? 

Thanks in advance, 

Tibor 



Re: [ovirt-users] [QE][ACTION REQUIRED] oVirt 3.6.0 status

2014-12-04 Thread Sandro Bonazzola
Il 04/12/2014 09:19, Sven Kieske ha scritto:
> 
> 
> On 04/12/14 02:24, Robert Story wrote:
>> On Wed, 03 Dec 2014 10:37:19 +0100 Sandro wrote:
>> SB> Two different proposals have been made about the above scheduling [3]:
>> SB> 1) extend the cycle to 10 months to allow including a large feature set
>> SB> 2) reduce the cycle to less than 6 months and split features over 3.6 and 3.7
> 
>> I'd prefer a six-month cycle, so that the smaller features and
>> enhancements come more quickly.
> 
> If I read Sandro correctly, your choice is not among the given alternatives?
> Would this be a third option: to neither decrease nor increase the
> release cycle?

Keeping the current cycle is what we'll have if we don't reach an agreement :-)
Let's wait a couple of weeks to gather the full feature list and get a
better picture of how much time it will take to get them in.

> 
> 

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] Backup solution using the API

2014-12-04 Thread Liron Aravot


- Original Message -
> From: "Thomas Keppler (PEBA)" 
> To: "Liron Aravot" 
> Sent: Thursday, December 4, 2014 12:01:28 PM
> Subject: Re: [ovirt-users] Backup solution using the API
> 
> The Export Phase.
> 
There's no export phase; you create a snapshot and then you back up the data
as it was at the time of the snapshot.
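As a rough sketch of that flow against the oVirt 3.x REST API, creating the snapshot is a single POST; you would then attach the snapshot's disks to a backup appliance VM as described on the Backup-Restore API feature page. The endpoint layout and XML shape below are from memory and should be verified against your engine's `/api`; the engine hostname and VM ID are hypothetical.

```python
# Sketch only: verify the endpoint against your engine's /api documentation.
VM_ID = "hypothetical-vm-uuid"

def snapshot_request(engine_base, vm_id, description):
    """Return the (url, xml_body) pair for a 'create snapshot' POST."""
    url = f"{engine_base}/api/vms/{vm_id}/snapshots"
    body = f"<snapshot><description>{description}</description></snapshot>"
    return url, body

url, body = snapshot_request("https://engine.example.com", VM_ID, "nightly-backup")
print(url)
# Then, for example, with the requests library:
#   requests.post(url, data=body, auth=(user, password),
#                 headers={"Content-Type": "application/xml"}, verify=ca_file)
```

After backing up the data, the snapshot is deleted the same way (a DELETE on the snapshot's href), so the VM never has to be halted.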
> --
> Best regards
> Thomas Keppler
> 
> > Am 04.12.2014 um 10:54 schrieb Liron Aravot :
> > 
> > 
> > 
> > - Original Message -
> >> From: "Thomas Keppler (PEBA)" 
> >> To: "Liron Aravot" 
> >> Sent: Thursday, December 4, 2014 11:41:13 AM
> >> Subject: Re: [ovirt-users] Backup solution using the API
> >> 
> >> This requires you to halt the VM, therefore it's not usable for us.
> > What phase requires halting the VM?
> >> --
> >> Best regards
> >> Thomas Keppler
> >> 
> >>> Am 04.12.2014 um 08:51 schrieb Liron Aravot :
> >>> 
> >>> Hi Thomas/plysan
> >>> oVirt has the Backup/Restore API, designed to provide the capabilities
> >>> to back up and restore a VM.
> >>> see here:
> >>> http://www.ovirt.org/Features/Backup-Restore_API_Integration
> >>> 
> >>> - Original Message -
>  From: "plysan" 
>  To: "Thomas Keppler (PEBA)" 
>  Cc: users@ovirt.org
>  Sent: Thursday, December 4, 2014 3:51:24 AM
>  Subject: Re: [ovirt-users] Backup solution using the API
>  
>  Hi,
>  
>  For the live-backup, I think you can make a live snapshot of the VM,
>  then clone a new VM from that snapshot; after that you can do the export.
>  
>  2014-11-27 23:12 GMT+08:00 Keppler, Thomas (PEBA) <
>  thomas.kepp...@kit.edu
>  :
>  
>  
>  
>  Hello,
>  
>  now that our oVirt cluster runs great, I'd like to know if anybody has
>  done a backup solution for oVirt before.
>  
>  My basic idea goes as follows:
>  - Create a JSON file with the machine's preferences for later restore,
>  create a snapshot, snatch the disk by its disk-id (copy it to the
>  fileserver), then delete the snapshot and be done with it.
>  - On restore I'd just create a new VM with the disks and NICs specified
>  in the preferences; then I'd go ahead and put back the disk...
>  
>  I've played a little bit around with building a JSON file and so far it
>  works
>  great; I haven't tried to make a VM with that, though...
>  
>  Using the export domain or a simple export command is not what I want,
>  since you'd have to turn off the machine to do that - AFAIK. Correct me
>  if that should not be true.
>  
>  Now, before I go to any more hassle, has somebody else done a
>  live-backup solution for oVirt? Are there any recommendations? Thanks
>  for any help provided!
>  
>  Best regards
>  Thomas Keppler
>  
>  
>  
>  
> >> 
> 


[ovirt-users] shared storage with iscsi

2014-12-04 Thread mad Engineer
Hello All,
I am using NFS as shared storage and it is working fine; I am able
to migrate instances across nodes.
Is it possible to use an iSCSI backend to achieve the same, i.e. shared
iSCSI? [I am not able to find a way to do shared iSCSI across hosts.]
Can someone help with shared iSCSI storage for live migration of VMs
across hosts?

Thanks


Re: [ovirt-users] How to stick VM to hosts?

2014-12-04 Thread Arman Khalatyan
I just found a very simple solution by adding a dummy interface on the host,
and the same on the VMs. So only the hosts with the dummy interface are able
to run the VMs. It works even with HA VMs.
For sure it is a hack and not an elegant solution, but it works. :)
a.
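For reference, the external-scheduler route discussed further down the thread boils down to a filter that drops the excluded hosts. The plugin wiring that ovirt-scheduler-proxy expects (class names, method signatures) is deliberately not shown here and should be taken from the External_Scheduler_Samples page; this sketch covers only the core logic, with hypothetical host names.

```python
# Core of an external-scheduler filter: keep c1-c8 but drop c2 and c4.
# How this function is registered with ovirt-scheduler-proxy (plugin class,
# method names) is left out -- see External_Scheduler_Samples for the wiring.
EXCLUDED_HOSTS = {"c2", "c4"}  # hosts this VM must never run on

def filter_hosts(candidate_hosts, excluded=EXCLUDED_HOSTS):
    """Return only the hosts the engine may schedule the VM on."""
    return [h for h in candidate_hosts if h not in excluded]

print(filter_hosts([f"c{i}" for i in range(1, 9)]))
# -> ['c1', 'c3', 'c5', 'c6', 'c7', 'c8']
```

Compared with the dummy-interface hack, this keeps the constraint in scheduling policy rather than in network configuration, at the cost of running the extra proxy service.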


***

Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany

***


On Sun, Nov 30, 2014 at 3:33 PM, Doron Fediuck  wrote:

> Just for the record, affinity deals with the relation
> between VMs, and pinning is about the relation between
> a VM and a host.
>
> We're considering extending affinity to handle hosts
> as well, but it's not a trivial change.
>
> Doron
>
> - Original Message -
> > From: "Arman Khalatyan" 
> > To: "Itamar Heim" 
> > Cc: "users" 
> > Sent: Sunday, November 30, 2014 3:30:19 PM
> > Subject: Re: [ovirt-users] How to stick VM to hosts?
> >
> > Thanks for the ideas.
> > I will try http://www.ovirt.org/External_Scheduler_Samples; it looks nice.
> > Itamar, VM affinity selects the VMs, but not the hosts.
> > It would be nice in the next release to add some host pinning options as
> > well.
> > Cheers,
> > Arman.
> >
> >
> > ***
> > Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
> Astrophysik
> > Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
> > ***
> >
> > On Sun, Nov 30, 2014 at 2:20 PM, Itamar Heim < ih...@redhat.com > wrote:
> >
> >
> > On 11/30/2014 12:58 PM, Artyom Lukianov wrote:
> >
> >
> > I am sure you know about VM pinning, but that only promises you can run
> > the VM on one specific host; in your case you want to run the VM on some
> > range of hosts, excluding some specific hosts. At the moment oVirt doesn't
> > have such a policy, but you can write your own filter for the scheduler
> > proxy, which will filter out all hosts except the ones that you need:
> > http://www.ovirt.org/Features/oVirt_External_Scheduling_Proxy
> > http://www.ovirt.org/External_Scheduler_Samples
> >
> > You need to install the package ovirt-scheduler-proxy and also enable it via
> > engine-config -s ExternalSchedulerEnabled=True.
> > You can also find additional info under /usr/share/doc/ovirt-scheduler-proxy*.
> >
> > Why? Since 3.4 we have affinity support out of the box:
> > http://www.ovirt.org/Features/VM-Affinity
> >
> >
> >
> >
> >
> > I hope it will help you.
> > Thanks
> >
> > - Original Message -
> > From: "Arman Khalatyan" < arm2...@gmail.com >
> > To: "users" < users@ovirt.org >
> > Sent: Friday, November 28, 2014 11:51:40 AM
> > Subject: [ovirt-users] How to stick VM to hosts?
> >
> > Hello,
> > I have 2 VMs with negative affinity.
> > I am looking for some way to force the VMs to run on different selected
> > hosts.
> > Assuming I have hosts c1-c8: can I tell a VM to run on c1-c8 but not c2
> > and c4?
> > Thanks,
> > Arman.
> >
> > ***
> > Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
> Astrophysik
> > Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
> > ***
> >
> >
> >
> >
> >
> >
>


Re: [ovirt-users] Storage domain issue

2014-12-04 Thread Koen Vanoppen
Never mind. I installed a new, updated Brocade driver on the hypervisor and
restarted the engine; after 10 minutes, the engine restored everything
itself :-)

2014-12-04 9:04 GMT+01:00 Koen Vanoppen :

> Dear All,
>
> After we updated our hypervisors to :
> vdsm-4.16.7-1.gitdb83943.el6.x86_64
> vdsm-python-4.16.7-1.gitdb83943.el6.noarch
> vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
> vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
>
> We don't have access to our storage domain anymore. It just doesn't see
> the disks anymore...
>
> I added the vdsm log and engine log. I'm clueless... Before, it did see the
> storage domains; now, after the system update, it doesn't anymore...
>
> Thanks in advance,
>
> Kind regards,
> Koen
>


Re: [ovirt-users] [QE][ACTION REQUIRED] oVirt 3.6.0 status

2014-12-04 Thread Sven Kieske



On 04/12/14 02:24, Robert Story wrote:
> On Wed, 03 Dec 2014 10:37:19 +0100 Sandro wrote:
> SB> Two different proposals have been made about the above scheduling [3]:
> SB> 1) extend the cycle to 10 months to allow including a large feature set
> SB> 2) reduce the cycle to less than 6 months and split features over 3.6 and 3.7
> 
> I'd prefer a six-month cycle, so that the smaller features and
> enhancements come more quickly.

If I read Sandro correctly, your choice is not among the given alternatives?
Would this be a third option: to neither decrease nor increase the
release cycle?

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
Oeynhausen


Re: [ovirt-users] ovirt-guest-agent on windows : what Python env. needed?

2014-12-04 Thread Karli Sjöberg
On Wed, 2014-12-03 at 07:44 -0500, Lev Veyde wrote:
> Hi Nicolas,
> 
> If the agent is compiled with py2exe (and since you got .exe files it means it
> was compiled with py2exe), then the executables are self-contained, and you
> don't need to install Python separately in each VM.
> 
> All you need is to download and install VC runtime, which you can download 
> from here:
> http://www.microsoft.com/en-us/download/details.aspx?id=5582
> 
> That should resolve the issue.
> 
> BTW, we have oVirt WGT (Windows Guest Tools) RPM, with ISO which contains the 
> installer that will install the oVirt Guest Agent (including VC Runtime), as 
> well as drivers etc. automatically for you.
> 
> Thanks in advance,
> Lev Veyde.
> 
> - Original Message -
> From: "Sandro Bonazzola" 
> To: "Lev Veyde" 
> Sent: Wednesday, December 3, 2014 1:49:00 PM
> Subject: Fwd: [ovirt-users] ovirt-guest-agent on windows : what Python env. 
> needed?
> 
> 
> 
> 
>   Messaggio Inoltrato 
> Oggetto: [ovirt-users] ovirt-guest-agent on windows : what Python env. needed?
> Data: Wed, 03 Dec 2014 11:49:06 +0100
> Mittente: Nicolas Ecarnot 
> Organizzazione: Si peu...
> A: Users@ovirt.org 
> 
> Hello,
> 
> I read the following page :
> http://www.ovirt.org/OVirt_Guest_Agent_For_Windows
> and applied it on a server, and it ran very well.
> 
> I obtained the two executables, copied them into "program files"
> according to the doc, along with the .ini as stated here :
> https://www.mail-archive.com/users@ovirt.org/msg18561.html
> 
> - the "-install", the start, and the enabling went fine
> - rebooting the server runs OK too, and the agent is seen by oVirt
> 
> What I don't understand is the following sentence of
> https://github.com/oVirt/ovirt-guest-agent/blob/master/ovirt-guest-agent/README-windows.txt
> 
> "Optionally install py2exe if you want to build an executable file which
> doesn't require Python installation for running"
> 
> As I don't know python at all, I thought this was building some sort of
> "self-executable" binary that I could copy-paste into another VM, and do
> the same install/enable/run.
> 
> And WITHOUT installing any Python environnement.
> 
> I'm sorry for such a weak question, but if this is not the case, does
> that mean I have to install a Python env on each of my Windows VMs?
> 
> BTW, I tried to copy-paste the programfiles/guestagent... into another
> server, and when running the install, it gives a message
> [in French :( ]
> L'application n'a pas pu démarrer car sa configuration côte-à-côte est
> incorrecte.
> That could be translated as:
> "The application could not start because its side-by-side configuration is
> incorrect."
> 
> PS: This is not the time to launch a Windows-vs-Linux war, but I just
> installed the guest agent on 17 _Linux_ VMs in 3 minutes...
> 

Haven't tried what Lev said, using the new ISO, but as the person who
wrote the wiki page: yes, you need to install _either_ Python or, as Lev
said, the VC runtime for the exe's to work on each guest.

In practice, though, you create a template with this installed _one_ time,
then spawn the rest of your VMs from that. But when converting lots of
VMs that came from another virtualization platform, you'd need the VC
runtime (or Python) and the exe's installed in each VM for the guest agent
to work.

Do you know there's a third-party package manager for Windows called
"Chocolatey", kind of like yum for Windows? That together with Puppet
(or plain old SCCM) makes Windows management a lot easier.

/K


-- 

Med Vänliga Hälsningar

---
Karli Sjöberg
Swedish University of Agricultural Sciences Box 7079 (Visiting Address
Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone:  +46-(0)18-67 15 66
karli.sjob...@slu.se
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Storage domain issue

2014-12-04 Thread Koen Vanoppen
Dear All,

After we updated our hypervisors to :
vdsm-4.16.7-1.gitdb83943.el6.x86_64
vdsm-python-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-cli-4.16.7-1.gitdb83943.el6.noarch

We don't have access to our storage domain anymore. It just doesn't see the
disks anymore...

I added the vdsm log and engine log. I'm clueless... Before the system
update it did see the storage domains; now it doesn't anymore...

Thanks in advance,

Kind regards,
Koen
[root@ovirtmgmt01prod ~]# tail -f /var/log/ovirt-engine/engine.log
2014-12-04 08:58:50,192 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-96) Command HSMGetAllTasksStatusesVDSCommand(HostName = mercury2, HostId = dfddc678-f8ee-45eb-897a-885c83de870e) execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM: ()
2014-12-04 08:59:20,247 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-96) Command ConnectStoragePoolVDSCommand(HostName = mercury2, HostId = dfddc678-f8ee-45eb-897a-885c83de870e, vdsId = dfddc678-f8ee-45eb-897a-885c83de870e, storagePoolId = 1d03dc05-008b-4d14-97ce-b17bd714183d, masterVersion = 8) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2014-12-04 08:59:20,248 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-96) IrsBroker::Failed::GetStoragePoolInfoVDS due to: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Could not connect host to Data Center(Storage issue)
2014-12-04 08:59:30,418 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-45) Command HSMGetAllTasksStatusesVDSCommand(HostName = mercury2, HostId = dfddc678-f8ee-45eb-897a-885c83de870e) execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM: ()
2014-12-04 08:59:47,985 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-45) Domain eb912657-8a8c-4173-9d24-92d2b09a773c:StoragePoolDMZ03 was reported by all hosts in status UP as problematic. Not moving the domain to NonOperational because it is being reconstructed now.
2014-12-04 08:59:50,188 WARN  [org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit] (DefaultQuartzScheduler_Worker-68) There is no host with less than 5 running guests
2014-12-04 08:59:50,189 WARN  [org.ovirt.engine.core.bll.scheduling.PolicyUnitImpl] (DefaultQuartzScheduler_Worker-68) All hosts are over-utilized, cant balance the cluster SandyBridgeCluster
2014-12-04 09:00:00,469 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-45) Command ConnectStoragePoolVDSCommand(HostName = mercury2, HostId = dfddc678-f8ee-45eb-897a-885c83de870e, vdsId = dfddc678-f8ee-45eb-897a-885c83de870e, storagePoolId = 1d03dc05-008b-4d14-97ce-b17bd714183d, masterVersion = 8) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2014-12-04 09:00:00,470 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-45) IrsBroker::Failed::GetStoragePoolInfoVDS due to: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Could not connect host to Data Center(Storage issue)
2014-12-04 09:00:10,597 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-81) Command HSMGetAllTasksStatusesVDSCommand(HostName = mercury2, HostId = dfddc678-f8ee-45eb-897a-885c83de870e) execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM: ()
2014-12-04 09:00:40,718 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-81) Command ConnectStoragePoolVDSCommand(HostName = mercury2, HostId = dfddc678-f8ee-45eb-897a-885c83de870e, vdsId = dfddc678-f8ee-45eb-897a-885c83de870e, storagePoolId = 1d03dc05-008b-4d14-97ce-b17bd714183d, masterVersion = 8) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2014-12-04 09:00:40,719 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-81) IrsBroker::Failed::GetStoragePoolInfoVDS due to: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Could not connect host to Data Center(Storage issue)
2014-12-04 09:00:50,439 WARN  [org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistribut
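The engine.log excerpt above cycles through the same few failures over and
over (VDSNetworkException timeouts, "Not SPM" errors, reconstruct warnings).
When triaging a log like this, a quick first step is counting which exception
class dominates. A minimal, hypothetical sketch of that (the regex and sample
lines below are my own assumptions modelled on the excerpt, not an oVirt tool):

```python
import re
from collections import Counter

# Hypothetical sample lines modelled on the engine.log excerpt above.
LOG_LINES = [
    "2014-12-04 08:58:50,192 ERROR [org...] execution failed. "
    "Exception: IRSNonOperationalException: Not SPM: ()",
    "2014-12-04 08:59:20,247 ERROR [org...] execution failed. "
    "Exception: VDSNetworkException: java.util.concurrent.TimeoutException",
    "2014-12-04 08:59:20,248 ERROR [org...] Failed::GetStoragePoolInfoVDS "
    "due to: IRSNonOperationalException: Could not connect host",
]

# Grab the first exception class name after "Exception:" or "due to:".
PATTERN = re.compile(r"(?:Exception:|due to:)\s*(\w+)")

def triage(lines):
    """Count which exception class each ERROR line reports."""
    counts = Counter()
    for line in lines:
        if " ERROR " not in line:
            continue
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

print(triage(LOG_LINES))
# On the full log this quickly shows whether the network timeouts or the
# SPM errors are the more frequent symptom.
```

In this case the timeouts and the "Not SPM" errors come in pairs, which
suggests the host never manages to talk to the storage at all rather than a
metadata problem on the domain itself.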