Re: [ovirt-users] shutdown and kernel panic

2016-11-23 Thread Pavel Gashev
Luigi,

It’s necessary to put a host into maintenance mode before shutdown.

That panic is the kernel's reaction to an NMI, which is triggered by the 
hardware watchdog, which is set by the wdmd daemon, which is used by sanlock. 
This scheme is intended to hard-reset a server that has lost connection to 
its storage while a lock is held.

On HP servers it’s necessary to configure the watchdog in the BIOS settings 
to perform a hardware reset instead of an NMI.
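Putting the pieces of this thread together, a safe power-off sequence for a hosted-engine host might look roughly like the following. This is a sketch: the hosted-engine commands appear in the thread itself, while the explicit service stops are an assumed manual equivalent of putting the host into maintenance from the web UI.

```shell
# On the hosted-engine host, before powering off:
hosted-engine --set-maintenance --mode=global   # stop HA monitoring of the engine VM
hosted-engine --vm-shutdown                     # cleanly stop the engine VM
hosted-engine --vm-status                       # repeat until the VM is reported down

# Put the host into maintenance so sanlock releases its locks; otherwise
# wdmd's hardware watchdog fires an NMI during shutdown (Pavel's point).
# Normally this is done from the web UI (Hosts -> Maintenance); stopping
# the services by hand, as below, is an assumed manual equivalent.
systemctl stop ovirt-ha-agent ovirt-ha-broker sanlock wdmd

shutdown -h now
```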


From:  on behalf of Juan Pablo 

Date: Wednesday 23 November 2016 at 17:17
To: Luigi Fanton 
Cc: "users@ovirt.org" 
Subject: Re: [ovirt-users] shutdown and kernel panic

Same issue here. Now that I read your post, I guess it's an HP bug 'somehow' 
(to blame someone).
Maybe that's why oVirt asks for fencing interfaces such as iLO/IMM/IPMI: to 
hard-reboot the server in case there's an issue like this.

just my 2c

2016-11-22 13:22 GMT-03:00 Luigi Fanton 
>:
Hello to all,
I'm just playing with oVirt 4, installed on an HP server with CentOS 7, and a 
virtual machine as the hosted engine.
I have a lot of problems with the server shutdown!
The server doesn't power off and reboots after some "kernel panic" errors.

To turn off the ovirt server:
1) Put oVirt in global maintenance (hosted-engine --set-maintenance 
--mode=global)
2) Shut down the hosted engine and check vm-status
3) Finally, shut down the server and wait ... wait ... wait
What's wrong?! -_-

best regards
Luigi F.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Re: [ovirt-users] shutdown and kernel panic

2016-11-23 Thread Juan Pablo
Same issue here. Now that I read your post, I guess it's an HP bug
'somehow' (to blame someone).
Maybe that's why oVirt asks for fencing interfaces such as iLO/IMM/IPMI: to
hard-reboot the server in case there's an issue like this.

just my 2c

2016-11-22 13:22 GMT-03:00 Luigi Fanton :

> Hello to all,
> I'm just playing with oVirt 4, installed on an HP server with CentOS 7, and
> a virtual machine as the hosted engine.
> I have a lot of problems with the server shutdown!
> The server doesn't power off and reboots after some "kernel panic"
> errors.
>
> To turn off the ovirt server:
> 1) Put oVirt in global maintenance (hosted-engine --set-maintenance
> --mode=global)
> 2) Shut down the hosted engine and check vm-status
> 3) Finally, shut down the server and wait ... wait ... wait
>
> What's wrong?! -_-
>
> best regards
> Luigi F.
>


Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Charles Kozler
Hey Fernando -

I've had success using OCFS2 with both oVirt and Xen, although you still
need something to replicate the blocks, and this is where DRBD comes in. The
premise was simple: configure two DRBD devices and then set up OCFS2 as
desired (very straightforward compared to GFS2). Start the cluster
and export via NFS. From there you create an oVirt storage domain with an NFS
backend and it's good to go.
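The DRBD + OCFS2 + NFS layering Charles describes could be sketched along these lines. Resource names, device paths, and mount points are illustrative assumptions; a real setup also needs matching /etc/drbd.d/ resource files and an /etc/ocfs2/cluster.conf on both nodes, and OCFS2 over DRBD requires a dual-primary DRBD configuration.

```shell
# On both nodes: bring up the DRBD resource (dual-primary for OCFS2)
drbdadm up r0
drbdadm primary r0

# On one node only: create the cluster filesystem on the DRBD device
mkfs.ocfs2 -N 2 /dev/drbd0        # -N 2: two node slots

# On both nodes: start the O2CB cluster stack and mount the filesystem
systemctl start o2cb
mount -t ocfs2 /dev/drbd0 /export/vmstore

# Export over NFS; oVirt then consumes this as an NFS storage domain
echo '/export/vmstore *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
```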

On your note about using the network capacity for better stuff - e.g. VM
traffic - it's usually wise, when you have the capability, to keep your
storage network separate from the VM network, so that you do not have any
latency between your VM nodes and their backend storage. Take for instance
a case where one VM starts crippling the network (in whatever scenario): your
oVirt nodes and engine then cannot contact storage, oVirt begins to take
corrective action, and it will pause all of your VMs.

On Wed, Nov 23, 2016 at 9:08 AM, Fernando Frediani <
fernando.fredi...@upx.com.br> wrote:

> Right, Pavel. Then where is it, or where is the reference to it?
>
> The only way I heard of is using thin provisioning at the SAN level.
>
> With regards to OCFS2, if anyone has experience with it, I would like to
> hear about their success (or not) using it.
>
> Thanks
>
> Fernando
>
>
>
> On 23/11/2016 11:46, Pavel Gashev wrote:
>
>> Fernando,
>>
>> Clustered LVM doesn’t support lvmthin(7):
>> http://man7.org/linux/man-pages/man7/lvmthin.7.html
>> There is an oVirt LVM-based thin provisioning implementation.
>>
>> -Original Message-
>> From: Fernando Frediani 
>> Date: Wednesday 23 November 2016 at 16:31
>> To: Pavel Gashev , "users@ovirt.org" 
>> Subject: Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage
>>
>> Are you sure, Pavel?
>>
>> As far as I know, and as has been discussed on this list before, the
>> limitation is in CLVM, which doesn't support thin provisioning yet. LVM2
>> does, but not in clustered mode. I tried to use GFS2 in the past
>> for other non-virtualization-related stuff and didn't have much success
>> either.
>>
>> What about OCFS2? Has anyone tried it?
>>
>> Fernando
>>
>>
>> On 23/11/2016 11:26, Pavel Gashev wrote:
>>
>>> Fernando,
>>>
>>> oVirt supports thin provisioning for shared block storage (DAS or
>>> iSCSI). It works by using QCOW2 disk images directly on LVM volumes;
>>> oVirt extends the volumes as the QCOW2 grows.
>>>
>>> I tried GFS2. It's slow, and a host failure blocks the other hosts.
>>>
>>> -Original Message-
>>> From:  on behalf of Fernando Frediani <
>>> fernando.fredi...@upx.com.br>
>>> Date: Wednesday 23 November 2016 at 15:03
>>> To: "users@ovirt.org" 
>>> Subject: [ovirt-users] GFS2 and OCFS2 for Shared Storage
>>>
>>> Has anyone managed to use GFS2 or OCFS2 for shared block storage between
>>> hosts? How scalable was it, and which of the two works better?
>>>
>>> Using traditional CLVM is far from a good start because of the lack of
>>> thin provisioning, so I'm willing to consider either of the filesystems.
>>>
>>> Thanks
>>>
>>> Fernando
>>>


Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Fernando Frediani

Right, Pavel. Then where is it, or where is the reference to it?

The only way I heard of is using thin provisioning at the SAN level.

With regards to OCFS2, if anyone has experience with it, I would like to 
hear about their success (or not) using it.


Thanks

Fernando


On 23/11/2016 11:46, Pavel Gashev wrote:

Fernando,

Clustered LVM doesn’t support lvmthin(7) 
http://man7.org/linux/man-pages/man7/lvmthin.7.html
There is an oVirt LVM-based thin provisioning implementation.

-Original Message-
From: Fernando Frediani 
Date: Wednesday 23 November 2016 at 16:31
To: Pavel Gashev , "users@ovirt.org" 
Subject: Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

Are you sure, Pavel?

As far as I know, and as has been discussed on this list before, the
limitation is in CLVM, which doesn't support thin provisioning yet. LVM2
does, but not in clustered mode. I tried to use GFS2 in the past
for other non-virtualization-related stuff and didn't have much success
either.

Fernando


On 23/11/2016 11:26, Pavel Gashev wrote:

Fernando,

oVirt supports thin provisioning for shared block storage (DAS or iSCSI). It 
works by using QCOW2 disk images directly on LVM volumes; oVirt extends the 
volumes as the QCOW2 grows.

I tried GFS2. It's slow, and a host failure blocks the other hosts.

-Original Message-
From:  on behalf of Fernando Frediani 

Date: Wednesday 23 November 2016 at 15:03
To: "users@ovirt.org" 
Subject: [ovirt-users] GFS2 and OCFS2 for Shared Storage

Has anyone managed to use GFS2 or OCFS2 for shared block storage between
hosts? How scalable was it, and which of the two works better?

Using traditional CLVM is far from a good start because of the lack of
thin provisioning, so I'm willing to consider either of the filesystems.

Thanks

Fernando



Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Pavel Gashev
Fernando,

Clustered LVM doesn’t support lvmthin(7) 
http://man7.org/linux/man-pages/man7/lvmthin.7.html
There is an oVirt LVM-based thin provisioning implementation.

-Original Message-
From: Fernando Frediani 
Date: Wednesday 23 November 2016 at 16:31
To: Pavel Gashev , "users@ovirt.org" 
Subject: Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

Are you sure, Pavel?

As far as I know, and as has been discussed on this list before, the 
limitation is in CLVM, which doesn't support thin provisioning yet. LVM2 
does, but not in clustered mode. I tried to use GFS2 in the past 
for other non-virtualization-related stuff and didn't have much success 
either.

What about OCFS2? Has anyone tried it?

Fernando


On 23/11/2016 11:26, Pavel Gashev wrote:
> Fernando,
>
> oVirt supports thin provisioning for shared block storage (DAS or iSCSI). It 
> works by using QCOW2 disk images directly on LVM volumes; oVirt extends the 
> volumes as the QCOW2 grows.
>
> I tried GFS2. It's slow, and a host failure blocks the other hosts.
>
> -Original Message-
> From:  on behalf of Fernando Frediani 
> 
> Date: Wednesday 23 November 2016 at 15:03
> To: "users@ovirt.org" 
> Subject: [ovirt-users] GFS2 and OCFS2 for Shared Storage
>
> Has anyone managed to use GFS2 or OCFS2 for shared block storage between
> hosts? How scalable was it, and which of the two works better?
>
> Using traditional CLVM is far from a good start because of the lack of
> thin provisioning, so I'm willing to consider either of the filesystems.
>
> Thanks
>
> Fernando
>


Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Fernando Frediani

Are you sure, Pavel?

As far as I know, and as has been discussed on this list before, the 
limitation is in CLVM, which doesn't support thin provisioning yet. LVM2 
does, but not in clustered mode. I tried to use GFS2 in the past 
for other non-virtualization-related stuff and didn't have much success 
either.

What about OCFS2? Has anyone tried it?

Fernando


On 23/11/2016 11:26, Pavel Gashev wrote:

Fernando,

oVirt supports thin provisioning for shared block storage (DAS or iSCSI). It 
works by using QCOW2 disk images directly on LVM volumes; oVirt extends the 
volumes as the QCOW2 grows.

I tried GFS2. It's slow, and a host failure blocks the other hosts.

-Original Message-
From:  on behalf of Fernando Frediani 

Date: Wednesday 23 November 2016 at 15:03
To: "users@ovirt.org" 
Subject: [ovirt-users] GFS2 and OCFS2 for Shared Storage

Has anyone managed to use GFS2 or OCFS2 for shared block storage between
hosts? How scalable was it, and which of the two works better?

Using traditional CLVM is far from a good start because of the lack of
thin provisioning, so I'm willing to consider either of the filesystems.

Thanks

Fernando



Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Fernando Frediani

Hello Nicolas. Thanks for your reply.

As you correctly said, GlusterFS is not block storage but distributed 
storage. There are scenarios where it simply doesn't apply, like shared 
block storage between physical servers in a chassis, or simply shared DAS 
(Direct Attached Storage). Otherwise you would unnecessarily consume network 
throughput that could be better used for other things, like legitimate VM 
traffic, and you would not get the best performance you could by 
reading/writing directly from/to shared block storage.

Distributed storage is always a great mindset for newer scenarios, but 
it doesn't apply to all of them, and I wouldn't think Red Hat would 
direct people to a single way.


Fernando


On 23/11/2016 11:11, Nicolas Ecarnot wrote:

Le 23/11/2016 à 13:03, Fernando Frediani a écrit :

Has anyone managed to use GFS2 or OCFS2 for shared block storage between
hosts? How scalable was it, and which of the two works better?

Using traditional CLVM is far from a good start because of the lack of
thin provisioning, so I'm willing to consider either of the filesystems.

Thanks

Fernando



Hello Fernando,

Red Hat took a clear direction towards the use of GlusterFS for its 
software-defined storage, and a lot of effort is being made to make 
oVirt/RHEV work with it smoothly.
I know GlusterFS is not block storage, but it's worth considering, 
especially if you intend to set up hyper-converged clusters.






Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Pavel Gashev
Fernando,

oVirt supports thin provisioning for shared block storage (DAS or iSCSI). It 
works by using QCOW2 disk images directly on LVM volumes; oVirt extends the 
volumes as the QCOW2 grows.
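Stripped to its essentials, the mechanism Pavel describes amounts to a QCOW2 image written directly onto a logical volume that is grown on demand. The sketch below shows the idea with manual commands; in oVirt, VDSM performs the extension automatically based on an allocation watermark, and the volume group and LV names here are illustrative assumptions.

```shell
# Create a modest LV and put a QCOW2 image with a large virtual size on it.
lvcreate -L 1G -n disk1 vgname
qemu-img create -f qcow2 /dev/vgname/disk1 100G   # virtual size >> LV size

# As the guest writes and QCOW2 allocation approaches the LV size,
# the LV is extended. Done by hand, that step would be:
lvextend -L +1G /dev/vgname/disk1
```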

I tried GFS2. It's slow, and a host failure blocks the other hosts.

-Original Message-
From:  on behalf of Fernando Frediani 

Date: Wednesday 23 November 2016 at 15:03
To: "users@ovirt.org" 
Subject: [ovirt-users] GFS2 and OCFS2 for Shared Storage

Has anyone managed to use GFS2 or OCFS2 for shared block storage between 
hosts? How scalable was it, and which of the two works better?

Using traditional CLVM is far from a good start because of the lack of 
thin provisioning, so I'm willing to consider either of the filesystems.

Thanks

Fernando



Re: [ovirt-users] How to notify cluster nodes after "engine-config --set IPTablesConfigSiteCustom..." ?

2016-11-23 Thread Yedidyah Bar David
On Wed, Nov 23, 2016 at 1:54 PM,   wrote:
> "As I wrote there, you can also do this manually"
>
> How?

I am not sure I understand the question.

The same way you configure iptables on non-oVirt-host machines.

If you mean "How do I imitate the way the engine does this during
host deploy", then I don't know - you can check the engine sources
for that. I am guessing that you can get the values of IPTablesConfig
and IPTablesConfigSiteCustom with engine-config, replace
"@CUSTOM_RULES@" inside the former with the contents of the latter, then
copy the result to the host and load it with iptables-restore (and/or
copy it to /etc/sysconfig/iptables and restart the iptables service).
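The merge Didi guesses at can be sketched with stand-in files. This is an illustration only: the sample rules are invented, and the assumption (based on this thread) is that the IPTablesConfig template contains the @CUSTOM_RULES@ placeholder into which the site-custom rules are spliced. On a real engine, the two files would come from `engine-config --get IPTablesConfig` and `engine-config --get IPTablesConfigSiteCustom`.

```shell
workdir=$(mktemp -d)

# Stand-ins for the two engine-config values
cat > "$workdir/template" <<'EOF'
-A INPUT -p tcp --dport 54321 -j ACCEPT
@CUSTOM_RULES@
-A INPUT -j REJECT
EOF
cat > "$workdir/custom" <<'EOF'
-A INPUT -p tcp --dport 2301 -j ACCEPT
-A INPUT -p tcp --dport 2381 -j ACCEPT
EOF

# Splice the custom rules into the template at the placeholder:
# 'r' queues the custom file for output, 'd' drops the placeholder line.
sed -e '/@CUSTOM_RULES@/{r '"$workdir/custom" -e 'd' -e '}' \
    "$workdir/template" > "$workdir/iptables.rules"
cat "$workdir/iptables.rules"

# The result would then be copied to each host as /etc/sysconfig/iptables
# and loaded with `iptables-restore < /etc/sysconfig/iptables`.
```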

>
> 23.11.2016, 14:23, "Yedidyah Bar David" :
>> On Wed, Nov 23, 2016 at 12:51 PM,  wrote:
>>>  Hi Didi!
>>>
>>>  https://www.mail-archive.com/users@ovirt.org/msg37193.html
>>>
>>>  "Move to maintenance and reinstall" to add the iptables rules ?
>>>
>>>  Are you serious?
>>>
>>>  There is no other way (without reinstalling the hosts) ?
>>
>> AFAIK, using ovirt-host-deploy, no.
>>
>> I am not aware of an engine API or vdsm verb to do this, but these are
>> not my main area of expertise.
>>
>> As I wrote there, you can also do this manually.
>>
>> The oVirt engine is not a replacement for configuration management
>> systems. If you have complex needs, might as well uncheck this
>> checkbox and use other means.
>>
>> Best,
>>
>>>  23.11.2016, 13:07, "Yedidyah Bar David" :
  On Wed, Nov 23, 2016 at 12:02 PM,  wrote:
>   Hmm. I just rebooted the host, but the iptables rules have not been 
> updated :(
>
>   On Engine server my custom iptables rules are visible:
>
>   # engine-config --get IPTablesConfigSiteCustom
>
>   IPTablesConfigSiteCustom:
>   -A INPUT -p tcp --dport 2301 -j ACCEPT -m comment --comment 'HPE System 
> Management Homepage'
>   -A INPUT -p tcp --dport 2381 -j ACCEPT -m comment --comment 'HPE System 
> Management Homepage (Secure port)'
>version: general
>
>   How to update the configuration on the hosts ?
>
>   23.11.2016, 11:30, "aleksey.maksi...@it-kb.ru" 
> :
>>   Hello oVirt guru`s !
>>
>>   oVirt Engine Version: 4.0.5.5-1.el7.centos
>>
>>   I updated the configuration of the firewall on the Engine server with 
>> "engine-config --set IPTablesConfigSiteCustom...".
>>   How to notify cluster nodes (all virtualization hosts) about the 
>> changes without reboot?

  Please check the other thread here "[ovirt-users] Hook to add firewall
  rules". Thanks.


  --
  Didi
>>
>> --
>> Didi



-- 
Didi


Re: [ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Nicolas Ecarnot

Le 23/11/2016 à 13:03, Fernando Frediani a écrit :

Has anyone managed to use GFS2 or OCFS2 for shared block storage between
hosts? How scalable was it, and which of the two works better?

Using traditional CLVM is far from a good start because of the lack of
thin provisioning, so I'm willing to consider either of the filesystems.

Thanks

Fernando



Hello Fernando,

Red Hat took a clear direction towards the use of GlusterFS for its 
software-defined storage, and a lot of effort is being made to make 
oVirt/RHEV work with it smoothly.
I know GlusterFS is not block storage, but it's worth considering, 
especially if you intend to set up hyper-converged clusters.


--
Nicolas ECARNOT


Re: [ovirt-users] Storage questions

2016-11-23 Thread Краснобаев Михаил
Good day,

>> you have to make VM snapshots and merge them

This operation also takes a lot of time (in my experience, equal to reading out the whole virtual disk). Pavel, why don't you consider building a classic datacenter, where you have shared storage?

23.11.2016, 14:36, "Pavel Gashev":
1. You can create a datacenter per host, but you can't have storage shared among datacenters.
2. I mean backups would add performance problems. When you rsync a disk image, in order to find the difference it reads both the source and the destination images. In other words, if you want to make daily backups, rsync will daily read everything located on local storage, plus everything located on gluster. Plus, in order to make consistent backups, you have to make VM snapshots and merge them after rsync.

From: Oscar Segarra
Date: Wednesday 23 November 2016 at 13:42
To: Pavel Gashev
Cc: users
Subject: Re: [ovirt-users] Storage questions

Hi Pavel,

1. "Local storage datacenter doesn’t support multiple hosts. If you have multiple hosts, you have to have a shared storage, even it’s a hyper-converged setup."
Is it not possible to create a datacenter for each node and set up a shared storage (transversal to all hosts) for storing the engine and other infrastructure virtual servers?

2. "In your case most of disk and network performance would be used by backups. And a backup cycle would take more than 24 hours. Even rsync would take much resources since it has to at least read the whole disk images."
Do you mean that 1000 VDIs against a shared gluster volume provided by 10 physical hosts (the same hosts that run KVM) won't have performance problems? Do you know of any similar experience? Related to rsync, the idea is to launch one rsync process per physical node for backing up the contained virtual machines. But if you expect rsync to require the whole day... do you mean gluster geo-replication will require 24 hours too?

Thanks a lot

2016-11-23 11:02 GMT+01:00 Pavel Gashev:
Oscar, I’d make two notes:
1. A local storage datacenter doesn’t support multiple hosts. If you have multiple hosts, you have to have shared storage, even if it’s a hyper-converged setup.
2. In your case most of the disk and network performance would be used by backups, and a backup cycle would take more than 24 hours. Even rsync would take many resources, since it has to at least read the whole disk images.
I’d recommend a scenario with a dedicated shared storage that supports snapshots.

From: Oscar Segarra
Date: Wednesday 23 November 2016 at 03:11
To: Yaniv Dary
Cc: users
Subject: Re: [ovirt-users] Storage questions

Hi,

As it is possible to attach local storage in oVirt, I suppose it can be used to run virtual machines. I have drawn a couple of diagrams in order to work out whether it is possible to set up this configuration:
1. In the on-going scenario: every host runs 100 VDI virtual machines whose disks are placed on local storage. There is a common gluster volume shared between all nodes.
2. If one node fails: oVirt has to be able to inventory the copies of the machines (in our example vdi201 ... vdi300) and start them on the remaining nodes.
Is it possible to reach this configuration with oVirt, or something similar? Making backups with the import/export procedure based on snapshots can take a lot of time and resources. Incremental rsync is cheaper in terms of resources.

Thanks a lot.

2016-11-22 10:49 GMT+01:00 Yaniv Dary:
I suggest you set up that environment, test the performance, and report if you have issues. Please note that currently there is no data locality guarantee, so a VM might be running on a host that doesn't have its disks.
We have APIs to do backup/restore, and that is the only supported option for backup:
https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration/
You can look at the Gluster DR option that was posted a while back. It used geo-replication and import storage domain to do the DR.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road, Building A, 4th floor
Ra'anana, Israel 4350109
Tel: +972 (9) 7692306 / 8272306
Email: yd...@redhat.com
IRC: ydary

On Mon, Nov 21, 2016 at 5:17 PM, Oscar Segarra wrote:
Hi,

I'm planning to deploy a scalable VDI infrastructure where each physical host can run over 100 VDIs, and I'd like to deploy 10 physical hosts (1000 VDIs). In order to avoid performance problems (replicating 1000 VDIs' changes over the gluster network can, I think, provoke performance problems), I have thought to use local storage for the VDIs, assuming that VDIs cannot be migrated between physical hosts. Is my worry founded in terms of performance? Is it possible to utilize local SSD storage for VDIs? I'd like to configure a gluster volume for

[ovirt-users] GFS2 and OCFS2 for Shared Storage

2016-11-23 Thread Fernando Frediani
Has anyone managed to use GFS2 or OCFS2 for shared block storage between 
hosts? How scalable was it, and which of the two works better?

Using traditional CLVM is far from a good start because of the lack of 
thin provisioning, so I'm willing to consider either of the filesystems.


Thanks

Fernando



Re: [ovirt-users] OVN Provider setup issues

2016-11-23 Thread Andrea Fagiani

Marcin,

thanks! I keep forgetting that part of the configuration even exists.

After attaching the network to the cluster I was able to configure the 
vNIC profiles as intended.
I then had to disable iptables on the hosts running the VMs (as mentioned 
in the blog post), and I was able to establish connectivity between two 
VMs running on different hosts using the OVN-provided network!


Thanks to everyone involved for the help;
I'll be sure to report back after further testing.

Regards,
Andrea


On 23/11/2016 11:27, Marcin Mirecki wrote:

Andrea,

Please check if the network is attached to the cluster.

Thanks,
Marcin

- Original Message -

From: "Andrea Fagiani" 
To: users@ovirt.org
Cc: "Dan Kenigsberg" , "Lance Richardson" 
, mmire...@redhat.com
Sent: Wednesday, November 23, 2016 11:02:10 AM
Subject: Re: [ovirt-users] OVN Provider setup issues

Hi Dan,

I was able to set up the OVN external provider by building and loading the
updated OVS kernel module; I am currently running it on all 5 hosts, and
ovs-vsctl shows all the tunnels correctly instantiated.

However, after importing the provider into the oVirt engine and setting
up a vNIC profile, I cannot assign it to any VM; it doesn't show up in
the vNIC profiles list.

Any suggestions?

Thanks,
Andrea


On 18/11/2016 12:33, Dan Kenigsberg wrote:

On Fri, Nov 18, 2016 at 09:13:53AM +0100, Andrea Fagiani wrote:

Hi Lance,

thanks, I have currently deployed oVirt using the oVirt Node images, so
indeed I would like to avoid updating;
out of curiosity, is there actually a beta/pre-release version of the node
available?

I'm afraid that such a version would be available only after the release
of CentOS 7.3 and oVirt 4.1 beta. For now we're still speaking about
master-branch experiments.


I have since reinstalled the host to perform further testing, but I'll give
it a shot as soon as I find the time.

We'd love to hear how that works for you.

Regards,
Dan.






Re: [ovirt-users] How to notify cluster nodes after "engine-config --set IPTablesConfigSiteCustom..." ?

2016-11-23 Thread aleksey . maksimov
"As I wrote there, you can also do this manually"

How?

23.11.2016, 14:23, "Yedidyah Bar David" :
> On Wed, Nov 23, 2016 at 12:51 PM,  wrote:
>>  Hi Didi!
>>
>>  https://www.mail-archive.com/users@ovirt.org/msg37193.html
>>
>>  "Move to maintenance and reinstall" to add the iptables rules ?
>>
>>  Are you serious?
>>
>>  There is no other way (without reinstalling the hosts) ?
>
> AFAIK, using ovirt-host-deploy, no.
>
> I am not aware of an engine API or vdsm verb to do this, but these are
> not my main area of expertise.
>
> As I wrote there, you can also do this manually.
>
> The oVirt engine is not a replacement for configuration management
> systems. If you have complex needs, might as well uncheck this
> checkbox and use other means.
>
> Best,
>
>>  23.11.2016, 13:07, "Yedidyah Bar David" :
>>>  On Wed, Nov 23, 2016 at 12:02 PM,  wrote:
   Hmm. I just rebooted the host, but the iptables rules have not been 
 updated :(

   On Engine server my custom iptables rules are visible:

   # engine-config --get IPTablesConfigSiteCustom

   IPTablesConfigSiteCustom:
   -A INPUT -p tcp --dport 2301 -j ACCEPT -m comment --comment 'HPE System 
 Management Homepage'
   -A INPUT -p tcp --dport 2381 -j ACCEPT -m comment --comment 'HPE System 
 Management Homepage (Secure port)'
    version: general

   How to update the configuration on the hosts ?

   23.11.2016, 11:30, "aleksey.maksi...@it-kb.ru" 
 :
>   Hello oVirt guru`s !
>
>   oVirt Engine Version: 4.0.5.5-1.el7.centos
>
>   I updated the configuration of the firewall on the Engine server with 
> "engine-config --set IPTablesConfigSiteCustom...".
>   How to notify cluster nodes (all virtualization hosts) about the 
> changes without reboot?
>>>
>>>  Please check the other thread here "[ovirt-users] Hook to add firewall
>>>  rules". Thanks.
>>>
>>>
>>>  --
>>>  Didi
>
> --
> Didi


Re: [ovirt-users] Released update for 4.0.5?

2016-11-23 Thread Gianluca Cecchi
On Wed, Nov 23, 2016 at 11:26 AM, Gianluca Cecchi  wrote:

> Hello,
> in webadmin GUI I see on my 4.0.5 hosts a message related to updates
> available
>
> If I run yum update in my plain CentOS 7.2 hosts I get:
>
> 1) vdsm packages
>  vdsm                x86_64  4.18.15.3-1.el7.centos  ovirt-4.0  688 k
>  vdsm-api            noarch  4.18.15.3-1.el7.centos  ovirt-4.0   53 k
>  vdsm-cli            noarch  4.18.15.3-1.el7.centos  ovirt-4.0   67 k
>  vdsm-gluster        noarch  4.18.15.3-1.el7.centos  ovirt-4.0   53 k
>  vdsm-hook-vmfex-dev noarch  4.18.15.3-1.el7.centos  ovirt-4.0  6.6 k
>  vdsm-infra          noarch  4.18.15.3-1.el7.centos  ovirt-4.0   12 k
>  vdsm-jsonrpc        noarch  4.18.15.3-1.el7.centos  ovirt-4.0   25 k
>  vdsm-python         noarch  4.18.15.3-1.el7.centos  ovirt-4.0  602 k
>  vdsm-xmlrpc         noarch  4.18.15.3-1.el7.centos  ovirt-4.0   25 k
>  vdsm-yajsonrpc      noarch  4.18.15.3-1.el7.centos  ovirt-4.0   27 k
>
> Are they for anything particular?
>
> 2) gluster packages
>
>  glusterfs                 x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  483 k
>  glusterfs-api             x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37   87 k
>  glusterfs-cli             x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  180 k
>  glusterfs-client-xlators  x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  857 k
>  glusterfs-fuse            x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  130 k
>  glusterfs-geo-replication x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  206 k
>  glusterfs-libs            x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  355 k
>  glusterfs-server          x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  1.4 M
>
> Currently I have 3.7.16-1.el7.x86_64
> How can I manage gluster updates on gluster environment with 3 hosts?
> Separated by vdsm updates or in the same run?
> Any hint/caveat?
>
> Thanks
> Gianluca
>
>

BTW: it seems there is a problem with the changelog history of the vdsm rpm
package - no updates since 2013:

both in current vdsm-4.18.15.2-1.el7.centos.x86_64 and in
downloaded vdsm-4.18.15.3-1.el7.centos.x86_64.rpm

rpm -qp --changelog
/var/cache/yum/x86_64/7/ovirt-4.0/packages/vdsm-4.18.15.3-1.el7.centos.x86_64.rpm
* Sun Oct 13 2013 Yaniv Bronhaim  - 4.13.0
- Removing vdsm-python-cpopen from the spec
- Adding dependency on formal cpopen package

* Sun Apr 07 2013 Yaniv Bronhaim  - 4.9.0-1
- Adding cpopen package

* Wed Oct 12 2011 Federico Simoncelli  - 4.9.0-0
- Initial upstream release

* Thu Nov 02 2006 Simon Grinberg  -  0.0-1
- Initial build


Re: [ovirt-users] How to notify cluster nodes after "engine-config --set IPTablesConfigSiteCustom..." ?

2016-11-23 Thread Yedidyah Bar David
On Wed, Nov 23, 2016 at 12:51 PM,   wrote:
> Hi Didi!
>
> https://www.mail-archive.com/users@ovirt.org/msg37193.html
>
> "Move to maintenance and reinstall" to add the iptables rules ?
>
> Are you serious?
>
> There is no other way (without reinstalling the hosts) ?

AFAIK, using ovirt-host-deploy, no.

I am not aware of an engine API or vdsm verb to do this, but these are
not my main area of expertise.

As I wrote there, you can also do this manually.

The oVirt engine is not a replacement for configuration management
systems. If you have complex needs, might as well uncheck this
checkbox and use other means.

Best,

>
> 23.11.2016, 13:07, "Yedidyah Bar David" :
>> On Wed, Nov 23, 2016 at 12:02 PM,  wrote:
>>>  Hmm. I just rebooted the host, but the iptables rules have not been 
>>> updated :(
>>>
>>>  On Engine server my custom iptables rules are visible:
>>>
>>>  # engine-config --get IPTablesConfigSiteCustom
>>>
>>>  IPTablesConfigSiteCustom:
>>>  -A INPUT -p tcp --dport 2301 -j ACCEPT -m comment --comment 'HPE System 
>>> Management Homepage'
>>>  -A INPUT -p tcp --dport 2381 -j ACCEPT -m comment --comment 'HPE System 
>>> Management Homepage (Secure port)'
>>>   version: general
>>>
>>>  How to update the configuration on the hosts ?
>>>
>>>  23.11.2016, 11:30, "aleksey.maksi...@it-kb.ru" :
  Hello oVirt guru`s !

  oVirt Engine Version: 4.0.5.5-1.el7.centos

  I updated the configuration of the firewall on the Engine server with 
 "engine-config --set IPTablesConfigSiteCustom...".
  How to notify cluster nodes (all virtualization hosts) about the changes 
 without reboot?
>>
>> Please check the other thread here "[ovirt-users] Hook to add firewall
>> rules". Thanks.
>>
>>
>> --
>> Didi



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to notify cluster nodes after "engine-config --set IPTablesConfigSiteCustom..." ?

2016-11-23 Thread aleksey . maksimov
Hi Didi!

https://www.mail-archive.com/users@ovirt.org/msg37193.html

"Move to maintenance and reinstall" to add the iptables rules ?

Are you serious?

There is no other way (without reinstalling the hosts) ?

23.11.2016, 13:07, "Yedidyah Bar David" :
> On Wed, Nov 23, 2016 at 12:02 PM,  wrote:
>>  Hmm. I just rebooted the host, but the iptables rules have not been updated 
>> :(
>>
>>  On Engine server my custom iptables rules are visible:
>>
>>  # engine-config --get IPTablesConfigSiteCustom
>>
>>  IPTablesConfigSiteCustom:
>>  -A INPUT -p tcp --dport 2301 -j ACCEPT -m comment --comment 'HPE System 
>> Management Homepage'
>>  -A INPUT -p tcp --dport 2381 -j ACCEPT -m comment --comment 'HPE System 
>> Management Homepage (Secure port)'
>>   version: general
>>
>>  How to update the configuration on the hosts ?
>>
>>  23.11.2016, 11:30, "aleksey.maksi...@it-kb.ru" :
>>>  Hello oVirt guru`s !
>>>
>>>  oVirt Engine Version: 4.0.5.5-1.el7.centos
>>>
>>>  I updated the configuration of the firewall on the Engine server with 
>>> "engine-config --set IPTablesConfigSiteCustom...".
>>>  How to notify cluster nodes (all virtualization hosts) about the changes 
>>> without reboot?
>
> Please check the other thread here "[ovirt-users] Hook to add firewall
> rules". Thanks.
>
>>  ___
>>  Users mailing list
>>  Users@ovirt.org
>>  http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Didi


Re: [ovirt-users] Storage questions

2016-11-23 Thread Oscar Segarra
Hi Pavel,

1. Local storage datacenter doesn’t support multiple hosts. If you have
multiple hosts, you have to have shared storage, even if it’s a
hyper-converged setup.

Is it not possible to create a datacenter for each node and set up a shared
storage domain (spanning all hosts) for storing the engine and other
infrastructure virtual servers?

2. In your case most of disk and network performance would be used by
backups. And a backup cycle would take more than 24 hours. Even rsync would
take much resources since it has to at least read the whole disk images.

Do you mean that 1000 VDIs against a shared Gluster volume provided by 10
physical hosts (the same hosts that run KVM) won't have performance
problems? Do you know of any similar experience?

Regarding rsync, the idea is to launch one rsync process per physical node
to back up the virtual machines it hosts. But if you expect rsync to take
the whole day... do you mean Gluster geo-replication would take 24 hours
too?
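A per-host pass along those lines might look like the following (illustrative sketch only; both paths are hypothetical placeholders, and the images must be snapshotted or the VMs paused first to get a consistent copy):

```shell
#!/bin/sh
# Sync this host's local VM images into its own directory on the shared
# Gluster backup mount. Paths below are placeholders, not real oVirt paths.
SRC=/data/local-images/
DST=/rhev/data-center/mnt/glusterSD/backupvol/$(hostname -s)/
mkdir -p "$DST"
# -a preserves attributes; --inplace rewrites only changed blocks, which
# keeps incremental runs cheap for large disk images; --partial resumes
# interrupted transfers
rsync -a --inplace --partial "$SRC" "$DST"
```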

Thanks a lot


2016-11-23 11:02 GMT+01:00 Pavel Gashev :

> Oscar,
>
>
>
> I’d make two notes:
>
>
>
> 1. Local storage datacenter doesn’t support multiple hosts. If you have
> multiple hosts, you have to have shared storage, even if it’s a
> hyper-converged setup.
>
>
>
> 2. In your case most of disk and network performance would be used by
> backups. And a backup cycle would take more than 24 hours. Even rsync would
> take much resources since it has to at least read the whole disk images.
>
>
>
> I’d recommend a scenario with a dedicated shared storage that supports
> snapshots.
>
>
>
>
>
> *From: * on behalf of Oscar Segarra <
> oscar.sega...@gmail.com>
> *Date: *Wednesday 23 November 2016 at 03:11
> *To: *Yaniv Dary 
> *Cc: *users 
> *Subject: *Re: [ovirt-users] Storage questions
>
>
>
> Hi,
>
>
>
> As oVirt makes it possible to attach local storage, I suppose it can be used
> to run virtual machines:
>
>
>
> I have drawn a couple of diagrams in order to know if it is possible to
> set up this configuration:
>
>
>
> 1.- In on-going scenario:
>
> Every host runs 100 vdi virtual machines whose disks are placed on local
> storage. There is a common gluster volume shared between all nodes.
>
>
>
> [image: Imágenes integradas 1]
>
>
>
> 2.- If one node fails:
>
>
>
> [image: Imágenes integradas 2]
>
>
>
> oVirt has to be able to inventory the copy of machines (in our example
> vdi201 ... vdi300) and start them on remaining nodes.
>
>
>
> Is it possible to reach this configuration with oVirt? Or something
> similar?
>
>
>
> Making backup with the import-export procedure based on snapshot can take
> lot of time and resources. Incremental rsync is cheaper in terms of
> resources.
>
>
>
> Thanks lot.
>
>
>
>
>
> 2016-11-22 10:49 GMT+01:00 Yaniv Dary :
>
> I suggest you setup that environment and test for the performance and
> report if you have issues.
>
> Please note that currently there is no data locality guarantee, so a VM
> might be running on a host that doesn't have its disks.
>
>
>
> We have APIs to do backup\restore and that is the only supported option
> for backup:
>
> https://www.ovirt.org/develop/release-management/features/
> storage/backup-restore-api-integration/
>
> You can look at the Gluster DR option that was posted a while back, you
> can look that up.
>
> It used geo replication and import storage domain to do the DR.
>
>
>
>
> Yaniv Dary
>
> Technical Product Manager
>
> Red Hat Israel Ltd.
>
> 34 Jerusalem Road
>
> Building A, 4th floor
>
> Ra'anana, Israel 4350109
>
>
>
> Tel : +972 (9) 7692306
>
> 8272306
>
> Email: yd...@redhat.com
>
> IRC : ydary
>
>
>
> On Mon, Nov 21, 2016 at 5:17 PM, Oscar Segarra 
> wrote:
>
> Hi,
>
>
>
> I'm planning to deploy a scalable VDI infrastructure where each physical
> host can run over 100 VDIs, and I'd like to deploy 10 physical hosts (1000
> VDIs).
>
>
>
> In order to avoid performance problems (I think replicating 1000 VDIs'
> changes over the Gluster network can provoke performance problems) I have
> thought of using local storage for VDI, assuming that VDIs cannot be
> migrated between physical hosts.
>
>
>
> Is my worry founded in terms of performance?
>
> Is it possible to use local SSD storage for VDIs?
>
>
>
> I'd like to configure a gluster volume for backup on rotational disks
> (tiered+replica 2+ stripe=2) just to provide HA if a physical host fails.
>
>
>
> Is it possible to use rsync for backing up VDIs?
>
> If not, how can I sync/back up the VDIs running on local storage to the
> gluster shared storage?
>
> If a physical host fails, how can I start the latest backup of the VDI on
> the shared gluster?
>
>
>
> Thanks a lot
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>

Re: [ovirt-users] OVN Provider setup issues

2016-11-23 Thread Marcin Mirecki
Andrea,

Please check if the network is attached to the cluster.

Thanks,
Marcin

- Original Message -
> From: "Andrea Fagiani" 
> To: users@ovirt.org
> Cc: "Dan Kenigsberg" , "Lance Richardson" 
> , mmire...@redhat.com
> Sent: Wednesday, November 23, 2016 11:02:10 AM
> Subject: Re: [ovirt-users] OVN Provider setup issues
> 
> Hi Dan,
> 
> I was able to setup the OVN external provider building and loading the
> updated OVS kernel module; I am currently running it on all 5 hosts,
> ovs-vsctl shows all the tunnels correctly instantiated.
> 
> However, after importing the provider into the oVirt engine and setting
> up a vNic profile, I cannot assign it to any VM; it doesn't show up in
> the vNic profiles list.
> 
> Any suggestions?
> 
> Thanks,
> Andrea
> 
> 
> On 18/11/2016 12:33, Dan Kenigsberg wrote:
> > On Fri, Nov 18, 2016 at 09:13:53AM +0100, Andrea Fagiani wrote:
> >> Hi Lance,
> >>
> >> thanks, I have currently deployed oVirt using the oVirt Node images, so
> >> indeed I would like to avoid updating;
> >> out of curiosity, is there actually a beta/pre-release version of the node
> >> available?
> > I'm afraid that such version would be available only after the release
> > of centos7.3 and ovirt-4.1-beta. Now we're still speaking about
> > master-branch experiments.
> >
> >> I have since reinstalled the host to perform further testing but I'll give
> >> it a shot as soon as soon as I find the time.
> > We'd love to hear how that works for you.
> >
> > Regards,
> > Dan.
> 
> 


[ovirt-users] Released update for 4.0.5?

2016-11-23 Thread Gianluca Cecchi
Hello,
in the webadmin GUI I see a message on my 4.0.5 hosts that updates are
available.

If I run yum update in my plain CentOS 7.2 hosts I get:

1) vdsm packages
 vdsm                 x86_64  4.18.15.3-1.el7.centos  ovirt-4.0  688 k
 vdsm-api             noarch  4.18.15.3-1.el7.centos  ovirt-4.0   53 k
 vdsm-cli             noarch  4.18.15.3-1.el7.centos  ovirt-4.0   67 k
 vdsm-gluster         noarch  4.18.15.3-1.el7.centos  ovirt-4.0   53 k
 vdsm-hook-vmfex-dev  noarch  4.18.15.3-1.el7.centos  ovirt-4.0  6.6 k
 vdsm-infra           noarch  4.18.15.3-1.el7.centos  ovirt-4.0   12 k
 vdsm-jsonrpc         noarch  4.18.15.3-1.el7.centos  ovirt-4.0   25 k
 vdsm-python          noarch  4.18.15.3-1.el7.centos  ovirt-4.0  602 k
 vdsm-xmlrpc          noarch  4.18.15.3-1.el7.centos  ovirt-4.0   25 k
 vdsm-yajsonrpc       noarch  4.18.15.3-1.el7.centos  ovirt-4.0   27 k

Do these updates address anything in particular?

2) gluster packages

 glusterfs                  x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  483 k
 glusterfs-api              x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37   87 k
 glusterfs-cli              x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  180 k
 glusterfs-client-xlators   x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  857 k
 glusterfs-fuse             x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  130 k
 glusterfs-geo-replication  x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  206 k
 glusterfs-libs             x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  355 k
 glusterfs-server           x86_64  3.7.17-1.el7  ovirt-4.0-centos-gluster37  1.4 M

Currently I have 3.7.16-1.el7.x86_64
How should I manage Gluster updates in a Gluster environment with 3 hosts?
Separately from the vdsm updates, or in the same run?
Any hints/caveats?
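One common pattern is a rolling update, one host at a time (hedged sketch; the volume name is a placeholder, and each host should be put into maintenance in the engine first so it runs no VMs and no bricks are in use by guests):

```shell
# On the host in maintenance: vdsm and gluster can go in the same run
yum update 'vdsm*' 'glusterfs*'
# Restart the gluster management daemon to pick up the new version
# (brick processes may also need a restart, e.g. via a volume restart)
systemctl restart glusterd
# Wait until self-heal reports no pending entries before the next host
gluster volume heal myvol info
```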

Thanks
Gianluca


Re: [ovirt-users] How to notify cluster nodes after "engine-config --set IPTablesConfigSiteCustom..." ?

2016-11-23 Thread Yedidyah Bar David
On Wed, Nov 23, 2016 at 12:02 PM,   wrote:
> Hmm. I just rebooted the host, but the iptables rules have not been updated :(
>
> On Engine server my custom iptables rules are visible:
>
> # engine-config --get IPTablesConfigSiteCustom
>
> IPTablesConfigSiteCustom:
> -A INPUT -p tcp --dport 2301 -j ACCEPT -m comment --comment 'HPE System 
> Management Homepage'
> -A INPUT -p tcp --dport 2381 -j ACCEPT -m comment --comment 'HPE System 
> Management Homepage (Secure port)'
>  version: general
>
> How to update the configuration on the hosts ?
>
> 23.11.2016, 11:30, "aleksey.maksi...@it-kb.ru" :
>> Hello oVirt guru`s !
>>
>> oVirt Engine Version: 4.0.5.5-1.el7.centos
>>
>> I updated the configuration of the firewall on the Engine server with 
>> "engine-config --set IPTablesConfigSiteCustom...".
>> How to notify cluster nodes (all virtualization hosts) about the changes 
>> without reboot?

Please check the other thread here "[ovirt-users] Hook to add firewall
rules". Thanks.

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



-- 
Didi


[ovirt-users] Cannot remove disk error for ovirt-image-repository VMs

2016-11-23 Thread Gianluca Cecchi
Hello,
I am in 4.0.5 with 3 hosts, Gluster and self hosted engine.
If I create a VM from an ISO and install the OS, I can then delete the VM
and its related disks without errors.
If I do the same creating a template (or directly a VM) from the CentOS 7
Atomic Host Image in ovirt-image-repository, I get these events in sequence
when I delete the VM:

10:47:29 VM atomic was successfully removed.
10:47:54 VDSM hosted_engine_1 command failed: Could not remove all image's
volumes
10:50:05 Refresh image list succeeded for domain(s): ovirt-image-repository
(All file type)

I tried many times with the same behavior.
The VM has been removed from the web admin GUI.
All disks in the "Disks" tab are marked as "OK".

Any commands to check actual integrity... in db and filesystems...?
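For a read-only sanity check on the engine side, something like this might help (hedged sketch; the table and column names are from my reading of the 4.0 engine schema and should be verified before relying on them — status 1 is OK, 2 LOCKED, 4 ILLEGAL, as far as I recall):

```shell
# On the engine machine, as root: list image rows not in the OK state
su - postgres -c "psql engine -c \"SELECT image_guid, imagestatus FROM images WHERE imagestatus <> 1;\""
# And check for leftover async tasks still recorded in the database
su - postgres -c "psql engine -c \"SELECT task_id, action_type FROM async_tasks;\""
```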

Basic messages in engine.log:
2016-11-23 09:47:54,860 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler10) [53f3be13] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VDSM hosted_engine_1 command failed:
Could not remove all image's volumes
2016-11-23 09:47:54,860 INFO
 [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler10)
[53f3be13] SPMAsyncTask::PollTask: Polling task
'555e7dd0-dc32-4cf7-be10-d469fc8b2f8d' (Parent Command 'RemoveVm',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
status 'finished', result 'cleanSuccess'.
2016-11-23 09:47:54,880 ERROR
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler10)
[53f3be13] BaseAsyncTask::logEndTaskFailure: Task
'555e7dd0-dc32-4cf7-be10-d469fc8b2f8d' (Parent Command 'RemoveVm',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with
failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed in vdscommand
to HSMGetAllTasksStatusesVDS, error = Could not remove all image's volumes',
-- Exception: 'VDSGenericException: VDSErrorException: Failed in vdscommand
to HSMGetAllTasksStatusesVDS, error = Could not remove all image's volumes'

full files:

engine.log in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvQlVwVDlGTVEtR00/view?usp=sharing

vdsm.log of related host in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvdDFFOEhTQ3o1ZXM/view?usp=sharing

supervdsm.log in gzip format
https://drive.google.com/file/d/0BwoPbcrMv8mvbE5ZdXMyc0w1S1U/view?usp=sharing

Gianluca


Re: [ovirt-users] How to notify cluster nodes after "engine-config --set IPTablesConfigSiteCustom..." ?

2016-11-23 Thread aleksey . maksimov
Hmm. I just rebooted the host, but the iptables rules have not been updated :(

On Engine server my custom iptables rules are visible:

# engine-config --get IPTablesConfigSiteCustom

IPTablesConfigSiteCustom:
-A INPUT -p tcp --dport 2301 -j ACCEPT -m comment --comment 'HPE System 
Management Homepage'
-A INPUT -p tcp --dport 2381 -j ACCEPT -m comment --comment 'HPE System 
Management Homepage (Secure port)'
 version: general

How to update the configuration on the hosts ?

23.11.2016, 11:30, "aleksey.maksi...@it-kb.ru" :
> Hello oVirt guru`s !
>
> oVirt Engine Version: 4.0.5.5-1.el7.centos
>
> I updated the configuration of the firewall on the Engine server with 
> "engine-config --set IPTablesConfigSiteCustom...".
> How to notify cluster nodes (all virtualization hosts) about the changes 
> without reboot?


[ovirt-users] How to notify cluster nodes after "engine-config --set IPTablesConfigSiteCustom..." ?

2016-11-23 Thread aleksey.maksi...@it-kb.ru
Hello oVirt guru`s !

oVirt Engine Version: 4.0.5.5-1.el7.centos

I updated the configuration of the firewall on the Engine server with 
"engine-config --set IPTablesConfigSiteCustom...".
How to notify cluster nodes (all virtualization hosts) about the changes 
without reboot?


Re: [ovirt-users] Storage questions

2016-11-23 Thread Oscar Segarra
> "possible unrecoverable gluster bugs" is a sweeping statement. Do you have
> any particular issue that you can refer us to?

No, I haven't experienced any issue, but if one appears under heavy load,
in this environment it could leave 1000 VDIs out of service (or 1000
people without their workplace).

Once all questions are clarified: is it possible to achieve this
architecture (or something similar) with oVirt?

Do you have any customer who has run a Gluster environment for heavy-load
VDI?

Thanks a lot.


2016-11-23 9:01 GMT+01:00 Sahina Bose :

>
>
> On Wed, Nov 23, 2016 at 1:18 PM, Oscar Segarra 
> wrote:
>
>> Hi,
>>>
>>> As oVirt makes it possible to attach local storage, I suppose it can be
>>> used to run virtual machines:
>>>
>>> I have drawn a couple of diagrams in order to know if it is possible to
>>> set up this configuration:
>>>
>>> 1.- In on-going scenario:
>>> Every host runs 100 vdi virtual machines whose disks are placed on local
>>> storage. There is a common gluster volume shared between all nodes.
>>>
>>> [image: Imágenes integradas 1]
>>>
>>
>> With local storage you end up losing many of the benefits of shared
>> storage - including migration and HA.
>> If you do have SSD on your physical hosts, have you considered building
>> gluster volume using these? This could give you improved performance.
>> Regarding performance, I think it is best that you run a test comparing
>> gluster storage performance with local storage and see if this is
>> acceptable to you. Please share the results in case you do.
>>
>> Yes, but I want to avoid possible corruption problems due to possible
>> unrecoverable gluster bugs.
>> We have to do some development and I don't want to spend money in this
>> process and then discover that the performance is not good enough and have
>> to do a
>>
>
> "possible unrecoverable gluster bugs" is a sweeping statement. Do you have
> any particular issue that you can refer us to?
>
>
>>
>>
>> In the above diagram each host is in its own cluster - as all hosts in a
>> cluster should have access to the storage domain?
>>
>> Yes, every host has to have access to two storage domains: the local one
>> and the shared gluster one.
>>
>> Is the gluster volume for backup served from a separate set of server?
>>
>> No, each host will have 2 disks: /dev/sdb1 (for running VMs on local
>> storage) and /dev/sdc1 (for the shared gluster volume where backups are stored)
>>
>>
>>>
>>> 2.- If one node fails:
>>>
>>> [image: Imágenes integradas 2]
>>>
>>> oVirt has to be able to inventory the copy of machines (in our example
>>> vdi201 ... vdi300) and start them on remaining nodes.
>>>
>>> Is it possible to reach this configuration with oVirt? Or something
>>> similar?
>>>
>>
>> This is the use case for gluster volume shared storage - where volume is
>> a replica 3. If any host goes down, the data is available on the remaining
>> 2 nodes, and the VMs can be migrated to other nodes.
>>
>> Yes, I know, but I'm already worried about corruption issues due to
>> possible gluster bugs or performance problems under heavy load.
>>
>> I don't think what you ask for is possible automatically. If you want
>> local storage to gluster volume backup, you would need 1-1 mapping. i.e
>> each local storage domain has its own gluster volume backup.You could then
>> import the storage domain that's backed up on the gluster volume and start
>> the VMs on the remaining hosts.
>>
>> I don't want local storage for backup, I prefer gluster shared storage
>> for backup.
>>
>>
>>> Making backup with the import-export procedure based on snapshot can
>>> take lot of time and resources. Incremental rsync is cheaper in terms of
>>> resources.
>>>
>>
>> Geo-replication based backup internally uses rsync, it also takes into
>> account that VM images are consistent on disk before being synced. It
>> however works as a backup option between two gluster volumes.
>>
>> Do you know if it is possible to have multiple masters geo-replicating
>> against a single slave?
>>
>
> No, it is not possible. A master can have multiple slaves, not the other
> way around.
>
>
>>
>> Thanks a lot.
>>
>
>


Re: [ovirt-users] Storage questions

2016-11-23 Thread Sahina Bose
On Wed, Nov 23, 2016 at 1:18 PM, Oscar Segarra 
wrote:

> Hi,
>>
>> As oVirt makes it possible to attach local storage, I suppose it can be
>> used to run virtual machines:
>>
>> I have drawn a couple of diagrams in order to know if it is possible to
>> set up this configuration:
>>
>> 1.- In on-going scenario:
>> Every host runs 100 vdi virtual machines whose disks are placed on local
>> storage. There is a common gluster volume shared between all nodes.
>>
>> [image: Imágenes integradas 1]
>>
>
> With local storage you end up losing many of the benefits of shared
> storage - including migration and HA.
> If you do have SSD on your physical hosts, have you considered building
> gluster volume using these? This could give you improved performance.
> Regarding performance, I think it is best that you run a test comparing
> gluster storage performance with local storage and see if this is
> acceptable to you. Please share the results in case you do.
>
> Yes, but I want to avoid possible corruption problems due to possible
> unrecoverable gluster bugs.
> We have to do some development and I don't want to spend money in this
> process and then discover that the performance is not good enough and have
> to do a
>

"possible unrecoverable gluster bugs" is a sweeping statement. Do you have
any particular issue that you can refer us to?


>
>
> In the above diagram each host is in its own cluster - as all hosts in a
> cluster should have access to the storage domain?
>
> Yes, every host has to have access to two storage domains: the local one
> and the shared gluster one.
>
> Is the gluster volume for backup served from a separate set of server?
>
> No, each host will have 2 disks: /dev/sdb1 (for running VMs on local storage)
> and /dev/sdc1 (for the shared gluster volume where backups are stored)
>
>
>>
>> 2.- If one node fails:
>>
>> [image: Imágenes integradas 2]
>>
>> oVirt has to be able to inventory the copy of machines (in our example
>> vdi201 ... vdi300) and start them on remaining nodes.
>>
>> Is it possible to reach this configuration with oVirt? Or something
>> similar?
>>
>
> This is the use case for gluster volume shared storage - where volume is a
> replica 3. If any host goes down, the data is available on the remaining 2
> nodes, and the VMs can be migrated to other nodes.
>
> Yes, I know, but I'm already worried about corruption issues due to
> possible gluster bugs or performance problems under heavy load.
>
> I don't think what you ask for is possible automatically. If you want
> local storage to gluster volume backup, you would need 1-1 mapping. i.e
> each local storage domain has its own gluster volume backup.You could then
> import the storage domain that's backed up on the gluster volume and start
> the VMs on the remaining hosts.
>
> I don't want local storage for backup, I prefer gluster shared storage for
> backup.
>
>
>> Making backup with the import-export procedure based on snapshot can take
>> lot of time and resources. Incremental rsync is cheaper in terms of
>> resources.
>>
>
> Geo-replication based backup internally uses rsync, it also takes into
> account that VM images are consistent on disk before being synced. It
> however works as a backup option between two gluster volumes.
>
> Do you know if it is possible to have multiple masters geo-replicating
> against a single slave?
>

No, it is not possible. A master can have multiple slaves, not the other
way around.
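For reference, a one-to-one session between a master volume and a single slave volume is created roughly like this (hedged sketch; volume and host names are placeholders, and passwordless SSH from a master node to the slave host is a prerequisite):

```shell
# Create, start, and monitor a geo-replication session from mastervol
# on the local cluster to slavevol served by slavehost
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status
```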


>
> Thanks a lot.
>