Re: [ovirt-users] Ansible oVirt storage management module

2016-09-21 Thread Groten, Ryan
Hey guys,

Just wondering if this PR is the oVirt module you are referring to?  
https://github.com/ansible/ansible-modules-extras/pull/2836

If so, it looks like it can replace (and then some!) the module I opened a while 
back, so I can cancel that PR.

Thanks,
Ryan

From: Oved Ourfali [mailto:oourf...@redhat.com]
Sent: Thursday, July 28, 2016 12:40 AM
To: Groten, Ryan <ryan.gro...@stantec.com>; Juan Antonio Hernandez Fernandez 
<jhern...@redhat.com>; Ondra Machacek <omach...@redhat.com>
Cc: Yaniv Kaul <yk...@redhat.com>; users <users@ovirt.org>
Subject: Re: [ovirt-users] Ansible oVirt storage management module

Hi

Adding some relevant folks here.
Until our module is ready I don't think there should be an issue with pushing 
this.
However, there might be a name collision in the future.

Juan/Ondra - thoughts?

Thanks,
Oved


On Wed, Jul 27, 2016 at 8:07 PM, Groten, Ryan 
<ryan.gro...@stantec.com> wrote:
Thanks for the feedback Yaniv, I’d love to see a python module for ovirt 
actions.  What I’ve built is simple and a bit limited to the features I needed, 
so if we can implement an existing library that has all the features and is 
proven to work then the automation tasks would be much simpler.
And of course, a supported Ansible module would be even better, it would 
probably get more traction towards being added as an extra or core module that 
way too.

If you think this would be a valuable addition to the extra modules that ship 
with Ansible (at least temporarily), I’d appreciate if you comment ‘shipit’ on 
the pull request to mark it for inclusion!

Thanks,
Ryan

From: Yaniv Kaul [mailto:yk...@redhat.com]
Sent: Sunday, July 24, 2016 4:52 AM
To: Groten, Ryan <ryan.gro...@stantec.com>
Cc: users <users@ovirt.org>
Subject: Re: [ovirt-users] Ansible oVirt storage management module



On Wed, Jul 20, 2016 at 12:00 AM, Groten, Ryan 
<ryan.gro...@stantec.com> wrote:
Hey Ansible users,

I wrote a module for storage management and created a pull request to have it 
added as an Extra module in Ansible.  It can be used to 
create/delete/attach/destroy pool disks.

https://github.com/ansible/ansible-modules-extras/pull/2509

Ryan

Hi Ryan,

This looks really interesting and surely is useful.
My only comment would be that I think we should start to think about some 
Python module for oVirt actions.
Otherwise, every project (this, ovirt-system-tests, others) that uses the oVirt 
Python SDK for automation more or less re-implements the same functions.
What do you think?

Also, we are thinking about auto-generating such an Ansible playbook (the same 
way the SDKs are generated).
It might look less 'human', but it will always be complete and up-to-date with 
all features.
Thanks,
Y.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ansible oVirt storage management module

2016-07-27 Thread Groten, Ryan
Thanks for the feedback Yaniv, I’d love to see a python module for ovirt 
actions.  What I’ve built is simple and a bit limited to the features I needed, 
so if we can implement an existing library that has all the features and is 
proven to work then the automation tasks would be much simpler.
And of course, a supported Ansible module would be even better, it would 
probably get more traction towards being added as an extra or core module that 
way too.

If you think this would be a valuable addition to the extra modules that ship 
with Ansible (at least temporarily), I’d appreciate if you comment ‘shipit’ on 
the pull request to mark it for inclusion!

Thanks,
Ryan

From: Yaniv Kaul [mailto:yk...@redhat.com]
Sent: Sunday, July 24, 2016 4:52 AM
To: Groten, Ryan <ryan.gro...@stantec.com>
Cc: users <users@ovirt.org>
Subject: Re: [ovirt-users] Ansible oVirt storage management module



On Wed, Jul 20, 2016 at 12:00 AM, Groten, Ryan 
<ryan.gro...@stantec.com> wrote:
Hey Ansible users,

I wrote a module for storage management and created a pull request to have it 
added as an Extra module in Ansible.  It can be used to 
create/delete/attach/destroy pool disks.

https://github.com/ansible/ansible-modules-extras/pull/2509

Ryan

Hi Ryan,

This looks really interesting and surely is useful.
My only comment would be that I think we should start to think about some 
Python module for oVirt actions.
Otherwise, every project (this, ovirt-system-tests, others) that uses the oVirt 
Python SDK for automation more or less re-implements the same functions.
What do you think?

Also, we are thinking about auto-generating such an Ansible playbook (the same 
way the SDKs are generated).
It might look less 'human', but it will always be complete and up-to-date with 
all features.
Thanks,
Y.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ansible oVirt storage management module

2016-07-19 Thread Groten, Ryan
Hey Ansible users,

I wrote a module for storage management and created a pull request to have it 
added as an Extra module in Ansible.  It can be used to 
create/delete/attach/destroy pool disks.

https://github.com/ansible/ansible-modules-extras/pull/2509

Ryan
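For readers who find this thread later, a task using such a module would look 
roughly like this (the module name `ovirt_storage` and its parameters here are 
illustrative assumptions based on the PR description, not the PR's actual 
interface -- see the pull request itself for the real options):

```yaml
# Hypothetical task -- module name and parameters are assumptions
- name: Create a disk and attach it to a VM
  ovirt_storage:
    url: https://engine.example.com/api
    username: admin@internal
    password: "{{ engine_password }}"
    name: app_disk_1
    size: 10
    storage_domain: fc_domain_1
    vm_name: app_vm_1
    state: present
```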

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Adding direct lun from API doesn't populate attributes like size, vendor, etc

2015-10-16 Thread Groten, Ryan
Using this python I am able to create a direct FC lun properly (and it works if 
the lun_id is valid).  But in the GUI after the disk is added none of the 
fields are populated except LUN ID (Size is <1GB, Serial, Vendor, Product ID 
are all blank).

This Bugzilla [1] looks very similar (for iSCSI); it says the issue was fixed in 
3.5.0, but the problem seems to still be present in 3.5.1, at least for Fibre 
Channel direct LUNs.

Here's the python I used to test:

# oVirt Python SDK v3 -- the API connection is assumed to already exist,
# e.g. api = ovirtsdk.api.API(url=..., username=..., password=...)
from ovirtsdk.xml import params

lun_id = '3600a098038303053453f463045727654'
lu = params.LogicalUnit()
lu.set_id(lun_id)
lus = [lu]

storage_params = params.Storage()
storage_params.set_id(lun_id)
storage_params.set_logical_unit(lus)
storage_params.set_type('fcp')

disk_params = params.Disk()
disk_params.set_format('raw')
disk_params.set_interface('virtio')
disk_params.set_alias(disk_name)  # disk_name is set earlier
disk_params.set_active(True)
disk_params.set_lun_storage(storage_params)
disk = api.disks.add(disk_params)


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1096217

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Moving HostedEngine from one Cluster to another

2015-08-25 Thread Groten, Ryan
My HostedEngine exists in the Default cluster, but since I'm upgrading my hosts 
to RHEL7 I created a new Cluster and migrated all the VMs to it (including 
HostedEngine).  However in the GUI VM Tab HostedEngine still appears as in the 
Default Cluster.  Because of this I can't remove this cluster (it thinks 
there's a VM in it) even though there are no more hosts or VMs in it.

I also can't change the cluster of HostedEngine; it says "Cannot edit VM 
Cluster. This VM is not managed by the engine."

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Feature request: add stateless option in VM pool configuration

2015-08-24 Thread Groten, Ryan
As a workaround, if you create the Pool using the latest version of a 
template, all the VMs in that pool will automatically be stateless.

-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Steve Dainard
Sent: Friday, August 21, 2015 5:12 PM
To: users
Subject: Re: [ovirt-users] Feature request: add stateless option in VM pool 
configuration

I should also mention that I can't start multiple pool VM's at the same time 
with run-once from the admin portal, so I'd have to individually run once each 
VM which is a member of the pool.

On Fri, Aug 21, 2015 at 3:57 PM, Steve Dainard sdain...@spd1.com wrote:
 I'd like to request a setting in VM pools which forces the VM to be stateless.

 I see from the docs this will occur if selected under run once, or 
 started from the user portal, but it would be nice to set this for a 
 normal admin start of the VM(s) as well.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrating hosts from RHEL6 to RHEL7

2015-08-21 Thread Groten, Ryan
Thanks for the responses guys.  Do we know why it's recommended to power the 
VMs off and then move them to the new cluster?  Especially if live migration 
seems to work anyway.

From: matthew lagoe [mailto:matthew.la...@subrigo.net]
Sent: Thursday, August 20, 2015 3:25 PM
To: Groten, Ryan
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Migrating hosts from RHEL6 to RHEL7

It is possible to migrate a VM from 6 to 7. It is recommended, however, that you 
don't do a live migration: power off the VM and then turn it back on on the 
cluster running version 7.

Groten, Ryan wrote:
Has anyone succeeded in upgrading their hosts OS version from 6 to 7?  I 
assumed it could be done without downtime and one host at a time, but when 
trying it out I found that RHEL7 hosts can't be placed in the same Cluster as 
RHEL6 ones.
I then tried making a new Cluster and migrating VMs from the RHEL6 cluster to 
RHEL7.  Initial testing seems to show that it works from RHEL6 to RHEL7 but not 
the other way around.  Also, when I select this option a warning pops up saying 
"Choosing different cluster may lead to unexpected results.  Please consult 
documentation."

I looked in the Admin Guide and Technical Reference Guide but don't see where 
these unexpected results are mentioned.

Thanks,
Ryan Groten
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Migrating hosts from RHEL6 to RHEL7

2015-08-20 Thread Groten, Ryan
Has anyone succeeded in upgrading their hosts OS version from 6 to 7?  I 
assumed it could be done without downtime and one host at a time, but when 
trying it out I found that RHEL7 hosts can't be placed in the same Cluster as 
RHEL6 ones.
I then tried making a new Cluster and migrating VMs from the RHEL6 cluster to 
RHEL7.  Initial testing seems to show that it works from RHEL6 to RHEL7 but not 
the other way around.  Also, when I select this option a warning pops up saying 
"Choosing different cluster may lead to unexpected results.  Please consult 
documentation."

I looked in the Admin Guide and Technical Reference Guide but don't see where 
these unexpected results are mentioned.

Thanks,
Ryan Groten
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Benefits of Cluster separation

2015-08-18 Thread Groten, Ryan
We only have about 200 VMs, but we’ve run into this bug [1] because we use a 
high number of direct-attach LUNs on our VMs.  It takes about 5 minutes to scan 
all the Fibre Channel LUNs (the same issue exists with iSCSI), and the scan used 
to time out before completing (until we raised the value of vdsTimeout).

Also because of the way our FC storage is presented, we have about 1,400 
multipath devices presented to each host.  This contributes to really long scan 
times and I think causes some general slowness on the administration side and 
“Bad Request” 500 errors when running some storage commands from the API.  VMs 
themselves still run just fine with no noticeable performance issues, though.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1217401



From: Patrick Russell [mailto:patrick_russ...@volusion.com]
Sent: Tuesday, August 18, 2015 3:07 PM
To: Matthew Lagoe
Cc: Groten, Ryan; users@ovirt.org
Subject: Re: [ovirt-users] Benefits of Cluster separation

Can I ask at what scale you’re running into issues? We’ve got about 500 VM’s 
running now in a single cluster.

-Patrick

On Aug 18, 2015, at 4:03 PM, Matthew Lagoe 
<matthew.la...@subrigo.net> wrote:

You can have different cluster policies at least; I don’t know what other 
benefits there are, however, as I haven’t noticed any.

From: users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] On Behalf Of Groten, Ryan
Sent: Tuesday, August 18, 2015 01:59 PM
To: users@ovirt.org
Subject: [ovirt-users] Benefits of Cluster separation

We’re running into some performance problems stemming from having too many 
Hosts/VMs/Disks running from the same Datacenter/Cluster.  Because of that I’m 
looking into splitting the DC into multiple separate ones with different 
Hosts/Storage.
But I’m a little confused what the benefit of separating hosts into clusters 
achieves.  Can someone please explain what the common use cases are?  Since all 
the clusters in a DC seem to need to see the same storage, I don’t think it 
would help my situation anyway.

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Benefits of Cluster separation

2015-08-18 Thread Groten, Ryan
We're running into some performance problems stemming from having too many 
Hosts/VMs/Disks running from the same Datacenter/Cluster.  Because of that I'm 
looking into splitting the DC into multiple separate ones with different 
Hosts/Storage.
But I'm a little confused what the benefit of separating hosts into clusters 
achieves.  Can someone please explain what the common use cases are?  Since all 
the clusters in a DC seem to need to see the same storage, I don't think it 
would help my situation anyway.

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM time offset

2015-08-10 Thread Groten, Ryan
I'm having the same issue, where the guest time is offset by 7 hours (our 
timezone difference) from UTC.  I read in the VM System configuration for Time 
Zone that hwclock on Linux guests should have the TZ set to GMT+0, but if I 
change it to GMT-7, the clock is set as expected on boot.

-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Aslam, Usman
Sent: Friday, August 07, 2015 3:23 PM
To: users@ovirt.org
Subject: [ovirt-users] VM time offset

Hi guys,

I'm having an issue where the VMs do not keep correct time upon reboot. The 
clock is always off by ~4 hours.
I've checked and time and time zone on engine/host nodes is correct. Correct 
time zone is specified in the ovirt vm configuration as well.

The VM in question is a clean CentOS 7 install. It did not have NTP enabled. 
I've tried setting it up with a local server and no dice. Upon reboot it always 
gets messed up.

[root@xyz-dev ~]# date
Fri Aug  7 17:18:09 EDT 2015
[root@xyz-dev ~]# reboot
Connection to sso-dev closed by remote host.

[root@xyz-dev ~]# date
Fri Aug  7 13:20:47 EDT 2015
[root@xyz-dev ~]# ntpdate ntp1.xyz.domain.org
 7 Aug 17:21:05 ntpdate[2151]: step time server 130.64.25.6 offset 14410.388931 
sec

Thanks,
--Usman
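A quick sanity check on the offset: the ~14410 s step ntpdate applied above is 
almost exactly the US/Eastern daylight-time UTC offset, which fits the theory 
that the guest's hardware clock holds local time but is being read as UTC. A 
minimal sketch (assuming the America/New_York zone for EDT):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# A guest whose hardware clock stores local time but is interpreted as UTC
# boots behind by exactly the zone's UTC offset -- for EDT that is 4 hours.
edt = datetime(2015, 8, 7, 17, 18, tzinfo=ZoneInfo("America/New_York"))
offset_seconds = int(-edt.utcoffset().total_seconds())
print(offset_seconds)  # 14400, close to the 14410 s step ntpdate applied
```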
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] virt-v2v

2015-07-30 Thread Groten, Ryan
This doesn’t answer your question directly, but I never had any luck using 
virt-v2v from VMWare.  I found it worked well to treat the VMWare VM just like 
a physical server, boot it from the virt-v2v iso and convert the VMWare VM that 
way.

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Will K
Sent: Thursday, July 30, 2015 3:58 PM
To: users@ovirt.org
Subject: [ovirt-users] virt-v2v

Hi

I have a project to convert VMs on VMware ESX 5.5 and some VM on standalone KVM 
server to oVirt.  I started to look into virt-v2v.  I wonder if I'm hitting the 
right list.  Please let me know if this list doesn't cover virt-v2v.

Issue:
when I run the following command on one of two hosts running oVirt 3.3.3-2-el6
virt-v2v  -ic esx://esxserver1/?no_verify=1 -os GFS1 virtmachine1

I got:
virt-v2v: Failed to connect to qemu:///system: libvirt error code: 45, message: 
authentication failed: Failed to step SASL negotiation: -7 (SASL(-7): invalid 
parameter supplied: Unexpectedly missing a prompt result)

I already added a .netrc with 600 permissions and the correct oVirt login and 
password.  I also ran saslpasswd2 as root.

Thanks

Will


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Concerns with increasing vdsTimeout value on engine?

2015-07-13 Thread Groten, Ryan
Thanks for the responses everyone and for the RFE.  I do use HA in some places 
at the moment, but I do see another timeout value called vdsConnectionTimeout.  
Would HA use this value or vdsTimeout (set to 2 by default) when attempting to 
contact the host?

-Original Message-
From: Shubhendu Tripathi [mailto:shtri...@redhat.com] 
Sent: Monday, July 13, 2015 2:25 AM
To: Piotr Kliczewski
Cc: Omer Frenkel; Groten, Ryan; users@ovirt.org
Subject: Re: [ovirt-users] Concerns with increasing vdsTimeout value on engine?

On 07/13/2015 01:42 PM, Piotr Kliczewski wrote:
 On Mon, Jul 13, 2015 at 5:57 AM, Shubhendu Tripathi shtri...@redhat.com 
 wrote:
 On 07/12/2015 09:53 PM, Omer Frenkel wrote:

 - Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Ryan Groten ryan.gro...@stantec.com
 Cc: users@ovirt.org
 Sent: Sunday, July 12, 2015 5:44:28 PM
 Subject: Re: [ovirt-users] Concerns with increasing vdsTimeout 
 value on engine?



 - Original Message -
 From: Ryan Groten ryan.gro...@stantec.com
 To: users@ovirt.org
 Sent: Friday, July 10, 2015 10:45:11 PM
 Subject: [ovirt-users] Concerns with increasing vdsTimeout value 
 on engine?



 When I try to attach new direct lun disks, the scan takes a very 
 long time to complete because of the number of pvs presented to my 
 hosts (there is already a bug on this, related to the pvcreate 
 command taking a very long time - 
 https://bugzilla.redhat.com/show_bug.cgi?id=1217401 )



 I discovered a workaround by setting the vdsTimeout value higher 
 (it is
 180
 seconds by default). I changed it to 300 seconds and now the 
 direct lun scan returns properly, but I’m hoping someone can warn 
 me if this workaround is safe or if it’ll cause other potential 
 issues? I made this change yesterday and so far so good.

 Hi, no serious issue can be caused by that.
 Keep in mind though that any other operation will have that amount 
 of time to complete before failing on timeout - which will cause 
 delays before failing (as the timeout was increased for all
 executions)
 when not everything is operational and up as expected (as in most 
 of the time).
 I'd guess that a RFE could be opened to allow increasing the 
 timeout of specific operations if a user want to do that.

 thanks,
 Liron.
 if you have HA vms and use power management (fencing), this might 
 cause longer downtime for HA vms if host has network timeouts:
 the engine will wait for 3 network failures before trying to fence 
 the host, so in case of timeouts, and increasing it to 5mins, you 
 should expect 15mins before engine will decide host is 
 non-responsive and fence, so if you have HA vm on this host, this 
 will be the vm downtime as well, as the engine will restart HA vms 
 only after fencing.

 you can read more on
 http://www.ovirt.org/Automatic_Fencing

 I have a similar need: when I try to delete all 256 gluster volume 
 snapshots using a single gluster CLI command, the engine times out.
 So, as Liron suggested, it would be better if we were able to set the 
 timeout at the VDSM verb level. That would be the better option, and 
 callers need to use the feature judiciously :)

 Please open a RFE for being able to set operation timeout for single 
 command call with description of use cases for which you would like to 
 set the timeout.

Piotr,

I created an RFE BZ at https://bugzilla.redhat.com/show_bug.cgi?id=1242373.

Thanks and Regards,
Shubhendu

 Thanks,

 Ryan

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Concerns with increasing vdsTimeout value on engine?

2015-07-10 Thread Groten, Ryan
When I try to attach new direct lun disks, the scan takes a very long time to 
complete because of the number of pvs presented to my hosts (there is already a 
bug on this, related to the pvcreate command taking a very long time - 
https://bugzilla.redhat.com/show_bug.cgi?id=1217401)

I discovered a workaround by setting the vdsTimeout value higher (it is 180 
seconds by default). I changed it to 300 seconds and now the direct lun scan 
returns properly, but I'm hoping someone can warn me if this workaround is safe 
or if it'll cause other potential issues?  I made this change yesterday and so 
far so good.
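For reference, the change described above is made with the engine-config tool on 
the engine host; ovirt-engine must be restarted for the new value to take effect:

```shell
# Show the current value (180 seconds by default)
engine-config -g vdsTimeout

# Raise it to 300 seconds, then restart the engine to apply
engine-config -s vdsTimeout=300
service ovirt-engine restart
```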

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Help understanding Gluster in oVirt

2015-02-04 Thread Groten, Ryan
Nope, in fact I followed the guide and found CTDB works quite well.  I am just 
trying to understand the benefit, since it would be another component to 
consider in the architecture.

From: Sahina Bose [mailto:sab...@redhat.com]
Sent: Tuesday, February 03, 2015 4:09 AM
To: Groten, Ryan; users@ovirt.org
Subject: Re: [ovirt-users] Help understanding Gluster in oVirt


On 01/28/2015 08:59 AM, Groten, Ryan wrote:
I was planning on making a Gluster Data domain to test, and found some great 
information on this page: 
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
In the article, the author uses the CTDB service for VIP failover.  Is it 
possible/recommended to not do this, and just create a gluster volume on all 
the hosts in a cluster, then create the Gluster data domain as 
localhost:gluster_vol?

Theoretically, it should work - if you make sure that you have a replica 3 
gluster volume spread across 3 nodes, and these 3 nodes are your compute nodes 
as well - you should be fine without CTDB setup for failover and mounting as 
localhost.

But I've not tried this to recommend it. Maybe if others have tried it, they 
can chime in?

Btw, is there any reason you do not want to set up CTDB?



Thanks,
Ryan



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Help understanding Gluster in oVirt

2015-01-27 Thread Groten, Ryan
I was planning on making a Gluster Data domain to test, and found some great 
information on this page: 
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
In the article, the author uses the CTDB service for VIP failover.  Is it 
possible/recommended to not do this, and just create a gluster volume on all 
the hosts in a cluster, then create the Gluster data domain as 
localhost:gluster_vol?

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Recommendations for Storage Pool sizes

2014-11-18 Thread Groten, Ryan
I've been looking around for any best practices/recommendations on how large a 
storage domain can be but can't seem to find anything.

For example, right now I have a 3TB domain made up of 3 1TB luns.  That domain 
has about 200 thin disks created from it.

When I want to add more space, is there any reason I should not simply increase 
the existing pool?  Or is it better to create a new pool?

Thanks,
Ryan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Balloon driver unavailable

2014-11-17 Thread Groten, Ryan
I also recently started getting these errors.  They started when I upgraded 
from 3.4.0 to 3.4.2.
The error appears on certain VMs (but not all) consistently every 15 minutes.  
It doesn't matter whether the "Memory Balloon Device Enabled" checkbox is 
checked or unchecked.

I got the message to stop appearing by changing the value of Physical Memory 
Guaranteed to match the VM's configured memory.


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
John Gardeniers
Sent: November-16-14 10:13 PM
To: users@ovirt.org
Subject: Re: [ovirt-users] Balloon driver unavailable

Just an FYI.

In my case the balloon driver was installed and it was running. The problem was 
eventually resolved by uninstalling the entire agent suite, rebooting and 
reinstalling it. Doing the same just for the balloon driver didn't work.

regards,
John

On 13/11/14 07:35, John Gardeniers wrote:
I'm seeing it for a VM that most definitely does have the balloon driver 
installed. Care to take another guess?

regards,
John

On 12/11/14 20:04, Amedeo Salvati wrote:
You receive this error because in your cluster configuration you have checked 
"Enable Memory Balloon Optimization", and on some of your VMs the balloon 
driver is not available; if you no longer want these warning messages you can 
uncheck it under

Clusters -> (select your cluster) -> Edit -> Optimization -> uncheck "Enable 
Memory Balloon Optimization"

Best Regards
Amedeo Salvati


Date: Wed, 12 Nov 2014 07:59:33 +
From: Karli Sjöberg <karli.sjob...@slu.se>
To: <tdeme...@itsmart.hu>
Cc: users <users@ovirt.org>
Subject: Re: [ovirt-users] Balloon driver unavailable

On Wed, 2014-11-12 at 08:35 +0100, Demeter Tibor wrote:
 Hi,


 I have a lot of CentOS 6 based VMs and I have installed the oVirt guest
 agent on them.
 But two VMs always say "The balloon driver on xxxvm on hostX is
 requested but unavailable".
 I did check that the virtio_balloon module is loaded on the VMs.


 Anybody can me help?


 Thanks in advance


 Tibor

 I see that too, but only on Windows 2008 R2 guests... No one else ever
 said anything about it, so I thought it was just me:)


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine , how to make changes to the VM post deploy

2014-11-06 Thread Groten, Ryan
I went through this a couple months ago.  Migrated my hosted-engine from one 
NFS host to another.  Here are the steps that I documented from the experience. 
 There is probably a better way, but this worked for me on two separate 
hosted-engine environments.

1. Make a backup of RHEV-M
2. Migrate VMs off all hosts that run hosted-engine (except 
hosted-engine itself)
3. Put hosted-engine hosts in maintenance mode (except host that's 
running hosted-engine)
4. Put hosted-engine in global maintenance mode
5. Shutdown hosted-engine
6. Stop ovirt-ha-agent and ovirt-ha-broker services on all 
hosted-engine hosts
7. On each hosted-engine host:
a. service ovirt-ha-agent stop
b. service ovirt-ha-broker stop
c. sanlock client shutdown -f 1
d. service sanlock stop
e. umount /rhev/data-center/mnt/hosted-engine-share
f. service sanlock start
8. mount new NFS share on /hosted_tgt
9. mount old NFS share on /hosted_src
10. Copy data (make sure sparse files are kept sparse):
a. rsync --sparse -crvlp /hosted_src/* /hosted_tgt/
11. Edit /etc/ovirt-hosted-engine/hosted-engine.conf and change path:
storage=10.1.208.122:/HostedEngine_Test
12. Make sure permissions are vdsm:kvm in /hosted_tgt/
13. umount /hosted_tgt
14. umount /hosted_src
15. Pick one hosted-engine host and reboot, then run:
a. hosted-engine --connect-storage (make sure the new NFS is 
mounted properly)
b. hosted-engine --start-pool (wait a few seconds then try 
again if you get an error)
c. service ovirt-ha-broker start
d. service ovirt-ha-agent start
e. hosted-engine --vm-start


-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Frank Wall
Sent: November-06-14 8:12 AM
To: Jiri Moskovcak
Cc: users
Subject: Re: [ovirt-users] hosted engine , how to make changes to the VM post 
deploy

On Wed, Nov 05, 2014 at 08:11:39AM +0100, Jiri Moskovcak wrote:
 On 11/04/2014 03:52 PM, Alastair Neil wrote:
  So is this the workflow?
 
  set the hosted-engine maintenance to global
  shutdown the engine VM
  make changes via virsh or editing vm.conf
  sync changes to the other ha nodes
  restart the VM
  set hosted-engine maintenance to none
 
 - well, not official, because it can cause a lot of troubles, so I 
 would not recommend it unless you have a really good reason to do it.

I'd like to move my ovirt-engine VM to a new NFS storage.
I was thinking to adopt this workflow for this use-case (in combination with 
rsync to mirror the old storage). 

Do you think this would succeed or is there another (and maybe supported) way 
to move ovirt-engine to a different storage?


Regards
- Frank
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine , how to make changes to the VM post deploy

2014-11-06 Thread Groten, Ryan
Good catch, you’re right that should say “On all hosted-engine hosts, edit 
hosted-engine.conf”.  It does not automatically sync the changes between the 
hosts.

From: Alastair Neil [mailto:ajneil.t...@gmail.com]
Sent: November-06-14 11:06 AM
To: Groten, Ryan
Cc: Frank Wall; Jiri Moskovcak; users
Subject: Re: [ovirt-users] hosted engine , how to make changes to the VM post 
deploy

does the broker automatically sync the change you made in 
/etc/ovirt-hosted-engine/hosted-engine.conf to the other ha hosts or did you 
omit a step?


On 6 November 2014 11:10, Groten, Ryan 
<ryan.gro...@stantec.com> wrote:
I went through this a couple months ago.  Migrated my hosted-engine from one 
NFS host to another.  Here are the steps that I documented from the experience. 
 There is probably a better way, but this worked for me on two separate 
hosted-engine environments.

1. Make a backup of RHEV-M
2. Migrate VMs off all hosts that run hosted-engine (except 
hosted-engine itself)
3. Put hosted-engine hosts in maintenance mode (except host that's 
running hosted-engine)
4. Put hosted-engine in global maintenance mode
5. Shutdown hosted-engine
6. Stop ovirt-ha-agent and ovirt-ha-broker services on all 
hosted-engine hosts
7. On each hosted-engine host:
a. service ovirt-ha-agent stop
b. service ovirt-ha-broker stop
c. sanlock client shutdown -f 1
d. service sanlock stop
e. umount /rhev/data-center/mnt/hosted-engine-share
f. service sanlock start
8. mount new NFS share on /hosted_tgt
9. mount old NFS share on /hosted_src
10. Copy data (make sure sparse files are kept sparse):
a. rsync --sparse -crvlp /hosted_src/* /hosted_tgt/
11. On all hosted-engine hosts, edit 
/etc/ovirt-hosted-engine/hosted-engine.conf and change path:
storage=10.1.208.122:/HostedEngine_Test
12. Make sure permissions are vdsm:kvm in /hosted_tgt/
13. umount /hosted_tgt
14. umount /hosted_src
15. Pick one hosted-engine host and reboot, then run:
a. hosted-engine --connect-storage (make sure the new NFS is 
mounted properly)
b. hosted-engine --start-pool (wait a few seconds then try 
again if you get an error)
c. service ovirt-ha-broker start
d. service ovirt-ha-agent start
e. hosted-engine --vm-start
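For reference, the middle of the procedure (steps 6 through 14) can be sketched as a shell script. This is a dry run that only prints the commands rather than executing them; the old NFS export path and the mount points are placeholders for your environment, not values from this thread.

```shell
#!/bin/sh
# Dry-run sketch of steps 6-14 above. run() prints each command instead of
# executing it; replace its body with "$@" to run for real.
OLD_NFS="oldnfs.example.com:/HostedEngine_Old"   # placeholder: old export
NEW_NFS="10.1.208.122:/HostedEngine_Test"        # new export, as in step 11

run() { echo "+ $*"; }   # dry-run wrapper

migrate_he_storage() {
    # Steps 6-7: stop the HA services, release the sanlock lease, and
    # unmount the old hosted-engine share (repeat on every HE host).
    run service ovirt-ha-agent stop
    run service ovirt-ha-broker stop
    run sanlock client shutdown -f 1
    run service sanlock stop
    run umount /rhev/data-center/mnt/hosted-engine-share
    run service sanlock start
    # Steps 8-10: mount both exports and copy, keeping sparse files sparse.
    run mount -t nfs "$NEW_NFS" /hosted_tgt
    run mount -t nfs "$OLD_NFS" /hosted_src
    run rsync --sparse -crvlp /hosted_src/ /hosted_tgt/
    # Step 11: point hosted-engine.conf at the new export (on every HE host).
    run sed -i "s|^storage=.*|storage=$NEW_NFS|" /etc/ovirt-hosted-engine/hosted-engine.conf
    # Steps 12-14: fix ownership, then unmount both shares.
    run chown -R vdsm:kvm /hosted_tgt
    run umount /hosted_tgt
    run umount /hosted_src
}

migrate_he_storage
```

Steps 1-5 and 15 (migrations, maintenance mode, shutdown, and the restart sequence) involve checks between commands and are best done by hand, as in the original list.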


-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Frank Wall
Sent: November-06-14 8:12 AM
To: Jiri Moskovcak
Cc: users
Subject: Re: [ovirt-users] hosted engine , how to make changes to the VM post 
deploy

On Wed, Nov 05, 2014 at 08:11:39AM +0100, Jiri Moskovcak wrote:
 On 11/04/2014 03:52 PM, Alastair Neil wrote:
  So is this the workflow?
 
  set the hosted-engine maintenance to global
  shutdown the engine VM
  make changes via virsh or editing vm.conf
  sync changes to the other ha nodes
  restart the VM
  set hosted-engine maintenance to none

 - well, not official, because it can cause a lot of troubles, so I
 would not recommend it unless you have a really good reason to do it.

I'd like to move my ovirt-engine VM to a new NFS storage.
I was thinking to adopt this workflow for this use-case (in combination with 
rsync to mirror the old storage).

Do you think this would succeed or is there another (and maybe supported) way 
to move ovirt-engine to a different storage?


Regards
- Frank
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vmware type functionality

2014-10-24 Thread Groten, Ryan
Yep, note that it's a little different from VMware in that it takes a snapshot 
for each disk in the VM to do a live storage migration, and the snapshots can't 
be deleted while the VM is powered up (in 3.4 at least).  You can also only 
move one disk at a time per VM if it's powered up.

-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Sven Kieske
Sent: October-24-14 9:19 AM
To: Dan Yasny
Cc: Users@ovirt.org List
Subject: Re: [ovirt-users] vmware type functionality



On 24/10/14 16:39, Dan Yasny wrote:
 In the same DC all you need to do is right click and select move, 
 can be done live.

This works between storage domains?

--
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ISO_DOMAIN by NFS behavior (if NFS is unavailable)

2014-10-01 Thread Groten, Ryan
I had the same challenge and ended up taking the service off my hosted-engine 
and putting it elsewhere as a workaround.  
But if I remember right you can still set maintenance mode when the 
hosted-engine is down, just can't run vm-status?

-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of ml 
ml
Sent: October-01-14 7:03 AM
To: Michael Keller
Cc: users@ovirt.org
Subject: Re: [ovirt-users] ISO_DOMAIN by NFS behavior (if NFS is unavailable)

Hello Michael,

this can't be the answer. Maybe NFS is just the wrong tool for this purpose 
then.

I am talking about disaster recovery, but you only have soft failures in mind.




On Wed, Oct 1, 2014 at 2:40 PM, Michael Keller mkel...@psi.de wrote:
 Hello Mario,

 do it before you turn off your engine and leave it detached, because 
 normally it is not needed all the time.
 Or create a new ISO_DOMAIN storage domain in a safer place.

 Regards
 Michael

 Am 01.10.2014 um 14:21 schrieb ml ml:

 Hello Michael,


 On Wed, Oct 1, 2014 at 1:39 PM, Michael Keller mkel...@psi.de wrote:

 Hello Mario,

 you can set the ISO_DOMAIN to maintenance and then detach it from 
 the cluster. All nodes will umount this NFS share when detaching it.
 You can do your backup etc.


 unfortunately this does not help if my ovirt-engine dies all of a sudden.

 If I turn off my ovirt-engine the NFS ISO domain goes down and I can 
 no longer set it to maintenance mode.

 Any other ideas?


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to disconnect hosted-engine NFS storage pool?

2014-09-22 Thread Groten, Ryan
Thanks, RFE created (I hope I did it right)
https://bugzilla.redhat.com/show_bug.cgi?id=1145259


-Original Message-
From: Doron Fediuck [mailto:dfedi...@redhat.com] 
Sent: September-21-14 6:57 AM
To: Groten, Ryan
Cc: users@ovirt.org
Subject: Re: [ovirt-users] How to disconnect hosted-engine NFS storage pool?



- Original Message -
 From: Ryan Groten ryan.gro...@stantec.com
 To: users@ovirt.org
 Sent: Friday, September 19, 2014 1:51:13 AM
 Subject: [ovirt-users]  How to disconnect hosted-engine NFS storage pool?
 
 
 
 I want to unmount the hosted-engine NFS share without affecting all 
 the other running VMs on the host. When I shutdown the hosted-engine 
 and enable global maintenance, the storage pool is still mounted and I 
 can’t unmount it because the “sanlock” process is using it.
 
 
 
 Is there any way to disconnect the storage pool? There is a 
 hosted-engine --connect-storage option but I see nothing to disconnect it.
 
 
 
 Thanks,
 
 Ryan
 

Hi Ryan,
Hosted engine does not unmount the share since there may be other VMs using it 
(as a general rule).
However this may deserve some additional thoughts. Do you mind opening an RFE 
for it?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How to disconnect hosted-engine NFS storage pool?

2014-09-18 Thread Groten, Ryan
I want to unmount the hosted-engine NFS share without affecting all the other 
running VMs on the host.  When I shutdown the hosted-engine and enable global 
maintenance, the storage pool is still mounted and I can't unmount it because 
the sanlock process is using it.

Is there any way to disconnect the storage pool?  There is a hosted-engine 
--connect-storage option but I see nothing to disconnect it.

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Moving hosted-engine NFS storage

2014-09-16 Thread Groten, Ryan
I'm planning on moving my hosted-engine storage from one NFS server to another 
shortly.  I was thinking it would be relatively simple:


1. Stop hosted-engine
2. Copy existing share to the new NFS share
3. Edit /etc/ovirt-hosted-engine/hosted-engine.conf and change storage to 
the new address
4. Restart hosted-engine services on the hosts
5. Restart hosted-engine

However, what I've read online says that this won't work because the storage 
domain info is stored in different places.  Is there a procedure or anything 
for how to do this?

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to backup/restore the rhevm-VM in hosted-engine ?

2014-09-05 Thread Groten, Ryan
In my case I actually have a spare server (physical) that I keep patched and 
up-to-date, the engine backups are automatically transferred there.  If my 
hosted-engine goes down I restore the engine database on the physical spare and 
run like that until I can get the hosted-engine back up.

I’ve never tried restoring onto a newly deployed hosted-engine VM using 
hosted-engine --deploy, but I can’t see why that wouldn’t work (maybe someone 
else knows if this is possible?).  I’ll give it a try too.

From: Xie, Chao [mailto:xiec.f...@cn.fujitsu.com]
Sent: September-05-14 3:23 AM
To: Groten, Ryan; users@ovirt.org
Subject: Re: How to backup/restore the rhevm-VM in hosted-engine ?

Hi, Ryan
 Thanks for replying. I understand your meaning and the engine-backup 
command, but I have some questions about your approach: "If the hosted-engine 
needs to be restored for some reason I will just recreate the OS and restore 
the engine database".
Do you recreate the guest on the original host? And does the host not need to 
be freshly installed, i.e. can we just execute hosted-engine --deploy again to 
recreate another OS?

From: Groten, Ryan [mailto:ryan.gro...@stantec.com]
Sent: September 5, 2014 4:29
To: Xie, Chao/谢 超; users@ovirt.org
Subject: RE: How to backup/restore the rhevm-VM in hosted-engine ?

In 3.4 there is a backup/restore utility called engine-backup.  You can use 
this to backup the RHEV-M database(s) as well as restore.  Of course this won’t 
backup the Guest OS itself.
My DR strategy is to simply copy off these engine-backup files to another 
location.  If the hosted-engine needs to be restored for some reason I will 
just recreate the OS and restore the engine database.

Check this link for documentation:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.4/html/Administration_Guide/chap-Backups.html


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
xiec.f...@cn.fujitsu.com
Sent: September-03-14 9:54 PM
To: users@ovirt.org
Subject: [ovirt-users] How to backup/restore the rhevm-VM in hosted-engine ?

Hi, All
 As per the manual, I can't find any way to backup/restore the rhevm-VM in 
the hosted-engine. So I just tried to backup the storage of the rhevm-VM (using 
cp -prf to make a copy of the folder). Then I made some changes and replaced 
the original RHEVM-VM content with my backup folder (of course I shut down the 
rhevm-vm first).  In the end the VM comes up, but the status is NOT healthy, as 
below:


Status up-to-date  : False
Hostname   : 193.168.195.248
Host ID: 1
Engine status  : unknown stale-data
Score  : 2400
Local maintenance  : False
Host timestamp : 1409743461
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1409743461 (Wed Sep  3 07:24:21 2014)
 host-id=1
 score=2400
 maintenance=False
 state=EngineUp

=

So are there some proper way to backup the rhevm-vm?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to backup/restore the rhevm-VM in hosted-engine ?

2014-09-04 Thread Groten, Ryan
In 3.4 there is a backup/restore utility called engine-backup.  You can use 
this to backup the RHEV-M database(s) as well as restore.  Of course this won't 
backup the Guest OS itself.
My DR strategy is to simply copy off these engine-backup files to another 
location.  If the hosted-engine needs to be restored for some reason I will 
just recreate the OS and restore the engine database.

Check this link for documentation:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.4/html/Administration_Guide/chap-Backups.html
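The backup/restore flow described above can be sketched as follows. This is a hedged dry run that only prints the commands; the backup path and the spare server's hostname are placeholders, and the exact engine-backup flags should be verified against the documentation linked above for your version.

```shell
#!/bin/sh
# Dry-run sketch of the engine-backup DR flow; prints the commands only.
BACKUP_FILE="/var/backup/engine-backup.tar.gz"   # placeholder path
SPARE_HOST="sparehost.example.com"               # placeholder spare server

engine_dr_sketch() {
    # Back up the engine database(s) and configuration:
    echo "engine-backup --mode=backup --file=$BACKUP_FILE --log=/var/log/engine-backup.log"
    # Copy the archive off-host, to the spare server:
    echo "scp $BACKUP_FILE $SPARE_HOST:/srv/engine-backups/"
    # On the spare (after a fresh engine install), restore the database:
    echo "engine-backup --mode=restore --file=$BACKUP_FILE --log=/var/log/engine-restore.log"
}

engine_dr_sketch
```

Automating the first two commands from cron is what keeps the spare server's copy current, per the DR strategy described above.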


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
xiec.f...@cn.fujitsu.com
Sent: September-03-14 9:54 PM
To: users@ovirt.org
Subject: [ovirt-users] How to backup/restore the rhevm-VM in hosted-engine ?

Hi, All
 As per the manual, I can't find any way to backup/restore the rhevm-VM in 
the hosted-engine. So I just tried to backup the storage of the rhevm-VM (using 
cp -prf to make a copy of the folder). Then I made some changes and replaced 
the original RHEVM-VM content with my backup folder (of course I shut down the 
rhevm-vm first).  In the end the VM comes up, but the status is NOT healthy, as 
below:


Status up-to-date  : False
Hostname   : 193.168.195.248
Host ID: 1
Engine status  : unknown stale-data
Score  : 2400
Local maintenance  : False
Host timestamp : 1409743461
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1409743461 (Wed Sep  3 07:24:21 2014)
 host-id=1
 score=2400
 maintenance=False
 state=EngineUp

=

So are there some proper way to backup the rhevm-vm?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network real-time bar - nothing happened

2014-09-02 Thread Groten, Ryan
Are you sure there is network traffic to/from these VMs? Most of my VMs show 0% 
as well because they’re not using much network. Try generating a bunch of 
network traffic and see if the number jumps.

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Grzegorz Szypa
Sent: September-02-14 11:48 AM
To: users@ovirt.org
Subject: [ovirt-users] Network real-time bar - nothing happened

Hi.
For a long time I have had a problem with a certain piece of functionality.
On the real-time bar nothing happens, just as if it did not work, as in the 
attached screenshot.

[attachment: image002.png]

--
G.Sz.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How long can a disk snapshot exist for?

2014-08-28 Thread Groten, Ryan
Thanks that's exactly the explanation I was looking for.

-Original Message-
From: Vered Volansky [mailto:ve...@redhat.com] 
Sent: August-28-14 9:29 AM
To: Groten, Ryan; users
Subject: Re: [ovirt-users] How long can a disk snapshot exist for?

Hi Ryan,

Should have replied to all, my bad.
See my answer embedded below:

- Original Message -
 From: Ryan Groten ryan.gro...@stantec.com
 To: Vered Volansky ve...@redhat.com
 Sent: Thursday, August 28, 2014 5:50:12 PM
 Subject: RE: [ovirt-users]  How long can a disk snapshot exist for?
 
 Thanks for the reply!  So when keeping a snapshot for a long time I have to
 keep an eye on how large it will get over time.
The snapshot's size itself is determined when it's taken, according to the 
disk's size at the time. It then stays the same.
When taking a snapshot, the active image of the disk is frozen at that point 
in time, and a new, empty active image is created to hold the new data on the 
disk.
The new data is saved in the form of diffs, so if there are mainly additions 
since the snapshot was taken, the space difference of the snapshot is 
negligible.
But if the diffs include many reductions from the snapshot's state, this might 
consume a lot of space, again depending on your usage.
Note that you're also limited by the disk's size.

  But there's no (or very
 little) performance impact or potential issues from keeping snapshots (other
 than the storage pool filling up maybe)?
All the VM operations take snapshots into consideration. Storage is an issue, 
but so is every operation you'll make on the VM.
When you have one image, the VM is handled only through this image. But when 
you have several, each operation may have to search for the right layer (the 
existence of the data, or the ability to do the operation) across all the 
snapshots in the worst case.
So there is in fact an impact, but it's due to the mere existence of the 
snapshots and their number, not their age.
We support 26 snapshots per VM, but you should only use them if you actually 
need the backup.
If you need real-time performance, try to avoid snapshots as much as possible.
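The layered images described here can be inspected on a host with qemu-img, which lists each snapshot layer and its backing file; the deeper the chain, the more layers a worst-case read must consult. A dry-run sketch that only prints the command (the image path is a placeholder, not one from this thread):

```shell
#!/bin/sh
# Dry-run sketch: print the command that inspects a disk's backing chain.
ACTIVE_IMG="/rhev/data-center/<sd-uuid>/images/<img-uuid>/<vol-uuid>"  # placeholder

show_chain() {
    # Each entry in the output is one snapshot layer with its backing file.
    echo "qemu-img info --backing-chain $ACTIVE_IMG"
}

show_chain
```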

 
 Thanks,
 Ryan
 
 -Original Message-
 From: Vered Volansky [mailto:ve...@redhat.com]
 Sent: August-27-14 11:19 PM
 To: Groten, Ryan
 Subject: Re: [ovirt-users] How long can a disk snapshot exist for?
 
 Ryan,
 
 Disk snapshots consume fixed storage space (fixed since time of creation).
 The more differences there are on your disk since the snapshot was
 taken, the more space is consumed, but that happens with no relation to the
 snapshot.
 If your frequent changes are in the form of adding data to your disks (on top
 of the data at snapshot time), then the space consumption of the snapshot is
 negligible.
 If you are undoing stuff from the snapshot time, there is actually more space
 consumed (to save the differences), otherwise the space would have just been
 released.
 
 Vered
 
 
 - Original Message -
  From: Ryan Groten ryan.gro...@stantec.com
  To: users@ovirt.org
  Sent: Thursday, August 28, 2014 12:49:02 AM
  Subject: [ovirt-users]  How long can a disk snapshot exist for?
  
  
  
  Is there any limit/performance considerations to keeping a disk
  snapshot for extended periods of time? What if the disk is changing
  frequently vs mostly static?
  
  
  
  Thanks,
  
  Ryan
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How long can a disk snapshot exist for?

2014-08-27 Thread Groten, Ryan
Is there any limit/performance considerations to keeping a disk snapshot for 
extended periods of time?  What if the disk is changing frequently vs mostly 
static?

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users