...@redhat.com]
Sent: Thursday, July 28, 2016 12:40 AM
To: Groten, Ryan <ryan.gro...@stantec.com>; Juan Antonio Hernandez Fernandez
<jhern...@redhat.com>; Ondra Machacek <omach...@redhat.com>
Cc: Yaniv Kaul <yk...@redhat.com>; users <users@ovirt.org>
Subject: Re: [ovir
comment ‘shipit’ on
the pull request to mark it for inclusion!
Thanks,
Ryan
From: Yaniv Kaul [mailto:yk...@redhat.com]
Sent: Sunday, July 24, 2016 4:52 AM
To: Groten, Ryan <ryan.gro...@stantec.com>
Cc: users <users@ovirt.org>
Subject: Re: [ovirt-users] Ansible oVirt storage manag
Hey Ansible users,
I wrote a module for storage management and created a pull request to have it
added as an Extra module in Ansible. It can be used to
create/delete/attach/destroy pool disks.
https://github.com/ansible/ansible-modules-extras/pull/2509
Ryan
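For anyone curious what using it might look like, here is a rough playbook sketch. The module name and parameters below are guesses for illustration only; check the pull request above for the actual interface.

```yaml
# Hypothetical task - module and parameter names are assumptions,
# not the real interface from the pull request.
- hosts: localhost
  connection: local
  tasks:
    - name: Create and attach a disk to a VM
      ovirt_storage_disk:
        url: "https://engine.example.com/api"
        user: "admin@internal"
        password: "{{ engine_password }}"
        vmname: "testvm"
        alias: "data_disk01"
        size: 10
        state: present
```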
Using this Python I am able to create a direct FC LUN properly (and it works if
the lun_id is valid). But in the GUI, after the disk is added, none of the
fields are populated except LUN ID (Size shows <1GB; Serial, Vendor, and
Product ID are all blank).
I see this Bugzilla [1] is very similar (for
My HostedEngine exists in the Default cluster, but since I'm upgrading my hosts
to RHEL7 I created a new cluster and migrated all the VMs to it (including
HostedEngine). However, in the GUI VM tab, HostedEngine still appears to be in
the Default cluster. Because of this I can't remove this cluster
As a workaround, if you create the Pool using the latest version of a
template, all the VMs in that pool will automatically be stateless.
-----Original Message-----
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Steve Dainard
Sent: Friday, August 21, 2015 5:12 PM
Thanks for the responses, guys. Do we know why it's recommended to power the
VMs off and then move them to the new cluster, especially since live migration
seems to work anyway?
From: matthew lagoe [mailto:matthew.la...@subrigo.net]
Sent: Thursday, August 20, 2015 3:25 PM
To: Groten, Ryan
Cc
Has anyone succeeded in upgrading their hosts' OS version from 6 to 7? I
assumed it could be done without downtime, one host at a time, but when trying
it out I found that RHEL7 hosts can't be placed in the same cluster as RHEL6
ones.
I then tried making a new Cluster and migrating VMs from
from the API. VMs
themselves still run just fine with no noticeable performance issues though.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1217401
From: Patrick Russell [mailto:patrick_russ...@volusion.com]
Sent: Tuesday, August 18, 2015 3:07 PM
To: Matthew Lagoe
Cc: Groten, Ryan; users
We're running into some performance problems stemming from having too many
Hosts/VMs/Disks running from the same Datacenter/Cluster. Because of that I'm
looking into splitting the DC into multiple separate ones with different
Hosts/Storage.
But I'm a little confused what the benefit of
I'm having the same issue where the guest time is offset by 7 hours (our
timezone difference) from UTC. I read in the VM System configuration for Time
Zone that hwclock on Linux guests should have the TZ set to GMT+0, but if I
change it to GMT-7, the clock is set as expected on boot.
This doesn’t answer your question directly, but I never had any luck using
virt-v2v from VMware. I found it worked well to treat the VMware VM just like
a physical server: boot it from the virt-v2v ISO and convert it that way.
From: users-boun...@ovirt.org On Behalf Of Shubhendu Tripathi [mailto:shtri...@redhat.com]
Sent: Monday, July 13, 2015 2:25 AM
To: Piotr Kliczewski
Cc: Omer Frenkel; Groten, Ryan; users@ovirt.org
Subject: Re: [ovirt-users] Concerns with increasing vdsTimeout value on engine?
On 07/13/2015 01:42 PM, Piotr Kliczewski wrote:
On Mon, Jul 13, 2015
When I try to attach new direct lun disks, the scan takes a very long time to
complete because of the number of pvs presented to my hosts (there is already a
bug on this, related to the pvcreate command taking a very long time -
https://bugzilla.redhat.com/show_bug.cgi?id=1217401)
I discovered
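For reference, the generic LVM knob for this situation is the device filter in lvm.conf. The paths below are illustrative only, and on an oVirt host the filter must keep accepting the multipath devices that VDSM manages, so treat this as a sketch of the mechanism rather than a recommended fix:

```
# Example /etc/lvm/lvm.conf fragment (device paths are made up).
# Accept the local boot disk and multipath devices, reject everything
# else, so LVM scans fewer of the LUNs presented to the host.
devices {
    filter = [ "a|^/dev/sda|", "a|^/dev/mapper/|", "r|.*|" ]
}
```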
Nope, in fact I followed the guide and found CTDB works quite well. I am just
trying to figure out the benefit, because that would be another component to
consider in the architecture.
From: Sahina Bose [mailto:sab...@redhat.com]
Sent: Tuesday, February 03, 2015 4:09 AM
To: Groten, Ryan; users
I was planning on making a Gluster Data domain to test, and found some great
information on this page:
http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
In the article the author uses the CTDB service for VIP failover. Is it
possible/recommended to not do this, and just create a
I've been looking around for any best practices/recommendations on how large a
storage domain can be but can't seem to find anything.
For example, right now I have a 3TB domain made up of three 1TB LUNs. That
domain has about 200 thin disks created from it.
When I want to add more space, is there
I also recently started getting these errors. They started when I upgraded
from 3.4.0 to 3.4.2.
The error appears on certain VMs (but not all) consistently every 15 minutes.
It doesn't matter whether the Memory Balloon Device Enabled checkbox is
checked or unchecked.
I got the message to stop
I went through this a couple months ago. Migrated my hosted-engine from one
NFS host to another. Here are the steps that I documented from the experience.
There is probably a better way, but this worked for me on two separate
hosted-engine environments.
1. Make a backup of RHEV-M
Good catch, you’re right that should say “On all hosted-engine hosts, edit
hosted-engine.conf”. It does not automatically sync the changes between the
hosts.
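Since the file is not synced, the same edit has to be repeated on every hosted-engine host. A minimal sketch of that edit follows; the hostnames and export paths are made-up examples, and a temp file stands in for the real /etc/ovirt-hosted-engine/hosted-engine.conf:

```shell
#!/bin/sh
# Sketch only: hostnames and NFS export paths are illustrative.
# Repeat on EVERY hosted-engine host; the config is not synced.
set -e
conf=$(mktemp)   # stand-in for /etc/ovirt-hosted-engine/hosted-engine.conf
cat > "$conf" <<'EOF'
fqdn=engine.example.com
storage=oldnfs.example.com:/export/hosted-engine
EOF
# Point the storage= line at the new NFS export
sed -i 's|^storage=.*|storage=newnfs.example.com:/export/hosted-engine|' "$conf"
grep '^storage=' "$conf"
rm -f "$conf"
```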
From: Alastair Neil [mailto:ajneil.t...@gmail.com]
Sent: November-06-14 11:06 AM
To: Groten, Ryan
Cc: Frank Wall; Jiri Moskovcak; users
Yep, note that it's a little different from VMware in that it takes a snapshot
of each disk in the VM to do a live storage migration, and the snapshots can't
be deleted while the VM is powered up (in 3.4 at least). You can also only
move one disk at a time per VM if it's powered up.
I had the same challenge and ended up taking the service off my hosted-engine
and putting it elsewhere as a workaround.
But if I remember right you can still set maintenance mode when the
hosted-engine is down, just can't run vm-status?
-----Original Message-----
From: users-boun...@ovirt.org
Thanks, RFE created (I hope I did it right)
https://bugzilla.redhat.com/show_bug.cgi?id=1145259
-----Original Message-----
From: Doron Fediuck [mailto:dfedi...@redhat.com]
Sent: September-21-14 6:57 AM
To: Groten, Ryan
Cc: users@ovirt.org
Subject: Re: [ovirt-users] How to disconnect hosted
I want to unmount the hosted-engine NFS share without affecting all the other
running VMs on the host. When I shut down the hosted-engine and enable global
maintenance, the storage pool is still mounted and I can't unmount it because
the sanlock process is using it.
Is there any way to
I'm planning on moving my hosted-engine storage from one NFS server to another
shortly. I was thinking it would be relatively simple:
1. Stop hosted-engine
2. Copy existing share to new nfs share
3. Edit /etc/ovirt-hosted-engine/hosted-engine.conf and change storage to
the
the OS and restore the engine
database”.
Do you recreate the guest using the original host? And does the host need to
be freshly installed, or can we just execute hosted-engine --deploy again and
recreate another OS?
From: Groten, Ryan [mailto:ryan.gro...@stantec.com]
Sent: September 5, 2014 4:29
To: Xie, Chao
In 3.4 there is a backup/restore utility called engine-backup. You can use
this to back up the RHEV-M database(s) as well as restore them. Of course this
won't back up the guest OS itself.
My DR strategy is to simply copy off these engine-backup files to another
location. If the hosted-engine needs
Are you sure there is network traffic to/from these VMs? Most of my VMs show 0%
as well because they’re not using much network. Try generating a bunch of
network traffic and see if the number jumps.
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Grzegorz Szypa
Thanks, that's exactly the explanation I was looking for.
-----Original Message-----
From: Vered Volansky [mailto:ve...@redhat.com]
Sent: August-28-14 9:29 AM
To: Groten, Ryan; users
Subject: Re: [ovirt-users] How long can a disk snapshot exist for?
Hi Ryan,
Should have replied to all, my bad
Is there any limit/performance considerations to keeping a disk snapshot for
extended periods of time? What if the disk is changing frequently vs mostly
static?
Thanks,
Ryan
_______________________________________________
Users mailing list
Users@ovirt.org