Is there any way to migrate VMs more evenly across the cluster when a host is
being placed into maintenance? Currently it attempts to auto-migrate all the
VMs to another single host and then balance out. When the destination host is
more than 50% memory utilized, this has caused oversubscription.
You could try VMware Converter, but that's probably a better question for
VMware.
-Patrick
From: alireza sadeh seighalan
Date: Thursday, January 7, 2016 at 2:37 PM
To:
Put the host it’s on into maintenance mode in the GUI. It will migrate to
another HE host automatically.
-Patrick
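(As a side note, the same flow can be driven from the CLI on a hosted-engine host — a minimal sketch, assuming the standard hosted-engine tool; check the exact options against your version:)
hosted-engine --vm-status                      # see which HA host is running the engine VM
hosted-engine --set-maintenance --mode=local   # stop this host from hosting the HE VM
# ... the engine VM migrates to another HA host ...
hosted-engine --set-maintenance --mode=none    # re-enable this host afterwards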
From: Budur Nagaraju
Date: Wednesday, January 6, 2016 at 4:20
Using ovirt-node EL7 we’ve been able to live merge since 3.5.3 without any
issues.
-Patrick
> On Oct 23, 2015, at 12:24 AM, Christopher Cox wrote:
>
> On 10/22/2015 10:46 PM, Indunil Jayasooriya wrote:
> ...
>>
>> Hmm,
>>
>> How to list the snapshot?
>>
>> how to
We had this same issue after 3.5.2 and I forget the reason why the official
build hasn’t been released, but we were pointed in the direction of using the
nightly build from Jenkins:
http://jenkins.ovirt.org/job/ovirt-node_ovirt-3.5_create-iso-el7_merged/
I can vouch that
On 19/10/15 18:14, Patrick Russell wrote:
We use FCoE in our setup. All the configs are in /etc/fcoe/ and fcoeadm is part
of the ovirt-node iso. So this should work the same as setting up FCoE on
CentOS or RHEL.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/fcoe-config.html
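(For anyone setting this up fresh, a rough sketch of the per-interface steps from that guide — eth2 below is a hypothetical CNA interface name:)
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2   # copy the sample config for your interface
# in /etc/fcoe/cfg-eth2:
#   FCOE_ENABLE="yes"
#   DCB_REQUIRED="yes"    # "no" if DCB is handled by the switch/CNA firmware
service lldpad start
service fcoe start
ifconfig eth2 up
fcoeadm -i   # verify the FCoE interface came up
fcoeadm -l   # list LUNs visible over FCoE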
It will all work using different VLAN tags on the same physical NICs. At least
in 3.5.x that's the case; we don't have a 3.4.x install so I can't speak to
that. You'll want to watch your NFS and migration traffic, though. Make sure you
don't overrun the bandwidth for management traffic or you're
We had this exact issue on that same build. Upgrading to oVirt Node - 3.5 -
0.999.201507082312.el7.centos made the issue disappear for us. It was one of
the 3.5.3 builds.
Hope this helps.
-Patrick
On Aug 19, 2015, at 1:15 PM, Chris Jones - BookIt.com Systems Administrator
Can I ask at what scale you're running into issues? We've got about 500 VMs
running now in a single cluster.
-Patrick
On Aug 18, 2015, at 4:03 PM, Matthew Lagoe
matthew.la...@subrigo.net wrote:
You can have different cluster policies at least, don't know what
We didn't use the ISO at all. If you have vCenter, try something like this
(note we're using vpx, etc.):
virt-v2v -ic
vpx://username@$vcenter_hostname/$DataCenterName/$ClusterName/$esxiHostName?no_verify=1
$VMName -o rhev -os $EXPORT_DOMAIN --bridge $NetworkNameinOvirt
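(With throwaway values filled in so the shape is clearer — the vCenter, datacenter, cluster, host, VM, export domain, and network names below are all hypothetical:)
virt-v2v -ic 'vpx://administrator@vcenter01.example.com/DC1/Cluster1/esxi01.example.com?no_verify=1' \
  web01 -o rhev -os nfs01.example.com:/exports/v2v --bridge ovirtmgmt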
Here’s our versions, using
Will,
Is this ESXi, or ESX with vCenter? If you have regular ESX you need to connect
through vCenter first. We've migrated over 500 VMs (a Windows and Linux mix)
using virt-v2v. Some of the Windows VMs did require us to use a testing repo
for libvirt on our V2V box.
-Patrick
On Jul 30, 2015,
and guest traffic. As a result we're currently pushing about 5Gb/s on
bond1 when we do live migrations between hosts.
-Patrick
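(If migrations saturating a shared bond become a problem, vdsm has a per-host migration bandwidth cap — a hedged sketch, since the key name and units are worth double-checking against your vdsm version:)
# /etc/vdsm/vdsm.conf on each host, then restart vdsmd
[vars]
migration_max_bandwidth = 256   # example cap, in MiB/s

service vdsmd restart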
On Jul 28, 2015, at 1:34 AM, Alan Murrell li...@murrell.ca wrote:
Hi Patrick,
On 27/07/2015 7:25 AM, Patrick Russell wrote:
We currently have all our nics
Alan,
We currently have all our NICs in the same bond. So we have guest traffic,
management, and storage running over the same physical NICs, but different
VLANs.
Hope this helps,
Patrick
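(For a concrete picture of what that looks like on a host, this is roughly the ifcfg file the engine generates for a tagged network on the bond — the device, VLAN ID, and bridge name here are hypothetical, and normally "Setup Host Networks" writes these for you:)
# /etc/sysconfig/network-scripts/ifcfg-bond0.100  (a guest VLAN, tag 100)
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BRIDGE=vlan100_net   # the logical network's bridge on the host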
On Jul 26, 2015, at 4:38 AM, Alan Murrell li...@murrell.ca wrote:
If I am using a NIC on my host on
Any chance that 3.5.3 EL7 node will get a release soon?
-Patrick
Great, thanks Fabian!
-Patrick
On Jul 9, 2015, at 1:28 PM, Fabian Deutsch fdeut...@redhat.com wrote:
- Original Message -
Any chance that 3.5.3 EL7 node will get a release soon?
Hey Patrick,
currently we are not publishing official ISOs to resources.ovirt.org;
instead please
Thank you Lev for the clarification. We had been installing manually via the
ISO, but I had misread some other articles about using Python to automate the
process.
I will pass on the notes around /S and your article to our internal Windows
team. Maybe they have some ideas around the cert
Hi all,
We've got a large migration in progress for a Windows (2k3, 2k8, and 2k12)
environment from VMware to oVirt. Does anyone have any suggestions for an
unattended ovirt-tools install? Our Windows team has pretty much shot down
installing Python on their VMs. Are there any flags we can
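(Picking up on the /S note above: the guest tools installer is NSIS-based, so a silent run should just be the /S switch — the filename below is a placeholder for whatever your ISO ships:)
ovirt-guest-tools-setup.exe /S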
Nicolas,
We have newer Dell hardware working with the following settings:
type: drac5
slot:
options: cmd_prompt=
secure: checked
Works fine for us. Even on the new Dell FC630s this configuration is working.
-Patrick
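(A quick way to sanity-check those values outside the engine is to call the fence agent directly — a hedged sketch; the IP and credentials are placeholders and option spellings can differ a bit between fence-agents versions:)
fence_drac5 -a 10.0.0.50 -l root -p 'secret' -o status   # placeholder IP/credentials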
On Jun 1, 2015, at 2:12 PM, Nicolas Ecarnot nico...@ecarnot.net wrote:
Le
(DefaultQuartzScheduler_Worker-16) [4cbab1c6] Correlation ID: 4cbab1c6, Call
Stack: null, Custom Event ID: -1, Message: Failed to delete snapshot
'test_snap1' for VM '3.5.2.wintest1'.
On Apr 30, 2015, at 9:54 AM, Patrick Russell
patrick_russ...@volusion.com wrote:
Hi
Hi everyone,
We're not seeing live merge working as of the 3.5.2 update. We've tested using
Fibre Channel and NFS-attached storage; both throw the same error code. Are
other people seeing success with live merge after the update?
Here's the environment:
Engine running on CentOS 6 x64
If you turn the host back on, does the VM then power up on the host that was
never down?
If so, we have filed a bug around this in our 3.5.1 environment.
https://bugzilla.redhat.com/show_bug.cgi?id=1192596
-Patrick
On Apr 16, 2015, at 7:01 PM, Ron V ronv...@abacom.com wrote:
Hello,
I am
Patrick, are you doing this on the engine or on the nodes?
Thanks again for the input.
Bill
From: Patrick Russell
Date: Wednesday, 8 April 2015 20:13
To: Bill Dossett
Cc: users@ovirt.org
Subject: Re: [ovirt-users] gluster export
Hi Bill,
You do need to create the export. Here are the steps we noted from our build.
I’ll try and find where we got some of the build ideas from.
mkdir -p /gluster/{data,engine,meta,xport}/brick ; mkdir /mnt/lock
for i in data engine meta xport ; do gluster volume create ${i} replica 3
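(That create loop got truncated in the archive; a hedged reconstruction of its general shape with made-up peer names gl01/gl02/gl03 — not claiming these were the exact commands:)
for i in data engine meta xport ; do
  gluster volume create ${i} replica 3 \
    gl01:/gluster/${i}/brick gl02:/gluster/${i}/brick gl03:/gluster/${i}/brick
  gluster volume start ${i}
done
# add "force" to the create if the bricks sit on the root filesystem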
management test.
Is it a known bug?
On 06/03/2015 03:54, Patrick Russell wrote:
Looks like it's just the CMC; I can use power management on the individual sled
DRACs using the drac5 fence agent with no problem.
-Patrick
From: Volusion Inc
Date: Thursday, March 5, 2015 at 8:22 PM
To: users@ovirt.org
Subject: [ovirt-users] Dell DRAC 8
Anyone having success with fencing and DRAC 8 via CMC? We just received a
couple of Dell FX2 chassis and we're having trouble getting the fencing agents to
work on these. It is a CMC setup similar to the Dell blade chassis, but with DRAC
version 8.
-Patrick