Hi all, I am trying oVirt 3.4.
How can I auto-start all VMs once the host is up, similar to setting
symlinks in /etc/libvirt/autostart?
Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
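oVirt has no direct equivalent of libvirt's autostart symlinks; the usual approaches are marking the VMs highly available, or starting them from a boot script through the REST API. A minimal sketch of the latter, assuming oVirt 3.x; the engine URL, credentials, and VM name are placeholders, not values from this thread:

```shell
# Sketch only: start a VM through the oVirt 3.x REST API at host boot.
# engine.example.com, admin@internal:secret and "myvm" are placeholders.
ENGINE="https://engine.example.com"
AUTH="admin@internal:secret"

# Look up the VM id by name (the 3.x API returns XML).
VM_ID=$(curl -sk -u "$AUTH" "$ENGINE/api/vms?search=name%3Dmyvm" \
        | grep -o 'vm href="/api/vms/[^"]*' | head -1 | sed 's|.*/||')

# Ask the engine to start it.
curl -sk -u "$AUTH" -H 'Content-Type: application/xml' \
     -d '<action/>' "$ENGINE/api/vms/$VM_ID/start"
```

Such a script could be called from e.g. /etc/rc.local once the engine is reachable; marking the VMs highly available in the engine is the lower-maintenance option.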
Hello All,
        I am using NFS as shared storage and it is working fine; I am able
to migrate instances across nodes.
Is it possible to use an iSCSI backend and achieve the same, i.e. a shared
iSCSI? [I am not able to find a way to do a shared iSCSI across hosts.]
Can someone help with a shared iSCSI storage fo
Thanks Maor. In a non-oVirt KVM setup I use GFS2 to make the block device
shareable across multiple hosts. Could you tell me how this is achieved
in oVirt?
Regards
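For context: oVirt block (iSCSI/FC) storage domains are carved into LVM logical volumes and coordinated through the SPM host, so no cluster filesystem like GFS2 is needed; at the host level the LUN simply has to be visible to every node. A sketch of making a target visible by hand (portal address and IQN are placeholders; when you add an iSCSI domain in the UI, the engine drives this discovery/login for you via VDSM):

```shell
# Sketch: make an iSCSI LUN visible on an oVirt host.
# 10.0.0.10 and the IQN below are placeholders, not values from the thread.
iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260
iscsiadm -m node -T iqn.2014-12.com.example:storage.lun1 \
         -p 10.0.0.10:3260 --login
```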
On Thu, Dec 4, 2014 at 5:31 PM, Maor Lipchuk wrote:
PM, mad Engineer wrote:
> Thanks Gianluca. It says only NFS is supported as an export domain. I
> believe the export domain is the one I am currently using for live
> migrating VMs across hosts, so iSCSI is not supported; please correct me
> if I am wrong. Thanks for your help.
>
> On Thu, D
Hello All,
          I am using CentOS 6.5 x64 on a server with 48 GB RAM and 8
cores, managed by oVirt.
There is only one running VM, with 34 GB RAM and 6 vCPUs (pinned to the
proper NUMA nodes).
From top:
top - 06:42:48 up 67 days, 20:05, 1 user, load average: 0.26, 0.20, 0.17
Tasks: 285 total,
, Markus Stockhausen
wrote:
> Memory usage > 80%: KSM kicks in. It will then run at full speed until
> usage is below 80%. There is an open BZ from me on this. The behaviour is
> controlled by MoM.
>
> Markus
>
> On 06.12.2014 at 15:58, mad Engineer wrote:
>
> Hello All,
>
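KSM's activity can be checked directly through sysfs on the host; a quick look, using the standard kernel KSM interface (paths are not from the thread):

```shell
# Quick check of KSM state on a host (standard kernel sysfs paths).
KSM=/sys/kernel/mm/ksm
if [ -r "$KSM/run" ]; then
    echo "run flag:       $(cat "$KSM/run")"        # 1 = actively merging
    echo "pages_shared:   $(cat "$KSM/pages_shared")"
    echo "pages_sharing:  $(cat "$KSM/pages_sharing")"
else
    echo "KSM interface not present on this kernel"
fi
```

A high pages_sharing count while the VM feels sluggish is consistent with the ">80% triggers KSM" behaviour described above.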
host you have) to something that
> suits your load. We have the bug below opened and hopefully will handle
> it in one of the next versions.
> https://bugzilla.redhat.com/show_bug.cgi?id=1026294
>
> Doron
>
> ----- Original Message -----
>> From: "mad Engineer"
>>
I am running RHEL 6.5 as host and guest on an HP server.
The server has 128 GB RAM and 48 cores [with HT enabled].
3 VMs are running, 2 of them pinned to the first 24 pCPUs with proper NUMA
pinning.
Guests:
VM1:
6 vCPUs pinned to 6 pCPUs on NUMA node 1, with 16 GB RAM
VM2:
6 vCPUs pinned to 6 pCPUs on NUMA node 0, with 16 GB RAM
VM3:
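The pinning layout described above can be inspected or applied at the libvirt level; a sketch where the domain name "VM1" and the CPU numbers are illustrative, not taken from this setup:

```shell
# Sketch: inspect NUMA topology and vCPU pinning at the libvirt level.
# Domain name and CPU numbers are illustrative.
numactl --hardware        # show which pCPUs belong to which NUMA node
virsh vcpuinfo VM1        # current vCPU -> pCPU placement
virsh vcpupin VM1 0 24    # pin vCPU 0 of VM1 to pCPU 24
```

In oVirt itself, pinning is normally set in the VM's properties so it survives restarts rather than being applied ad hoc with virsh.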
tuned-adm profile throughput-performance
>
> maybe try to experiment with other profiles.
>
> HTH
>
> Martin Pavlik
> RHEV QE
>
>> On 14 Jan 2015, at 12:06, mad Engineer wrote:
>>
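Martin's profile suggestion can be tried with the standard tuned commands; the available profile names depend on the installed tuned version, and latency-performance below is just an example choice:

```shell
# Sketch: experiment with tuned profiles as suggested above.
tuned-adm list                          # see which profiles are available
tuned-adm profile latency-performance   # switch profile (example choice)
tuned-adm active                        # confirm which profile is in effect
```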
Hello All,
     We are using NFS shared storage between hosts and have running VMs.
Now we are planning to remove the existing disks and replace them with
different disks.
Is there any way we can add one more NFS domain, say NFS2, and migrate all
volumes to NFS2 while we upgrade the primary NFS? Is it sup
:38 AM, Dan Yasny wrote:
> Sure, create and activate a second NFS-based storage domain. Move the VMs
> over (right-click the VM -> Move).
> To deactivate the first SD when it's empty, first put it in maintenance,
> in the DC > Storage tab.
>
> On Sun, Feb 1, 2015 at 10
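For the new NFS2 domain, oVirt expects the export to be owned and writable by vdsm:kvm (uid/gid 36). A typical /etc/exports fragment; the path and subnet are placeholders:

```shell
# /etc/exports fragment for an oVirt data domain (placeholder path/subnet).
# The directory must be owned by vdsm:kvm (36:36) on the NFS server:
#   chown 36:36 /exports/nfs2 && chmod 0755 /exports/nfs2
/exports/nfs2  10.0.0.0/24(rw,sync,no_subtree_check,anonuid=36,anongid=36)
```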
Hello All,
           Trying oVirt on CentOS 6.5, but it fails at the PostgreSQL
configuration
with the error:
[ ERROR ] Failed to execute stage 'Misc configuration': Command
'/usr/share/ovirt-engine/dbscripts/create_schema.sh' failed to execute
Inside the log it shows this:
2015-03-31 17:47:15 DEBUG
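When engine-setup fails at create_schema.sh, the usual recovery is to inspect the setup log it points at, roll back the partial configuration, and run setup again, using the standard ovirt-engine-setup tools:

```shell
# Sketch: recover from a failed engine-setup run (standard oVirt tools).
# Inspect the full setup log referenced by engine-setup first:
less /var/log/ovirt-engine/setup/ovirt-engine-setup-*.log

# Roll back the partial configuration, then run setup again.
engine-cleanup
engine-setup
```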
Hi, I was testing HA in oVirt (I am from a XenServer background and trying
to use a KVM solution in our environment).
I have 2 hosts with NFS storage, and I tested migration between these two
hosts.
On Host2, while a VM was running on it, I unplugged the power cable and it
took long minutes to update that the VM is
Thanks Arik,
           I think that's the issue; I haven't configured any
power management. Could you tell me what to specify in power management?
I see Address, Username, Password, SSH port and Type.
I am using an HP server with iLO2, so I filled in Type as ilo2, but it's
confusing what these
* Addr
Thanks
On Wed, Jun 25, 2014 at 12:46 AM, Joop wrote:
> On 24-6-2014 21:02, mad Engineer wrote:
Hi,
    I have two UCS servers, both configured for power management, but
when I put a host into maintenance and restart it by selecting Power
Management > Restart in the oVirt Manager, it shows as rebooting but the
host is not really rebooting. I had the same issue with HP servers' iLO2.
What should I d
Hi,
    I am using a Cisco UCS C200 M2 as a host, running CentOS 6.5 and KVM.
Power management is not working properly; hence, even with the node down,
oVirt shows the VM as still up, with the VM's uptime increasing (on the
manager).
If I continue and save the changes, it causes problems:
1. HA is not working ( Nod
Hi,
     Is it possible to tune the time required to reboot a VM in case of
host failures?
In our test we powered off one of the hosts.
The manager was still showing both guest and host as UP for around 5
minutes, and after that it took another 4-5 minutes to start the VM.
Is there any tunable parameter that
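Several of the fencing and restart timings are engine-side settings that can be listed with engine-config on the engine host. Option names vary between oVirt versions, so it is safer to discover them than to guess:

```shell
# Sketch: discover fencing/restart-related engine settings (run on the
# engine host; exact option names vary between oVirt versions).
engine-config --list | grep -iE 'fence|timeout|reset'

# Read one of the discovered options with:
#   engine-config -g <OptionName>
# and change it with:
#   engine-config -s <OptionName>=<value>   # then restart ovirt-engine
```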
Hi, I have an old HP server with iLO2.
On the manager I configured power management and configured the SSH port to
use for
iLO2.
To check SSH, I manually SSH to the iLO and it works fine,
but the power management test always fails with "*Unable to connect/login
to fencing device*"
The log shows it is using fence_i
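The fence agent can be tested by hand outside oVirt, which usually pinpoints whether the problem is credentials, network, or the agent choice. The address and credentials below are placeholders; note that fence_ilo2 talks RIBCL over HTTPS rather than SSH, so an iLO that is reachable over SSH can still fail here if its web interface is disabled:

```shell
# Sketch: test iLO2 fencing by hand (placeholder address/credentials).
fence_ilo2 -a 10.0.0.50 -l Administrator -p secret -o status
```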
he default port is specified, but I'm glad that you
> found a workaround.
>
> Ivan
>
>
> On 06/30/2014 08:36 AM, mad Engineer wrote:
>
Hello all, I am having a strange network issue with VMs that are running on
CentOS 6.7 oVirt nodes.
I recently added one more oVirt node which is running CentOS 6.7, and
upgraded from CentOS 6.5 to CentOS 6.7 on all other nodes.
All VMs running on nodes with CentOS 6.7 as the host operating system fail
to re
G with MTU 9000.
During my test, the only change I made was booting into the CentOS 6.5
kernel.
I am facing another issue with PXE-booting guest machines; switching to the
old kernel fixes this too.
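With jumbo frames involved, it is worth verifying the MTU end to end, since a single hop negotiating 1500 silently breaks large packets. The interface name and peer address below are placeholders:

```shell
# Sketch: verify MTU 9000 end to end (placeholder NIC name / peer IP).
ip link show eth0 | grep -o 'mtu [0-9]*'

# 8972 = 9000 minus 20 (IP header) minus 8 (ICMP header); -M do forbids
# fragmentation, so this succeeds only if every hop passes jumbo frames.
ping -c 3 -M do -s 8972 192.168.1.20
```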
On Sun, Nov 29, 2015 at 1:29 PM, Dan Kenigsberg wrote:
> On Sat, Nov 28, 2015 at 08:10:06PM +0530, mad Engineer wrote: