Hello,
According to what I understood, the ManageIQ VM appliance needs to attach to
every storage domain it has to scan.
1: I'm new to ManageIQ and the above is what I think I have understood
from the ManageIQ team's answer. If anyone here knows more about it,
please comment.
2: When tr
On 21.04.2015 at 16:09, Maikel vd Mosselaar wrote:
> Hi Fred,
>
>
> This is one of the nodes from yesterday around 01:00 (20-04-15). The
> issue started around 01:00.
> https://bpaste.net/raw/67542540a106
>
> The VDSM logs are very big so I am unable to paste a bigger part of the
> logfile, i
Hi All,
I have a bit of an issue with a new install of oVirt 3.5 (our 3.4 cluster
is working fine) in a 4-node cluster.
When I test fencing (or cause a kernel panic that triggers a fence), the
fencing fails. On investigation it appears that the fencing options are not
being passed to the fencing scri
On 04/21/2015 06:36 PM, Roy Golan wrote:
Hi all,
Upcoming in 3.6 is an enhancement for managing the hosted engine VM.
In short, we want to:
* Allow editing the Hosted engine VM, storage domain, disks, networks, etc.
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration
Hi all,
Upcoming in 3.6 is an enhancement for managing the hosted engine VM.
In short, we want to:
* Allow editing the Hosted engine VM, storage domain, disks, networks, etc.
* Have a shared configuration for the hosted engine VM
* Have a backup for the hosted engine VM configuration
Please review.
Why not just script them to migrate one after the other? The CLI is nice
and simple, and the SDK is even nicer.
On Tue, Apr 21, 2015 at 11:29 AM, Ernest Beinrohr
<ernest.beinr...@axonpro.sk> wrote:
> oVirt uses dd and qemu-img for live migration. Is it possible to limit
> the number of concurren
oVirt uses dd and qemu-img for live migration. Is it possible to limit
the number of concurrent live storage moves or to limit the bandwidth used?
I'd like to move about 30 disks to another storage during the night, but
each takes about 30 minutes, and if more than one runs, it chokes my
sto
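Following up on the scripting suggestion above: a rough, untested sketch of
such a loop with the Python SDK (ovirt-engine-sdk-python, v3 API). The engine
URL, credentials, disk names and target domain name below are placeholders,
and the exact broker calls may vary between SDK versions:

import time

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal',
          password='secret',
          insecure=True)

# Move the disks one at a time so only a single copy hits the storage.
target = params.StorageDomain(name='new_storage')
for disk_name in ['disk01', 'disk02']:  # list the ~30 disks to move here
    disk = api.disks.list(query='name=%s' % disk_name)[0]
    disk.move(params.Action(storage_domain=target))
    # Poll until this move finishes before starting the next one.
    while api.disks.get(id=disk.get_id()).get_status().get_state() == 'locked':
        time.sleep(30)

api.disconnect()

Serializing the moves sidesteps the missing concurrency limit; capping the
bandwidth itself would still have to happen at the network or storage layer.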
Greetings users and developers,
Just put up a feature page for the "NUMA aware KSM support";
In summary,
===
The KSM service optimizes shared memory pages across all NUMA nodes. The
consequence is that shared memory pages (controlled by KSM) may be read from many
CPUs across NUMA node
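For background, kernels 3.9 and later already expose a related knob,
/sys/kernel/mm/ksm/merge_across_nodes (0 = merge pages only within a NUMA
node). A small illustrative check in Python, assuming that sysfs path is
present:

# Illustrative only: report the kernel's KSM NUMA merging policy.
# Assumes /sys/kernel/mm/ksm/merge_across_nodes exists (kernel >= 3.9).
path = "/sys/kernel/mm/ksm/merge_across_nodes"
try:
    with open(path) as f:
        value = f.read().strip()
    print("merge_across_nodes = %s "
          "(0: per-node merging, 1: merge across all nodes)" % value)
except IOError:
    print("KSM NUMA knob not available on this kernel")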
On 21.04.2015 at 16:19, Maikel vd Mosselaar wrote:
> Hi Juergen,
>
> The load on the nodes rises to over 200 during the event. Load on the
> nexenta stays normal and nothing strange shows up in the logging.
ZFS + NFS could still be the root of this. Is your pool configuration
RaidzX or mirror, with or
Hi Juergen,
The load on the nodes rises to over 200 during the event. Load on the
nexenta stays normal and nothing strange shows up in the logging.
For our storage interfaces on our nodes we use bonding in mode 4
(802.3ad), 2x 1Gb. The nexenta has a 4x 1Gb bond in mode 4 as well.
Kind regards,
Maikel
Hi Fred,
This is one of the nodes from yesterday around 01:00 (20-04-15). The
issue started around 01:00.
https://bpaste.net/raw/67542540a106
The VDSM logs are very big so I am unable to paste a bigger part of the
logfile, I don't know what the maximum allowed attachment size is of the
mail
On 20/04/15 17:29 +0200, Arman Khalatyan wrote:
In my oVirt GUI (version 3.5.2-1.el6 prerelease) I can see the following:
Size: 20479 GB
Available: 11180 GB
Used: 9299 GB
Allocated: 290 GB
Over Allocation Ratio: -52%
What does it mean??
I think it just means that you haven't overcommitted your sto
Hi,
what about load, latency, or strange dmesg messages on the Nexenta? Are you
using bonded Gbit networking? If yes, which mode?
Cheers,
Juergen
On 20.04.2015 at 14:25, Maikel vd Mosselaar wrote:
> Hi,
>
> We are running oVirt 3.5.1 with 3 nodes and a separate engine.
>
> All on CentOS 6.6:
> 3
Hi,
Can you please attach VDSM logs ?
Thanks,
Fred
- Original Message -
> From: "Maikel vd Mosselaar"
> To: users@ovirt.org
> Sent: Monday, April 20, 2015 3:25:38 PM
> Subject: [ovirt-users] storage issue's with oVirt 3.5.1 + Nexenta NFS
>
> Hi,
>
> We are running ovirt 3.5.1 with 3
Hi,
We are running oVirt 3.5.1 with 3 nodes and a separate engine.
All on CentOS 6.6:
3 x nodes
1 x engine
1 x storage nexenta with NFS
For multiple weeks we have been experiencing issues where our nodes cannot
access the storage at random moments (at least that's what the nodes think).
When the no
Hi all,
So, the kernel might be the problem; I'm running CentOS 6 with a 2.6 kernel:
[root@bigvirt qemu]# uname -r
2.6.32-504.12.2.el6.x86_64
It seems I need to upgrade to CentOS 7 first.
Winfried
On 21-04-15 at 10:0
I enabled nested virtualization via:
1) echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
2) modprobe -r kvm-amd
3) modprobe kvm-amd
After that I can see the svm flag on the VM CPU, but for some reason I still receive the
same error "No virtualization Hardware was detected" when I try to deplo
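One quick sanity check (a small Python sketch, assuming the usual sysfs
layout for module parameters) is to confirm the module really was reloaded
with nesting enabled:

# Read the kvm-amd "nested" parameter back from sysfs; the value is
# typically "1" on older kernels or "Y" on newer ones.
with open("/sys/module/kvm_amd/parameters/nested") as f:
    nested = f.read().strip()
print("kvm_amd nested = %s (enabled: %s)" % (nested, nested in ("1", "Y")))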
Hi,
I would like to add to the oVirt Weekly Sync agenda the following topics
- oVirt 3.5.2 GA - Go / No Go
- oVirt 3.6.0 Feature submission deadline
- oVirt Infra security hardening status
- Action items status from previous sync meeting
- Schedule / Volunteers for "office hours" series
Thanks,
Hi all,
For testing purposes I installed vdsm-hook-nestedvt:
rpm -qi vdsm-hook-nestedvt.noarch
Name        : vdsm-hook-nestedvt       Relocations: (not relocatable)
Version     : 4.16.10                  Vendor: (none)
Hi
Did you try copying the template to the new storage domain?
Under the Template tab -> Disks sub-tab -> Copy
Regards,
__
Aharon Canan
- Original Message -
> From: "Dael Maselli"
> To: users@ovirt.org
> Sent: Monday, April 20, 2015 5:4