Please disregard my previous post. Clearly a layer-8 problem.
On Sat, 2015-02-21 at 02:42 +0100, Tobias Fiebig wrote:
Heho,
I am currently moving some VMs from the fcal store to NFS so that I can
restructure the fcal storage.
However, I noticed that dd is running with oflag=direct, and I/O on the
NFS server's network is currently very slow (~160 Mbit/s). Starting dd
without oflag=direct yields link-speed transmission (~1 Gbit/s).
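The throughput gap is expected: oflag=direct bypasses the page cache, so every block becomes a synchronous write that waits for the NFS server's acknowledgement. A sketch of the difference, with placeholder paths (a small temp file stands in for the VM disk; on the real system if= would be the fcal volume and of= a file on the NFS mount):

```shell
# oflag=direct bypasses the page cache: every block is a synchronous
# write, and over NFS each one waits for a server round-trip.
dd if=/dev/zero of=/tmp/vm-copy.img bs=1M count=8 oflag=direct 2>/dev/null ||
dd if=/dev/zero of=/tmp/vm-copy.img bs=1M count=8  # fallback where direct I/O is unsupported

# Without oflag=direct, writes are batched through the page cache and
# the link can be saturated. Keeping oflag=direct but raising bs is a
# middle ground, since fewer synchronous round-trips are needed.
dd if=/dev/zero of=/tmp/vm-copy.img bs=8M count=1
```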
Heho,
I could establish that this only happens when
CustomVdsFenceOptionMapping is not set.
So it is probably a bug.
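For reference, a minimal sketch of setting the mapping so it is not empty (the key name is from the post; the agent name and option mapping are my assumptions, following the Custom_Fencing page):

```shell
# Assumed values; format is agent:engineOption=agentOption
engine-config -s CustomVdsFenceOptionMapping="intelmodular:port=port"
service ovirt-engine restart   # engine restart needed to pick up the change
```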
With best Regards,
Tobias
On Mon, 2015-02-09 at 17:24 +0100, Tobias Fiebig wrote:
Heho,
Currently there are four active hosts in the cluster. With the pre-existing
fence
, Eli Mesika emes...@redhat.com wrote:
- Original Message -
From: Tobias Fiebig m...@wybt.net
To: users@ovirt.org
Sent: Thursday, February 5, 2015 8:43:26 PM
Subject: [ovirt-users] Issues adding a custom fencing agent
Heho,
I currently try to get an Intel Modular server working
Heho,
Assigned both BZ to me, will handle ...
Thanks for finding that
Np. Btw... is there some further documentation on the fencing scripts?
I basically ended up reading the Python scripts to figure out how they
work, and found that the supplied commands do not really match the options
expected via
Heho,
solved (more or less) with:
https://bugzilla.redhat.com/show_bug.cgi?id=1190843
and
https://bugzilla.redhat.com/show_bug.cgi?id=1190845
With best Regards,
Tobias
_______________________________________________
Users mailing list
Users@ovirt.org
Heho,
I am currently trying to get an Intel Modular server working with fencing,
following:
http://lists.ovirt.org/pipermail/devel/2014-February/006525.html
and
http://www.ovirt.org/Custom_Fencing
Base info:
Scientific Linux 6.6 (engine/hosts)
oVirt 3.5
I added an agent with:
engine-config -s
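The engine-config call above is truncated in the archive; as a hedged sketch, the full registration per the Custom_Fencing page might look as follows (all values for fence_intelmodular are my assumptions, not taken from the post):

```shell
# Assumed values for fence_intelmodular on oVirt 3.5 / SL 6.6.
engine-config -s CustomVdsFenceType="intelmodular"
engine-config -s CustomFenceAgentMapping="intelmodular=intelmodular"
engine-config -s CustomVdsFenceOptionMapping="intelmodular:port=port"
service ovirt-engine restart
```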
Heho,
For some reason I currently have two active datacenters in oVirt 3.3
which should be cold-migrated to one new datacenter.
Is there a simple way to attach an NFS storage to all datacenters, so it
can be used to migrate the virtual machines? I was sadly unable to find
documentation on this
Heho,
On Thu, 2014-01-16 at 10:14 -0500, Elad Ben Aharon wrote:
When the domain is active on the second DC, you'll be able to see your
exported VMs on 'VM import' sub-tab under the export storage domain.
Import the VMs to the second DC.
I was afraid that this would be the way to go. So
Heho,
On Thu, 2014-01-16 at 11:10 -0500, Elad Ben Aharon wrote:
You don't have to attach/detach the domain for every VM export. Sorry for not
being clear. You can export all your VMs to the export domain at once and
then detach the export domain and attach it to the second DC.
May have
Heho,
Unfortunately yes, you'll suffer from a down time during this process.
Well, since the downtime could be significantly decreased, and the comfort
of migrating one VM and then the next largely increased, if an NFS storage
could be mounted on multiple datacenters simultaneously, I wonder if such a
Heho,
we are using an Intel Modular Server (IMS) as the virtualization center at
our university chair.
These systems should now be migrated to oVirt. However, apparently
fencing/power management via /usr/sbin/fence_intelmodular cannot be
configured in oVirt 3.3.
Does anyone have an idea how to
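Before wiring the agent into the engine, it can be exercised directly from a host's shell; a hedged sketch (the address, SNMP community, and plug number are placeholders, not from the thread):

```shell
# fence_intelmodular is SNMP-based; -n selects the module/plug to act on.
fence_intelmodular -a 192.0.2.10 -c public -n 3 -o status
```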