this in that scenario?
On Sat, Mar 5, 2016 at 3:32 PM, combuster <combus...@gmail.com> wrote:
It's great to know that it's working.
Best of luck, Clint.
On 03/05/2016 09:09 PM, cl...@theboggios.com wrote:
On 2016-03-05 13:34, combuster wrote:
Correct procedure would be:
1. On each of your ovirt nodes run:
yum install vdsm-hook-macspoof
2. On the engine run:
sudo engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$"
and restart the ovirt-engine service.
3. Edit the VM's Custom Properties, select the macspoof keyword and set the
value "true" for it.
If you want to remove filtering for a single interface, then replace
steps 2 and 3 as outlined in the README.
Kind regards,
Ivan
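Pulling the steps above into one copy-paste sketch: the engine-config value
follows the hook's README, while the service-restart line and the webadmin
step are assumptions for a standalone 3.x engine host:

  # on every oVirt node: install the hook
  yum install -y vdsm-hook-macspoof
  # on the engine: expose the macspoof custom property, then restart the engine
  sudo engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$"
  sudo service ovirt-engine restart
  # finally, in the webadmin: VM -> Edit -> Custom Properties -> macspoof = true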
On 03/05/2016 08:21 PM, cl...@theboggios.com wrote:
On 2016-03-05 13:13, combuster wrote:
Ignore the link (minor accident while pasting). Yum will download the
appropriate one from the repos.
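A quick way to confirm the hook actually landed on a host (the hook directory
follows the standard vdsm layout; the exact script name may differ by version):

  rpm -q vdsm-hook-macspoof
  ls /usr/libexec/vdsm/hooks/before_vm_start/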
On 03/05/2016 08:09 PM, combuster wrote:
Just the hook rpm (vdsm-hook-macspoof
<http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/noarch/vdsm-hook-macspoof-4.16.10-0.el7.noarch.
rhev-h, is there anything required to
be installed on the hypervisor hosts themselves?
Thanks,
Chris
On Mar 5, 2016 1:47 PM, "combuster" <combus...@gmail.com> wrote:
Hi Clint, you might want to check the macspoof hook features here:
https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/macspoof
This should override ARP/MAC-spoofing filtering, which might be the cause of
your issues with the OpenVPN setup (first guess).
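If you want to confirm that the filter is what is biting you, libvirt can be
queried read-only on the host (vdsm-no-mac-spoofing is the filter vdsm
normally applies, but treat the name as an assumption for your version):

  # list the nwfilters libvirt knows about
  virsh -r nwfilter-list
  # check whether the running VM's NIC still has a filterref attached
  virsh -r dumpxml <vm-name> | grep -A1 filterref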
On 03/05/2016 07:30 PM, Clint Boggio wrote:
Task=`bf482d82-d8f9-442d-ba93-da5ec225c8c3`::ref 1 aborting True
bf482d82-d8f9-442d-ba93-da5ec225c8c3::DEBUG::2016-01-19
18:03:20,799::task::919::Storage.TaskManager.Task::(_runJobs)
Task=`bf482d82-d8f9-442d-ba93-da5ec225c8c3`::aborting: Task is
aborted: 'Cannot zero out volume' - code 374
On 01/18/2016
Increasing the network ping timeout and lowering the number of I/O threads
helped. The disk image gets created, but during that time the nodes are pretty
much unresponsive. I should've expected that on my setup...
In any case, I hope this helps...
Ivan
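For anyone searching later: the knobs meant above live partly on the engine
and partly in /etc/vdsm/vdsm.conf. A sketch of the kind of change involved
(the key names below match 3.x-era builds as far as I can tell; verify them
against your version before applying anything):

  # engine side: be more patient with a slow host
  sudo engine-config -s vdsTimeout=300

  # host side, /etc/vdsm/vdsm.conf:
  [irs]
  # give slow storage operations more time before a task is aborted
  process_pool_timeout = 180

  # then restart vdsm on the host
  service vdsmd restart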
On 01/19/2016 06:43 PM, combuster wrote:
OK, setting
Hi Fil,
this worked for me a couple of months back:
http://lists.ovirt.org/pipermail/users/2015-November/036235.html
I'll try to set this up again and see if there are any issues. Which
oVirt release are you running?
Ivan
On 01/18/2016 02:56 PM, Fil Di Noto wrote:
I'm having trouble
Hi,
you need at least two servers in the cluster for the power management test to
succeed. If you do, make sure that the IP address of the iLO is pingable from
all hosts in the cluster. The oVirt engine log would also help in
troubleshooting the issue.
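A manual check from another host in the cluster rules out connectivity and
credentials before blaming oVirt (fence_ilo ships with fence-agents; fill in
your own iLO address and login):

  # is the iLO reachable at all?
  ping -c 3 <ilo-ip>
  # query power status the same way the engine's fencing would
  fence_ilo -a <ilo-ip> -l <ilo-user> -p <ilo-password> -o status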
On 01/18/2016 02:10 PM, alireza sadeh seighalan wrote:
hi
Well, if it's a bug then it would have been resolved by now :)
https://bugzilla.redhat.com/show_bug.cgi?id=1026662
I had the same doubts as you did. I really don't know why it wouldn't
connect to the iLO when the default port is specified, but I'm glad that you
found a workaround.
Ivan
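For the archives, the workaround boils down to passing the port explicitly
instead of relying on the default (ipport is the usual fence-agents option
name and 443 the usual iLO port, but confirm both for your agent version):

  # on the command line
  fence_ilo -a <ilo-ip> -l <ilo-user> -p <ilo-password> -u 443 -o status
  # or in the webadmin: Host -> Power Management -> Options field:
  #   ipport=443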
On 06/30/2014 08:36
/etc/libvirt/libvirtd.conf and /etc/vdsm/logger.conf
, but unfortunately I may have jumped to conclusions: last weekend, that
very same thin-provisioned VM was running a simple export for 3 hrs
before I killed the process. But I wondered:
1. The process that runs behind the export is
OK, I have good news and bad news :)
The good news is that I can run different VMs on different nodes when all
of their drives are on the FC storage domain. I don't think that all of the
I/O is running through the SPM, but I need to test that. Simply put, for every
virtual disk that you create on the shared
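For context on that last point: on a block (FC/iSCSI) storage domain, every
virtual disk is carved out as an LVM logical volume inside the domain's
volume group, so the disks can be inspected directly on a host (read-only;
the VG name is the storage domain UUID):

  # list the LVs backing the disk images in the domain's VG
  lvs -o lv_name,lv_size,lv_tags <storage-domain-uuid>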
Bad news happens only when running a VM for the first time, if it helps...
it in reproduce steps.
If you know other steps to reproduce this error without blocking the
connection to storage, it would also be wonderful if you could provide them.
Thanks
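In case it helps anyone reproducing this: "blocking connection to storage"
usually amounts to a firewall drop on one host (a generic sketch, with the
address as a placeholder):

  # cut the host off from the storage server to simulate an outage
  iptables -A OUTPUT -d <storage-server-ip> -j DROP
  # ...observe sanlock/vdsm behaviour, then restore connectivity
  iptables -D OUTPUT -d <storage-server-ip> -j DROP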
- Original Message -
From: Andrew Lau and...@andrewklau.com
To: combuster combus...@archlinux.us
Cc: users users
disk
creation in case I have selected it to run on a specific node?
this thought up.
On Tue, Jun 10, 2014 at 3:11 PM, combuster combus...@archlinux.us wrote:
Nah, I've explicitly allowed the hosted-engine VM to access the NAS
device as well as the NFS share itself, before the deploy procedure even
started. But I'm puzzled at how you can reproduce the bug; all was well
went all to ... My strong
recommendation is not to use the self-hosted engine feature for production
purposes until the mentioned bug is resolved. But it would really help
to hear from someone on the dev team on this one.
message :(
Thanks,
Andrew
On Fri, Jun 6, 2014 at 3:20 PM, combuster combus...@archlinux.us
wrote:
Hi Andrew,
this is something that I saw in my logs too, first on one node and then
on the other three. When that happened on all four of them, the engine was
corrupted beyond repair.
First of all, I think that message is saying that sanlock can't get a
lock on the shared storage that you defined
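When in doubt about what sanlock holds (or fails to hold), it can be asked
directly on the host; both commands come with the stock sanlock package:

  # show the lockspaces and resources sanlock is tracking
  sanlock client status
  # the daemon log usually names the storage path it could not lock
  tail -n 50 /var/log/sanlock.log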
One word of caution so far: when exporting any VM, the node that acts as SPM
is stressed out to the max. I relieved the stress by a certain margin by
lowering the libvirtd and vdsm log levels to WARNING. That shortened the
export procedure by a factor of at least five. But the vdsm process on the SPM
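The two files involved were already named earlier in the thread
(/etc/libvirt/libvirtd.conf and /etc/vdsm/logger.conf); a sketch of the edits
(the numeric libvirt level and the vdsm logger section reflect common 3.x-era
defaults, so double-check them on your build):

  # /etc/libvirt/libvirtd.conf  (3 = WARNING)
  log_level = 3

  # /etc/vdsm/logger.conf -- change DEBUG to WARNING in the logger sections:
  [logger_root]
  level=WARNING

  # then restart the services on the host
  service libvirtd restart && service vdsmd restart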
Hi,
I have a 4-node cluster setup and my storage options right now are FC-based
storage, one partition per node on a local drive (~200 GB each), and an
NFS-based NAS device. I want to set up the export and ISO domains on the NAS
and there are no issues or questions regarding those two. I wasn't aware
I need to scratch Gluster off because the setup is based on CentOS 6.5, so
essential prerequisites like QEMU 1.3 and libvirt 1.0.1 are not met.
Any info regarding FC storage domain would be appreciated though.
Thanks
Ivan
On Sunday, 1. June 2014. 11.44.33 combus...@archlinux.us wrote:
Hi,
I