Hi all,
I used "engine-manage-domains" to add AD to ovirt in earlier version.
What should I do in ovirt 4.1? Hope someone can help. Thanks!
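In case it helps: as far as I know, engine-manage-domains was deprecated in favour of the aaa-ldap extension, so in 4.x the usual route for AD is the interactive setup tool (commands below are the standard ones on an RPM-based engine host):

  # install and run the interactive LDAP/AD setup tool
  yum install -y ovirt-engine-extension-aaa-ldap-setup
  ovirt-engine-extension-aaa-ldap-setup
  # restart the engine so it picks up the new aaa profile
  systemctl restart ovirt-engine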
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
On 6/6/17, Yedidyah Bar David wrote:
> On Tue, Jun 6, 2017 at 6:55 AM, Leni Kadali Mutungi
> wrote:
> So, did the engine also start successfully?
Successfully ran
`/home/user/ovirt_engine/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py
start`
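A quick way to confirm the engine really came up in a development deployment is the health servlet (the port assumes the default dev-env HTTP port, so adjust if yours differs):

  curl http://localhost:8080/ovirt-engine/services/health
  # should print something like: DB Up!Welcome to Health Status!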
Hey Matthew,
I think it's VDSM that handles the pausing & resuming of the VMs.
An analogous small-scale scenario...the Gluster layer for one of our
smaller oVirt clusters temporarily lost quorum the other week, locking all
I/O for about 30 minutes. The VMs all went into pause & then resumed
I managed to solve the problem with guestfish and the help of this page
http://manpages.ubuntu.com/manpages/precise/man1/guestfish.1.html
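For reference, a minimal guestfish session for poking at a guest disk that won't boot might look like this (the image path is just an example):

  guestfish --rw -a /path/to/guest-disk.img
  ><fs> run
  ><fs> list-filesystems
  ><fs> mount /dev/sda1 /
  ><fs> ls /boot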
On 6 June 2017 at 22:07, Maton, Brett wrote:
> I'm pretty sure that I've got a failing (to boot) kernel on my Self hosted
>
I finally figured out what the error was all about
The default location for the gdeploy script is:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
The oVirt node installer
"ovirt-node-ng-installer-ovirt-4.1-2017060504.iso" installed it in a
different location:
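If anyone hits the same mismatch, one generic way is to find where the ISO actually put the script and point the gdeploy configuration at it. The paths and arguments below are only illustrative, and the [script1] stanza follows the usual layout of a cockpit-generated gdeploy conf:

  find / -name grafton-sanity-check.sh 2>/dev/null

  # then adjust the matching section of the gdeploy conf, e.g.
  [script1]
  action=execute
  file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h host1,host2,host3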
I'm pretty sure that I've got a failing (to boot) kernel on my self-hosted
engine, as this has happened a few times on 'regular' guest VMs.
I've tried getting it to boot from (several) CD images, but there is no
VNC console and hosted-engine --console just hangs at "Escape character is '^]'".
Any
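One thing that sometimes helps to get a graphical console to the HE VM is setting a temporary console password and connecting directly; the display port below is an assumption (it can be checked on the vdsm/libvirt side):

  hosted-engine --add-console-password   # prompts for a temporary console password
  remote-viewer vnc://<he-host>:5900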
Ok, I will re-check a few things based on this:
https://bugzilla.redhat.com/show_bug.cgi?id=1405447
On 2017-06-06 12:58, ov...@fateknollogee.com wrote:
> How do I check that?
> Today, I'm re-installing but getting this error message:
> PLAY [gluster_servers]
How do I check that?
Today, I'm re-installing but getting this error message:
PLAY [gluster_servers]
*
TASK [Run a shell script]
**
fatal: [ovirt-N1-f25.fatek-dc.lab]: FAILED! =>
Hello,
I'm trying to migrate my Hosted Engine from an old NFS storage domain to
a new NFS storage domain. I am running 4.1 now.
I've searched around and found this reference
http://lists.ovirt.org/pipermail/users/2017-January/078739.html. Is it
possible now to migrate Host Engine storage using
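For what it's worth, the route usually suggested around 4.1 was backup, redeploy hosted-engine on the new storage domain, then restore; the backup step itself is just:

  engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log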
I use Open-E in production on standard Intel (Supermicro) hardware. It
can work in A/A (only with respect to oVirt, i.e. one LUN normally active on
one server, the other LUN normally staying on the other node) or A/P
mode with multipath. Even in A/P mode it fails over quickly enough to
avoid VM
Thanks for the replies, all!
Yep, Chris is right. TrueNAS HA is active/passive and there isn't a way around
that when failing between heads.
Sven: In my experience with iX support, they have directed me to reboot the
active node to initiate failover. There's "hactl takeover" and "hactl
On Tuesday, June 6, 2017 11:18:35 AM EDT Anthony. Fillmore wrote:
> Hey Alexander,
>
> I did those exact steps roughly two days ago...the host is still stuck in
> preparing for maintenance mode. Confirming the host has been rebooted
> seems to have no effect.
>
> Any other ideas? Some way to
On Monday, June 5, 2017 4:10:54 PM EDT Brandon. Markgraf wrote:
> Hello oVirt Users,
> We have a cluster that has been decommissioned and we are trying to remove
> the hosts from the oVirt Engine but one host is stuck in "Preparing for
> Maintenance". It's preventing me from removing that host
Once upon a time, Juan Pablo said:
> Chris, if you have active-active with multipath: you upgrade one system,
> reboot it, check it came active again, then upgrade the other.
Yes, but that's still not how a TrueNAS (and most other low- to
mid-range SANs) works, so is
Chris, if you have active-active with multipath: you upgrade one system,
reboot it, check it came active again, then upgrade the other.
-seamless.
-no service interruption.
-not locked to any storage solution.
multipath was designed exactly for that.
2017-06-06 11:03 GMT-03:00 Chris Adams
Once upon a time, Juan Pablo said:
> I'm saying you can do it with multipath and not rely on TrueNAS/FreeNAS,
> with an active/active configuration on the virt side...instead of
> active/passive on the storage side.
But there's still only one active system (the active
I'm saying you can do it with multipath and not rely on TrueNAS/FreeNAS,
with an active/active configuration on the virt side...instead of
active/passive on the storage side.
2017-06-06 10:44 GMT-03:00 Chris Adams :
> Once upon a time, Juan Pablo
Hello oVirt Users,
We have a cluster that has been decommissioned and we are trying to remove the
hosts from the oVirt Engine but one host is stuck in "Preparing for
Maintenance". It's preventing me from removing that host and the associated
cluster.
The physical server has been shut down and
Once upon a time, Juan Pablo said:
> I think it's not related to anything on the TrueNAS side. If you are using
> iSCSI multipath you should be using round-robin
TrueNAS HA is active/standby, so multipath has nothing to do with
rebooting/upgrading a TrueNAS.
--
Chris
I think it's not related to anything on the TrueNAS side. If you are using
iSCSI multipath you should be using round-robin; if one of the paths goes
down you still have the other path to your data, so no sanlock errors.
Unfortunately, if you want iSCSI mpath on oVirt, it's preferred to edit the
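An illustrative /etc/multipath.conf fragment for round-robin path selection looks roughly like this (values are examples, not oVirt defaults, and keep in mind vdsm normally manages that file):

  defaults {
      path_selector        "round-robin 0"
      path_grouping_policy multibus
      failback             immediate
      no_path_retry        4
  }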
Once upon a time, Sven Achtelik said:
> I was failing over by rebooting one of the TrueNAS nodes and this took some
> time for the other node to take over. I was thinking about asking the TN guys
> if there is a command or procedure to speed up the failover.
That's the
Well you can always run the engine on a physical node directly too.
The question is why would you want that when hosted engine gives you
fail-over and reliability features.
Failover & reliability are definitely worth having
So I would install Node on all four hosts, enable all of them for
Just to note that the mentioned logs below are from the dd with bs=512,
which were failing.
Attached the full logs from mount and brick.
Alex
On Tue, Jun 6, 2017 at 3:18 PM, Abi Askushi wrote:
> Hi Krutika,
>
> My comments inline.
>
> Also attached the strace of:
>
Hi Krutika,
My comments inline.
Also attached the strace of:
strace -y -ff -o /root/512-trace-on-root.log dd if=/dev/zero
of=/mnt/test2.img oflag=direct bs=512 count=1
and of:
strace -y -ff -o /root/4096-trace-on-root.log dd if=/dev/zero
of=/mnt/test2.img oflag=direct bs=4096 count=16
I have
Thanks for the update Lev.
On Sun, Jun 4, 2017 at 11:48 AM, Lev Veyde wrote:
> Hi Cam,
>
> The reason why it works on RHEL 6.7 clients is that the version of
> virt-viewer supplied with it doesn't support the mechanism to check
> for the minimum required
I stand corrected.
Just realised the strace command I gave was wrong.
Here's what you would actually need to execute:
strace -y -ff -o
-Krutika
On Tue, Jun 6, 2017 at 3:20 PM, Krutika Dhananjay
wrote:
> OK.
>
> So for the 'Transport endpoint is not connected' issue,
Can anybody help me solve this?
I'm having trouble with live migration. Migration always finishes with an error.
Maybe it is relevant: on the dashboard the cluster status is always N/A. VMs can
run on both hosts. After turning on debug logging for libvirt I got these errors:
2017-06-06 09:41:04.842+: 1302:
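For reference, the usual way to turn on libvirt debug logging is via /etc/libvirt/libvirtd.conf (values below are illustrative) followed by a libvirtd restart:

  log_filters="1:libvirt 1:qemu 3:object 3:event 3:json 3:rpc"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"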
OK.
So for the 'Transport endpoint is not connected' issue, could you share the
mount and brick logs?
Hmmm.. 'Invalid argument' error even on the root partition. What if you
change bs to 4096 and run?
The logs I showed in my earlier mail show that gluster is merely returning
the error it got
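One possible explanation for EINVAL on a bs=512 O_DIRECT write is a 4K-native underlying device, since direct I/O has to be aligned to the logical block size; that can be checked with (device name is an example):

  blockdev --getss /dev/sdb    # logical sector size
  blockdev --getpbsz /dev/sdb  # physical sector size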
Hi,
> Real hardware: what I meant to say was I have 4 hosts (not VMs).
> If I understand you correctly, I should install oVirt Node (using the ISO)
> on 3 of my hosts & the hosted engine runs as a VM on "Host 1"?
Hosted engine runs as a VM on one of the hosts, but not necessarily on
the first one
Sandro, thx for the reply.
Once I get comfortable with oVirt + CentOS, then I'll go & use Fedora
25/26 and contribute!
Real hardware: what I meant to say was I have 4 hosts (not VMs).
If I understand you correctly, I should install oVirt Node (using the
ISO) on 3 of my hosts & the hosted
I forgot to mention that I'm running the latest oVirt 4.1.2 version.
Thanks
On 06/06/2017 09:50 AM, Arsène Gschwind wrote:
Hi,
I've migrated our oVirt engine to hosted-engine located on a FC
storage LUN, so far so good.
For some reason I'm not able to start the hosted-engine VM, after
Hi,
I've migrated our oVirt engine to a hosted engine located on a FC storage
LUN; so far so good.
For some reason I'm not able to start the hosted-engine VM. After
digging in the log files I could figure out the reason: the network
device was set to "None" as follows:
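In case it helps others: a common workaround for a broken device entry in the HE definition is to copy the generated vm.conf, fix the device there, and start the VM from the edited copy. The paths and the exact key below are from memory, so double-check against your own vm.conf:

  cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/fixed_vm.conf
  # edit /root/fixed_vm.conf and fix the nic entry, e.g. device:bridge instead of device:None
  hosted-engine --vm-start --vm-conf=/root/fixed_vm.conf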
On Mon, Jun 5, 2017 at 10:13 PM, wrote:
> I want to test oVirt with real hardware, no more nested VMs.
> 3 hosts, each vm will be Fedora (maybe CentOS, I prefer Fedora)
>
Please note Fedora support within the oVirt project is a best-effort task.
There's no testing of oVirt
On Tue, Jun 6, 2017 at 4:24 AM, Langley, Robert
wrote:
> FYI: My mistake. The firewall port for VDSM needed to be added to my zone(s).
> Yay! The host is now in GREEN status within the Default Cluster.
>
Happy to see you solved the issue!
>
> Sent using OWA for iPhone
>
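For anyone hitting the same thing: VDSM listens on 54321/tcp, so with firewalld the fix is along the lines of (zone name is just an example):

  firewall-cmd --permanent --zone=public --add-port=54321/tcp
  firewall-cmd --reload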
On Tue, Jun 6, 2017 at 2:10 AM, Brendan Hartzell wrote:
> As requested,
>
It seems fine; there are no pending locks now.
Could you please retry?
>
> The output of ovirt-hosted-engine-cleanup
>
> [root@node-1 ~]# ovirt-hosted-engine-cleanup
> This will de-configure the host
Hi Matthew,
I'm also using an HA TrueNAS as the storage. I have NFS as well as iSCSI shares
and did some in-place upgrades. The failover went more or less smoothly; it was
more of an issue on the TrueNAS side where the different VLANs didn't come up.
This caused the engine to take down the
On Tue, Jun 6, 2017 at 6:55 AM, Leni Kadali Mutungi
wrote:
> Setup was successful. Attached is the message I received. I didn't
> mind the firewalld bits since I don't have that installed. However,
> none of the ovn-* commands worked. I tried locating their equivalents,
>
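A hedged note on the ovn-* tools: they come from the OVN / Open vSwitch packages rather than from the engine itself, so on a source-built engine they may simply not be installed. Something like the following shows whether the binaries are present at all:

  command -v ovn-nbctl ovn-sbctl
  find / -name 'ovn-nbctl' 2>/dev/null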