On Mon, Feb 6, 2017 at 12:42 PM, Ralf Schenk wrote:
> Hello,
>
> I set the host to maintenance mode and tried to undeploy engine via GUI.
> The action in the GUI doesn't show an error, but afterwards it still shows only
> "Undeploy" on the hosted-engine tab of the host.
>
> Even removing
Hi Sandro,
I upgraded my 2 host setup + engine (engine is currently on separate
hardware, but I plan to make it self-hosted), and it went like
clockwork. My engine + hosts were running 4.0.5 and 7.2, so after
installing 4.1 release, I did an OS update to 7.3 first, starting with
the engine, then
Hello,
I set the host to maintenance mode and tried to undeploy engine via GUI.
The action in the GUI doesn't show an error, but afterwards it still shows
only "Undeploy" on the hosted-engine tab of the host.
Even removing the host from the cluster doesn't work because the GUI
says "The hosts marked
On Sat, Feb 4, 2017 at 11:52 AM, Ralf Schenk wrote:
> Hello,
>
> I have set up 3 hosts for the engine, 2 of them are working correctly. There is
> no other host even having broker/agent installed. Is it possible that the
> error occurs because the hosts are multihomed (Management IP,
https://bugzilla.redhat.com/show_bug.cgi?id=1419352
Done.
***
Dr. Arman Khalatyan eScience -SuperComputing
Leibniz-Institut für Astrophysik Potsdam (AIP)
An der Sternwarte 16, 14482 Potsdam, Germany
On Sun, Feb 5, 2017 at 9:39 PM, Arman Khalatyan wrote:
> All upgrades went smoothly! Thanks for the release.
> There is a minor problem I saw:
> After upgrading from 4.0.6 to 4.1 the GUI dialog for moving the disks from
> one Storage to another is not rendered correctly
All upgrades went smoothly! Thanks for the release.
There is a minor problem I saw:
After upgrading from 4.0.6 to 4.1, the GUI dialog for moving disks from
one storage to another is not rendered correctly when multiple disks (>8)
are selected for the move.
please see the attachment:
On Fri, Feb 3, 2017 at 9:24 AM, Sandro Bonazzola
wrote:
>
>
> On Fri, Feb 3, 2017 at 9:14 AM, Yura Poltoratskiy
> wrote:
>
>> I've done an upgrade of ovirt-engine yesterday. There were two problems.
>>
>> The first - packages from epel repo, solved by
Hi all,
On 02.02.2017 at 13:19, Sandro Bonazzola wrote:
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well, let us know it
works fine for you :-)
I just updated my test environment (3 hosts, hosted engine, iSCSI) to
4.1 and it
Hello,
I have set up 3 hosts for the engine, 2 of them are working correctly. There
is no other host even having broker/agent installed. Is it possible that
the error occurs because the hosts are multihomed (management IP, IP for
storage) and can communicate via different IPs?
hosted-engine
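Not an answer, but a quick way to see which addresses the deployed HA setup actually recorded. This is a sketch assuming a default hosted-engine install; the config path and keys below are the ovirt-hosted-engine-ha defaults, so adjust for your setup:

```shell
# Inspect the gateway/storage addresses hosted-engine was deployed with
# (default config location on an EL7 hosted-engine host)
grep -E '^(gateway|storage|host_id)=' /etc/ovirt-hosted-engine/hosted-engine.conf

# Compare against what is actually configured on each interface
# of this multihomed host
ip -o -4 addr show
```

If the storage entry points at an address on the storage network while the engine reaches the host over the management network, that mismatch is worth ruling out first.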
On Fri, Feb 3, 2017 at 7:20 PM, Simone Tiraboschi
wrote:
>
>
> On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk wrote:
>
>> Hello,
>>
>> of course:
>>
>> [root@microcloud27 mnt]# sanlock client status
>> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
>>
On Fri, Feb 3, 2017 at 5:22 PM, Ralf Schenk wrote:
> Hello,
>
> of course:
>
> [root@microcloud27 mnt]# sanlock client status
> daemon 8a93c9ea-e242-408c-a63d-a9356bb22df5.microcloud
> p -1 helper
> p -1 listener
> p -1 status
>
> sanlock.log attached. (Beginning 2017-01-27
Hi,
I updated our oVirt cluster the day after 4.1.0 went public.
The upgrade was simple, but while migrating and upgrading hosts some VMs got
stuck at 100% CPU usage and were totally unresponsive. I had to power
them off and start them again. But it could be a problem with the
CentOS 7.2->7.3 transition or
wait a little bit longer for it…
Cheers
AG
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Sandro Bonazzola
Sent: Thursday, February 2, 2017 1:19 PM
To: users <users@ovirt.org>
Subject: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?
Hi,
The hosted-engine storage domain is mounted for sure,
but the issue is here:
Exception: Failed to start monitoring domain
(sd_uuid=7c8deaa8-be02-4aaf-b9b4-ddc8da99ad96, host_id=3): timeout during
domain acquisition
The point is that in VDSM logs I see just something like:
2017-02-02 21:05:22,283
Hello,
I also put the host in Maintenance and restarted vdsm while ovirt-ha-agent
is running. I can mount the gluster volume "engine" manually on the host.
I get this repeatedly in /var/log/vdsm.log:
2017-02-03 15:29:28,891 INFO (MainThread) [vds] Exiting (vdsm:167)
2017-02-03 15:29:30,974 INFO
I see there an ERROR on stopMonitoringDomain but I cannot see the
corresponding startMonitoringDomain; could you please look for it?
On Fri, Feb 3, 2017 at 1:16 PM, Ralf Schenk wrote:
> Hello,
>
> attached is my vdsm.log from the host with hosted-engine-ha around the
>
Hello,
attached is my vdsm.log from the host with hosted-engine-ha around the
time-frame of the agent timeout; it is not working anymore for the engine (it
works in oVirt and is active). It simply isn't working for engine-ha
anymore after the update.
At 2017-02-02 19:25:34,248 you'll find an error
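For anyone hitting the same thing, a minimal sketch of what is worth collecting alongside vdsm.log. These are standard ovirt-hosted-engine-ha commands and default paths; adapt to your install:

```shell
# HA state as each agent currently sees it (score, maintenance flags)
hosted-engine --vm-status

# Are the agent and broker services actually up?
systemctl status ovirt-ha-agent ovirt-ha-broker

# Agent-side view of the failure window
# (path is the ovirt-hosted-engine-ha default)
tail -n 200 /var/log/ovirt-hosted-engine-ha/agent.log
```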
- Original Message -
> From: "Ralf Schenk" <r...@databay.de>
> To: "Ramesh Nachimuthu" <rnach...@redhat.com>
> Cc: users@ovirt.org
> Sent: Friday, February 3, 2017 4:19:02 PM
> Subject: Re: [ovirt-users] [Call for feedback]
Hello,
in reality my cluster is a hyper-converged cluster. But how do I tell
oVirt Engine this? Of course I activated the "Gluster" checkbox
(already some versions ago, around 4.0.x), but that didn't change anything.
Bye
On 03.02.2017 at 11:18, Ramesh Nachimuthu wrote:
>> 2. I'm missing any
On Fri, Feb 3, 2017 at 11:17 AM, Sandro Bonazzola
wrote:
>
>
> On Fri, Feb 3, 2017 at 10:54 AM, Ralf Schenk wrote:
>
>> Hello,
>>
>> I upgraded my cluster of 8 hosts with gluster storage and
>> hosted-engine-ha. They were already Centos 7.3 and using Ovirt
- Original Message -
> From: "Ralf Schenk" <r...@databay.de>
> To: users@ovirt.org
> Sent: Friday, February 3, 2017 3:24:55 PM
> Subject: Re: [ovirt-users] [Call for feedback] did you install/update to
> 4.1.0?
>
>
>
> Hello,
>
On Fri, Feb 3, 2017 at 10:54 AM, Ralf Schenk wrote:
> Hello,
>
> I upgraded my cluster of 8 hosts with gluster storage and
> hosted-engine-ha. They were already CentOS 7.3 and using oVirt 4.0.6 and
> gluster 3.7.x packages from storage-sig testing.
>
> I'm missing the storage
Hello,
I upgraded my cluster of 8 hosts with gluster storage and
hosted-engine-ha. They were already CentOS 7.3 and using oVirt 4.0.6 and
gluster 3.7.x packages from storage-sig testing.
I'm missing the storage listed under the storage tab, but this is already
tracked in a filed bug. Increasing Cluster and
Title: Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?
On Thu, Feb 2, 2017 at 9:59 PM, <ser...@msm.ru> wrote:
Updated from 4.0.6
Docs are quite incomplete; installing ovirt-release41 on CentOS HV and ovirt-nodes manually is not mentioned, yo
On Fri, Feb 3, 2017 at 9:14 AM, Yura Poltoratskiy
wrote:
> I've done an upgrade of ovirt-engine yesterday. There were two problems.
>
> The first - packages from the epel repo, solved by disabling the repo and downgrading
> the package to an existing version in the ovirt-release40 repo (yes,
I've done an upgrade of ovirt-engine yesterday. There were two problems.
The first - packages from the epel repo, solved by disabling the repo and
downgrading the package to an existing version in the ovirt-release40 repo
(yes, there is info in the documentation about the epel repo).
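Roughly what that workaround looks like as commands. This is a sketch: `somepackage` stands in for whichever package epel actually pulled in (the report doesn't name it), and yum-config-manager comes from yum-utils:

```shell
# Disable epel so its newer builds stop shadowing the oVirt ones
yum-config-manager --disable epel

# Roll the affected package back to the version shipped via ovirt-release40
yum downgrade somepackage

# Alternative: keep epel enabled but exclude the overlapping packages
# echo "exclude=somepackage*" >> /etc/yum.repos.d/epel.repo
```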
The second (and it is not only for
On Thu, Feb 2, 2017 at 9:59 PM, wrote:
>
>
> Updated from 4.0.6
> Docs are quite incomplete; installing
> ovirt-release41 on CentOS HV and ovirt-nodes manually is not mentioned, you need to guess.
> Also links in release notes are broken ( https://www.ovirt.org/release/
>
On Fri, Feb 3, 2017 at 5:51 AM, Shalabh Goel
wrote:
> HI,
>
> I am having the following issue on two of my nodes after upgrading. The
> oVirt engine says that it is not able to find the ovirtmgmt network on the
> nodes and hence the nodes are set to non-operational. More
On Fri, Feb 3, 2017 at 7:02 AM, Lars Seipel wrote:
> On Thu, Feb 02, 2017 at 01:19:29PM +0100, Sandro Bonazzola wrote:
> > did you install/update to 4.1.0? Let us know your experience!
> > We end up knowing only when things don't work well, let us know it
> works
> >
On Thu, Feb 02, 2017 at 01:19:29PM +0100, Sandro Bonazzola wrote:
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well, let us know it works
> fine for you :-)
Will do that in a week or so. What's the preferred way to upgrade to
HI,
I am having the following issue on two of my nodes after upgrading. The
oVirt engine says that it is not able to find the ovirtmgmt network on the
nodes and hence the nodes are set to non-operational. More details are in
the following message.
Thanks
Shalabh Goel
>
Updated from 4.0.6
Docs are quite incomplete; installing ovirt-release41 on CentOS HV and ovirt-nodes manually is not mentioned, you need to guess.
Also links in release notes are broken ( https
On Thu, Feb 2, 2017 at 4:23 PM, Краснобаев Михаил wrote:
> Hi,
>
> upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1 release
> and CentOS 7.3 (from 4.0.6).
>
> Did the following:
>
> 1. Upgraded engine machine to Centos 7.3
> 2. Upgraded engine packages and ran
Hi,

> Why did you have to restart VMs for the migration to work? Is it mandatory for an upgrade?

I had to restart the VMs (even shutdown and start) for the raised compatibility level to kick in. Migration works even if you don't restart VMs.

> Is it mandatory for an upgrade?

No. But at some point
Hello
Thanks for sharing your procedures.
Why did you have to restart VMs for the migration to work? Is it
mandatory for an upgrade?
Fernando
On 02/02/2017 12:23, Краснобаев Михаил wrote:
Hi,
upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1
release and CentOS 7.3
Hi,

upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1 release and CentOS 7.3 (from 4.0.6). Did the following:

1. Upgraded engine machine to CentOS 7.3
2. Upgraded engine packages and ran "engine-setup"
3. Upgraded one by one hosts to 7.3 + packages from the new 4.1 repo and
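In command form, the sequence above looks roughly like this. It is a sketch, not the official procedure: the release-rpm URL follows the usual resources.ovirt.org layout, so double-check it against the 4.1 release notes:

```shell
# 1. On the engine machine: OS update to CentOS 7.3
yum update && reboot

# 2. Engine packages + setup
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum update "ovirt-*-setup*"
engine-setup

# 3. Per host: set maintenance mode in the UI first, then on the host:
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum update && reboot
# ...then Activate the host in the UI and move on to the next one
```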
Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well, let us know it works
fine for you :-)
If you're not planning an update to 4.1.0 in the near future, let us know
why.
Maybe we can help.
Thanks!
--
Sandro Bonazzola
Better