Adding to what my colleague and I shared:
I am able to locate the disk images of the VMs. I copied some of them and
tried to boot them from another standalone KVM host, but booting the disk
images wasn't successful, as it landed in rescue mode. The strange part is
that the VM disk images
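For anyone trying to reproduce this, a minimal sketch of booting a copied
image directly with QEMU for inspection (the image path and format below are
assumptions; check them first, since booting with the wrong format can also
land a guest in rescue mode):

# Check the actual image format before booting (hypothetical path)
qemu-img info /tmp/copied-disk.img

# Boot the copy on a standalone KVM host; match -drive format= to the
# qemu-img output, and use a throwaway copy, never the original image
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/tmp/copied-disk.img,format=qcow2 \
    -vnc :0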
I am wondering whether global maintenance inhibits fencing of non-responsive
hosts. Is this so?
Background: I plan on migrating the engine from one cluster to another. I
understand this means backing up and restoring the engine. While the engine
is being migrated it is shut down, and all VMs will continue
On Tue, Apr 9, 2019 at 1:13 PM Jorick Astrego wrote:
> We get a lot of spam lately, anything that can be done about this?
>
> I see the list is powered by Mailman
>
>
> https://wikitech.wikimedia.org/wiki/Lists.wikimedia.org#Fighting_spam_in_mailman
>
Opening a ticket with the infra team
Hello All,
it seems that "systemd-1" comes from the automount unit, and not from the
systemd mount unit.
[root@ovirt1 system]# systemctl cat gluster_bricks-isos.automount
# /etc/systemd/system/gluster_bricks-isos.automount
[Unit]
Description=automount for gluster brick ISOS
[Automount]
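For context, a quick way to see which device actually backs the mount point
(the /gluster_bricks/isos path is an assumption derived from the unit name):

# Before the automount triggers, the brick path shows up as
# "systemd-1" (autofs); accessing the path triggers the real mount
ls /gluster_bricks/isos

# findmnt should then report the real backing device instead of systemd-1
findmnt --target /gluster_bricks/isos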
Hello, we have recently updated a few oVirt setups from 4.2.5 to 4.2.8
(9 oVirt engine nodes, actually), where live storage migration
stopped working and leaves the auto-generated snapshot behind.
If we power the guest VM down, the migration works as expected. Is there
a known bug for this?
Hello All,
I have enabled debug logging to find the reason for the issue. Here is the
relevant part of glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077]
[glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get
pool name for device systemd-1
[2019-04-12 07:56:54.527509]
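From the message above, glusterd is trying to resolve the brick's block
device to an LVM thin pool and gets "systemd-1" (the autofs placeholder)
instead of the logical volume. A rough way to check what it should be
seeing (device and path names will differ per setup):

# The brick LV must report a pool_lv for gluster snapshots to work
lvs -o lv_name,vg_name,pool_lv,lv_attr

# Confirm which device the brick path actually resolves to
df /gluster_bricks/isos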
I am in the process of migrating the engine to a new cluster. I hope I will
accomplish it this weekend. Fingers crossed.
What you need to know:
The migration is really a backup and restore process.
1. You create a backup of the engine.
2. Place the cluster into global maintenance and shutdown
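In case it helps, the first two steps map roughly onto these commands
(file names are placeholders, and the hosted-engine commands assume a
self-hosted engine deployment):

# 1. Back up the engine database and configuration
engine-backup --mode=backup --file=engine-backup.tar.gz \
    --log=engine-backup.log

# 2. Enter global maintenance so the HA agents stop supervising the
#    engine VM, then shut the engine down cleanly
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown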
2019-04-12 10:39:25,643+0200 ERROR (jsonrpc/0) [virt.vm]
(vmId='71f27df0-f54f-4a2e-a51c-e61aa26b370d') Unable to start
replication for vda to {'domainID':
'244dfdfb-2662-4103-9d39-2b13153f2047', 'volumeInfo': {'path':
On Fri, 12 Apr 2019 12:31:15 -
"Dee Slaw" wrote:
> Hello, I've installed oVirt 4.3.2 and the problem is that it logs messages:
>
> VDSM ovirt-04 command Get Host Statistics failed: Internal JSON-RPC error:
> {'reason': '[Errno 19] genev_sys_6081 is not present in the system'} in
> Open
The oVirt Project is pleased to announce the availability of the oVirt
4.3.3 Fourth Release Candidate, as of April 12th, 2019.
This update is a release candidate of the third in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be
This one is probably saving my weekend. Thanks a lot for your great work.
Hi all,
A few weeks ago I did a clean install of the latest oVirt-4.3.2 and
imported some VMs from oVirt-3. Three nodes running oVirt Node and oVirt
Engine installed on a separate system.
I noticed that sometimes some VMs will boot successfully but the Web UI
will still show "Powering Up" for
I hope this is the last update on the issue -> opened a bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1699309
Best regards,
Strahil Nikolov
On Friday, April 12, 2019, 7:32:41 AM GMT-4, Strahil Nikolov
wrote:
Hi All,
I have tested gluster snapshot without systemd.automount
On Fri, Apr 12, 2019 at 11:16 AM wrote:
> Adding to what my colleague and I shared:
>
> I am able to locate the disk images of the VMs. I copied some of them and
> tried to boot them from another standalone KVM host, but booting the
> disk images wasn't successful, as it landed in a rescue
On Fri, Apr 12, 2019, 12:07 Ladislav Humenik
wrote:
> Hello, we have recently updated a few oVirt setups from 4.2.5 to 4.2.8
> (9 oVirt engine nodes, actually), where live storage migration
> stopped working and leaves the auto-generated snapshot behind.
>
> If we power the guest VM down, the
Are the VMs from the pool 'up'? If so, no assignment can be done unless
they are powered off.
On 2019-04-12 14:31, Florian Rädler wrote:
I am getting the following error after a pool was created and
migrated to another host.
START_POOL failed [Cannot allocate and run VM from
Hi All,
I have tested gluster snapshot without systemd.automount units and it works as
follows:
[root@ovirt1 system]# gluster snapshot create isos-snap-2019-04-11 isos
description TEST
snapshot create: success: Snap isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
created successfully
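A few follow-up commands for working with the snapshot, in case they are
useful (the snapshot name is taken from the output above):

# List all snapshots and show details for the new one
gluster snapshot list
gluster snapshot info isos-snap-2019-04-11_GMT-2019.04.12-11.18.24

# Activate the snapshot so it can be mounted or restored later
gluster snapshot activate isos-snap-2019-04-11_GMT-2019.04.12-11.18.24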
I am getting the following error after a pool was created and migrated to
another host.
START_POOL failed [Cannot allocate and run VM from VM-Pool. There are
no available VMs in the VM-Pool.]
No user is connected to any of the running VMs. What can I do to solve this
problem?
On Fri, Apr 12, 2019 at 11:47 AM Andreas Elvers <
andreas.elvers+ovirtfo...@solutions.work> wrote:
> I am in the process of migrating the engine to a new cluster. I hope I
> will accomplish it this weekend. Fingers crossed.
>
> What you need to know:
>
> The migration is really a backup and
Hello, I've installed oVirt 4.3.2 and the problem is that it logs messages:
VDSM ovirt-04 command Get Host Statistics failed: Internal JSON-RPC error:
{'reason': '[Errno 19] genev_sys_6081 is not present in the system'} in
Open Virtualization Manager.
It also keeps on logging in
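A couple of hedged checks for the missing genev_sys_6081 interface, which
is normally created by the OVN Geneve tunnel (this assumes the Open vSwitch
tools are installed on the host):

# See whether the Geneve tunnel interface exists at all
ip -d link show genev_sys_6081

# List the OVS bridges and tunnel ports that should create it
ovs-vsctl show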
Looking for some help/suggestions to correct an issue I'm having. I have a 3
host HA setup running a hosted-engine and gluster storage. The hosts are
identical hardware configurations and have been running for several years very
solidly. I was performing an upgrade to 4.1 on the 1st host when
On Fri, Apr 12, 2019, 12:16 wrote:
> Adding to what my colleague and I shared:
>
> I am able to locate the disk images of the VMs. I copied some of them and
> tried to boot them from another standalone KVM host, but booting the
> disk images wasn't successful, as it landed in rescue mode.
I have 8 machines acting as gluster servers. They each have 12 drives
raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
one).
They connect to the compute hosts and to each other over LACP'd 10 GbE
connections split across two Cisco Nexus switches with vPC.
Gluster has the
On Fri, Apr 12, 2019, 12:53 Andreas Elvers <
andreas.elvers+ovirtfo...@solutions.work> wrote:
> I am wondering whether global maintenance inhibits fencing of
> non-responsive hosts. Is this so?
>
> Background: I plan on migrating the engine from one cluster to another. I
> understand this means