[ovirt-users] Unable to start vdsm, upgrade 4.0 to 4.1

2019-04-12 Thread Todd Barton
Looking for some help/suggestions to correct an issue I'm having.  I have a 3-host HA setup running a hosted engine and Gluster storage.  The hosts are identical hardware configurations and have been running very solidly for several years.  I was performing an upgrade to 4.1.  1st host when

[ovirt-users] Tuning Gluster Writes

2019-04-12 Thread Alex McWhirter
I have 8 machines acting as Gluster servers. They each have 12 drives RAID 50'd together (3 sets of 4 drives RAID 5'd, then 0'd together as one). They connect to the compute hosts and to each other over LACP'd 10GbE connections split across two Cisco Nexus switches with vPC. Gluster has the
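For setups like this one, write throughput is usually adjusted through volume options rather than the RAID or network layer. A hedged sketch of the common knobs (the volume name `data` and the values are illustrative assumptions, not recommendations from the thread):

```shell
# Assumed volume name "data"; values are illustrative starting points.
# write-behind batches small writes before flushing them to the bricks
gluster volume set data performance.write-behind on
gluster volume set data performance.write-behind-window-size 8MB
# more event threads can help saturate LACP'd 10GbE links
gluster volume set data server.event-threads 4
gluster volume set data client.event-threads 4
```

Each option can be inspected afterwards with `gluster volume get data all`.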

[ovirt-users] Re: Global maintenance and fencing of hosts

2019-04-12 Thread Alex K
On Fri, Apr 12, 2019, 12:53 Andreas Elvers < andreas.elvers+ovirtfo...@solutions.work> wrote: > I am wondering whether global maintenance inhibits fencing of > non-responsive hosts. Is this so? > > Background: I plan on migrating the engine from one cluster to another. I > understand this means

[ovirt-users] Re: HostedEngine cleaned up

2019-04-12 Thread Alex K
On Fri, Apr 12, 2019, 12:16 wrote: > Adding to what my colleague and I shared > > I am able to locate the disk images of the VMs. I copied some of them and > tried to boot them from another standalone KVM host, however booting the > disk images wasn't successful as it landed in rescue mode.

[ovirt-users] oVirt 4.3.2.1-1.el7 Errors at VM boot

2019-04-12 Thread Wood Peter
Hi all, A few weeks ago I did a clean install of the latest oVirt 4.3.2 and imported some VMs from oVirt 3. Three nodes running oVirt Node, and oVirt Engine installed on a separate system. I noticed that sometimes some VMs will boot successfully but the Web UI will still show "Powering UP" for

[ovirt-users] Re: Migrate self-hosted engine between cluster

2019-04-12 Thread Andreas Elvers
This one is probably saving my weekend. Thanks a lot for your great work.

[ovirt-users] Re: oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-12 Thread Dominik Holler
On Fri, 12 Apr 2019 12:31:15 - "Dee Slaw" wrote: > Hello, I've installed oVirt 4.3.2 and the problem is that it logs messages: > > VDSM ovirt-04 command Get Host Statistics failed: Internal JSON-RPC error: > {'reason': '[Errno 19] genev_sys_6081 is not present in the system'} in > Open

[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-12 Thread Nir Soffer
On Fri, Apr 12, 2019, 12:07 Ladislav Humenik wrote: > Hello, we have recently updated a few oVirt setups from 4.2.5 to 4.2.8 > (actually 9 oVirt engine nodes), where live storage migration > stopped working and leaves an auto-generated snapshot behind. > > If we power the guest VM down, the

[ovirt-users] Re: Migrate self-hosted engine between cluster

2019-04-12 Thread Simone Tiraboschi
On Fri, Apr 12, 2019 at 11:47 AM Andreas Elvers < andreas.elvers+ovirtfo...@solutions.work> wrote: > I am in the process of migrating the engine to a new cluster. I hope I > will accomplish it this weekend. Fingers crossed. > > What you need to know: > > The migration is really a backup and

[ovirt-users] Re: HostedEngine cleaned up

2019-04-12 Thread Simone Tiraboschi
On Fri, Apr 12, 2019 at 11:16 AM wrote: > Adding to what my colleague and I shared > > I am able to locate the disk images of the VMs. I copied some of them and > tried to boot them from another standalone KVM host, however booting the > disk images wasn't successful as it landed in a rescue

[ovirt-users] Re: Cannot allocate and run VM from VM-Pool. There are no available VMs in the VM-Pool

2019-04-12 Thread nicolas
Are the VMs from the pool 'up'? If so, no assignment can be done unless they are powered off. On 2019-04-12 14:31, Florian Rädler wrote: I am getting the following error after a pool was generated and migrated to another host. START_POOL failed [Cannot allocate and run VM from

[ovirt-users] Cannot allocate and run VM from VM-Pool. There are no available VMs in the VM-Pool

2019-04-12 Thread Florian Rädler
I am getting the following error after a pool was generated and migrated to another host. START_POOL failed [Cannot allocate and run VM from VM-Pool. There are no available VMs in the VM-Pool.] No user is connected to any of the running VMs. What can I do to solve this problem?

[ovirt-users] [ANN] oVirt 4.3.3 Fourth Release Candidate is now available

2019-04-12 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt 4.3.3 Fourth Release Candidate, as of April 12th, 2019. This update is a release candidate of the third in a series of stabilization updates to the 4.3 series. This is pre-release software. This pre-release should not be

[ovirt-users] oVirt 4.3.2 Error: genev_sys_6081 is not present in the system

2019-04-12 Thread Dee Slaw
Hello, I've installed oVirt 4.3.2 and the problem is that it logs messages: VDSM ovirt-04 command Get Host Statistics failed: Internal JSON-RPC error: {'reason': '[Errno 19] genev_sys_6081 is not present in the system'} in Open Virtualization Manager. It also keeps on logging in
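`genev_sys_6081` is the Geneve tunnel device that OVN creates for its overlay network, and the `[Errno 19]` (ENODEV) above means VDSM's statistics lookup could not find it on the host. One quick way to check whether the kernel actually exposes the device is to look under `/sys/class/net`; a small sketch:

```python
import os

def interface_present(name: str) -> bool:
    """Return True if the kernel exposes a network interface with this name."""
    return os.path.isdir(os.path.join("/sys/class/net", name))

# The VDSM error above corresponds to this lookup failing on the host:
if not interface_present("genev_sys_6081"):
    print("genev_sys_6081 is not present in the system")
```

If the device is missing, the usual suspects on an oVirt host are the openvswitch/ovn services not running rather than VDSM itself.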

[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
I hope this is the last update on the issue -> opened a bug https://bugzilla.redhat.com/show_bug.cgi?id=1699309 Best regards, Strahil Nikolov On Friday, April 12, 2019, 7:32:41 AM GMT-4, Strahil Nikolov wrote: Hi All, I have tested gluster snapshot without systemd.automount

[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
Hi All, I have tested gluster snapshot without systemd.automount units and it works as follows: [root@ovirt1 system]# gluster snapshot create isos-snap-2019-04-11 isos description TEST snapshot create: success: Snap isos-snap-2019-04-11_GMT-2019.04.12-11.18.24 created successfully

[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-12 Thread Benny Zlotnik
2019-04-12 10:39:25,643+0200 ERROR (jsonrpc/0) [virt.vm] (vmId='71f27df0-f54f-4a2e-a51c-e61aa26b370d') Unable to start replication for vda to {'domainID': '244dfdfb-2662-4103-9d39-2b13153f2047', 'volumeInfo': {'path':

[ovirt-users] Global maintenance and fencing of hosts

2019-04-12 Thread Andreas Elvers
I am wondering whether global maintenance inhibits fencing of non-responsive hosts. Is this so? Background: I plan on migrating the engine from one cluster to another. I understand this means to backup/restore the engine. While migrating the engine it is shut down and all VMs will continue
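Global maintenance itself is toggled from any hosted-engine host; a sketch of the commands involved (these only affect how the HA agents treat the engine VM, which is the crux of the fencing question above):

```shell
# Stop the HA agents from monitoring/restarting the engine VM
hosted-engine --set-maintenance --mode=global
# Check the cluster-wide hosted-engine state
hosted-engine --vm-status
# Leave global maintenance when the migration is done
hosted-engine --set-maintenance --mode=none
```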

[ovirt-users] Re: Migrate self-hosted engine between cluster

2019-04-12 Thread Andreas Elvers
I am in the process of migrating the engine to a new cluster. I hope I will accomplish it this weekend. Fingers crossed. What you need to know: The migration is really a backup and restore process. 1. You create a backup of the engine. 2. Place the cluster into global maintenance and shutdown
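The backup-and-restore steps above are typically driven by oVirt's `engine-backup` tool, with the restore fed into a fresh hosted-engine deployment on the target cluster. A minimal sketch, with assumed file paths:

```shell
# 1. On the engine VM: take a backup (paths are illustrative)
engine-backup --mode=backup --file=/root/engine.backup --log=/root/engine-backup.log

# 2. On a host in the target cluster: redeploy the hosted engine from that backup
hosted-engine --deploy --restore-from-file=/root/engine.backup
```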

[ovirt-users] Re: spam

2019-04-12 Thread Sandro Bonazzola
Il giorno mar 9 apr 2019 alle ore 13:13 Jorick Astrego ha scritto: > We get a lot of spam lately, anything that can be done about this? > > I see the list is powered by Mailman > > > https://wikitech.wikimedia.org/wiki/Lists.wikimedia.org#Fighting_spam_in_mailman > Opening a ticket to infra

[ovirt-users] Re: HostedEngine cleaned up

2019-04-12 Thread tau
Adding to what me and my colleague shared I am able to locate the disk images of the VMs, I copied some of them and tried to boot them from another standalone kvm host, however booting the disk images wasn't succesful as it landed on a rescue mode. The strange part is that the VM disk images

[ovirt-users] Live storage migration is failing in 4.2.8

2019-04-12 Thread Ladislav Humenik
Hello, we have recently updated few ovirts from 4.2.5 to 4.2.8 version (actually 9 ovirt engine nodes), where the live storage migration stopped to work, and leave auto-generated snapshot behind. If we power the guest VM down, the migration works as expected. Is there a known bug for this?

[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
Hello All, it seems that "systemd-1" is from the automount unit, and not from the systemd unit. [root@ovirt1 system]# systemctl cat gluster_bricks-isos.automount # /etc/systemd/system/gluster_bricks-isos.automount [Unit] Description=automount for gluster brick ISOS [Automount]
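For reference, a sketch of what such an automount unit looks like, reconstructed from the snippet (the `Where=` path is an assumption based on the brick name). Until the real filesystem is mounted, the kernel reports the placeholder device `systemd-1` for the path, which is what Gluster's snapshot code then trips over:

```ini
# /etc/systemd/system/gluster_bricks-isos.automount (sketch; needs a matching
# gluster_bricks-isos.mount unit describing the actual LV and filesystem)
[Unit]
Description=automount for gluster brick ISOS

[Automount]
Where=/gluster_bricks/isos

[Install]
WantedBy=multi-user.target
```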

[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
Hello All, I have tried to enable debug and see the reason for the issue. Here is the relevant glusterd.log: [2019-04-12 07:56:54.526508] E [MSGID: 106077] [glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get pool name for device systemd-1 [2019-04-12 07:56:54.527509]
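The failing `glusterd_is_thinp_brick` check looks up the thin pool behind the brick's backing device; when an automount placeholder answers for the path, the device resolves to "systemd-1" and the pool lookup fails. A quick way to inspect both sides (brick path assumed from the earlier messages):

```shell
# Which device actually backs the brick mount? "systemd-1" here reproduces the bug.
findmnt -n -o SOURCE /gluster_bricks/isos
# Gluster snapshots require the brick LV to live in a thin pool
# (the pool_lv column must be non-empty for the brick's LV)
lvs -o lv_name,vg_name,pool_lv
```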