[ovirt-users] strength of oVirt compared to others???

2020-01-06 Thread yam yam
Hi everyone!

I want to know the clear strengths of oVirt compared to OpenStack and KubeVirt (the VM 
add-on for k8s).

Compared to OpenStack, I heard oVirt specializes in long-lived traditional 
apps that need robust and resilient infrastructure, while OpenStack is a cloud solution.
So a backend is suited to oVirt, and a frontend like a web server is suited to 
OpenStack.
But I'm wondering why those traditional apps are not suited to OpenStack, 
because I think OpenStack covers the functions (live migration, scale-up, 
snapshots, ...) that oVirt provides.
I want to know which features in oVirt make the difference.

Compared to KubeVirt, it seems harder to find a strength, since both are 
designed to run legacy apps.
In a mixed environment it seems like KubeVirt actually has more benefits :'(


[ovirt-users] Re: Question about HCI gluster_inventory.yml

2020-01-06 Thread Gobinda Das
Hi John,
 I am seeing this error:
TASK [ovirt.hosted_engine_setup : Get local VM dir path]
*
task path:
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml:57
fatal: [localhost]: FAILED! => {
"msg": "The task includes an option with an undefined variable. The
error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute
'he_local_vm_dir'\n\nThe error appears to be in
'/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml':
line 57, column 7, but may\nbe elsewhere in the file depending on the exact
syntax problem.\n\nThe offending line appears to be:\n\n  delegate_to:
\"{{ he_ansible_host_name }}\"\n- name: Get local VM dir path\n  ^
here\n"
}
Adding @Simone Tiraboschi 
Simone, do we need to explicitly specify he_local_vm_dir?
In gluster_inventory.yml you can't list the other hosts and storage domain (SD) information for
auto-add. For that you need to create a separate file at:
/usr/share/ovirt-hosted-engine-setup/gdeploy-inventory.yml
and add entries there like:

gluster:
 hosts:
  host2:
  host3:
 vars:
  storage_domains:
   - {"name": "data", "host": "host1", "address": "host1", "path": "/data", "mount_options": "backup-volfile-servers=host2:host3"}
   - {"name": "vmstore", "host": "host1", "address": "host1", "path": "/vmstore", "mount_options": "backup-volfile-servers=host2:host3"}

Then the hook will automatically read this file and act accordingly.

On Tue, Jan 7, 2020 at 11:53 AM John Call  wrote:

> Hi Gobinda, ovirt-users,
>
> I think I'm close, but I'm running into a timeout/error when the
> ovirt.hosted-engine-setup tasks try to connect to the new
> "HostedEngineLocal" VM.  It seems to complain about host_key_checking.  I
> disabled that via ansible.cfg, but it fails on the next task (see short
> logs.)  Do you see anything obviously wrong in my inventory and json
> files?  I've attached the verbose playbook logs when I run this from the
> first HCI host...
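>
> For reference, disabling host key checking in ansible.cfg is just the
> standard Ansible setting (a minimal sketch of the kind of change I mean):
>
>   [defaults]
>   host_key_checking = False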
>
> [root@rhhi1 hc-ansible-deployment]# pwd
> /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
>
> [root@rhhi1 hc-ansible-deployment]# ansible-playbook -b -vvv -i
> gluster_inventory.yml hc_deployment.yml
> --extra-vars='@he_gluster_vars.json' | tee playbook.logs
>
>
> Thank you,
> John Call
> Red Hat - Storage Domain Architect
> jc...@redhat.com
> (714) 267-8802
>
>
> On Wed, Dec 11, 2019 at 11:09 PM Gobinda Das  wrote:
>
>> Hi John,
>>   You need to specify the storage FQDN (which should be mapped to the storage
>> network) and the ovirtmgmt FQDN (which should be mapped to the frontend network), like
>> this:
>> hc_nodes:
>>   hosts:
>> host1-STORAGE-fqdn:
>> host2-STORAGE-fqdn:
>> host3-STORAGE-fqdn:
>>   vars:
>> cluster_nodes:
>>- host1-STORAGE-fqdn
>>- host2-STORAGE-fqdn
>>- host3-STORAGE-fqdn
>> gluster_features_hci_cluster: "{{ cluster_nodes }}"
>>
>> gluster:
>>   host2-ovirtmgmt-fqdn:
>>   host3-ovirtmgmt-fqdn:
>>   storage_domains:
>> [{"name":"data","host":"host1-STORAGE-fqdn","address":"host1-STORAGE-fqdn","path":"/data","mount_options":"backup-volfile-servers=host2-STORAGE-fqdn:host3-STORAGE-fqdn"},{"name":"vmstore","host":"host1-STORAGE-fqdn","address":"host1-STORAGE-fqdn","path":"/vmstore","mount_options":"backup-volfile-servers=host2-STORAGE-fqdn:host3-STORAGE-fqdn"}]
>>
>>
>> On Thu, Dec 12, 2019 at 2:47 AM John Call  wrote:
>>
>>> Hi ovirt-users,
>>>
>>> I'm trying to automate my HCI deployment, but can't figure out how to
>>> specify multiple network interfaces in gluster_inventory.yml.  My servers
>>> have two NICs, one for ovirtmgmt (and everything else), and the other is
>>> just for Gluster.  How should I populate the inventory/vars file?  Is this
>>> correct?
>>>
>>> [root@rhhi1 hc-ansible-deployment]# pwd
>>> /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
>>>
>>> [root@rhhi1 hc-ansible-deployment]# cat gluster_inventory.yml
>>> --lots of stuff omitted--
>>> hc_nodes:
>>>   hosts:
>>> host1-STORAGE-fqdn:
>>> host2-STORAGE-fqdn:
>>> host3-STORAGE-fqdn:
>>>   vars:
>>> cluster_nodes:
>>>- host1-ovirtmgmt-fqdn
>>>- host2-ovirtmgmt-fqdn
>>>- host3-ovirtmgmt-fqdn
>>> gluster_features_hci_cluster: "{{ cluster_nodes }}"
>>>
>>> gluster:
>>>   host2-STORAGE-fqdn:
>>>   host3-STORAGE-fqdn:
>>>   storage_domains:
>>> [{"name":"data","host":"host1-STORAGE-fqdn","address":"host1-STORAGE-fqdn","path":"/data","mount_options":"backup-volfile-servers=host2-STORAGE-fqdn:host3-STORAGE-fqdn"},{"name":"vmstore","host":"host1-STORAGE-fqdn","address":"host1-STORAGE-fqdn","path":"/vmstore","mount_options":"backup-volfile-servers=host2-STORAGE-fqdn:host3-STORAGE-fqdn"}]

[ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding the Ovirt Engine System

2020-01-06 Thread Yedidyah Bar David
On Mon, Jan 6, 2020 at 6:01 PM Bob Franzke  wrote:
>
> I just had some VMs go offline over the weekend. I really cannot figure out 
> how to tell why without the engine working.

If you suspect that they failed, as opposed to being shut down from
inside them (by an admin or whatever), then you can check vdsm logs on
the host they ran on, and might find a clue.
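
For example (the VM name below is just a placeholder), something like this
on the host that ran the VM might give a hint:

  grep -i 'my-vm-name' /var/log/vdsm/vdsm.log*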

> I don’t need to really 'control' the VMs but seems without the engine its not 
> just the control aspect. It’s the visibility it gives you into the state of 
> your environment. We used Ovirt as also a lab setup for users to access and 
> build VMs as needed. This is completely offline now without a working Engine. 
> Seems like having an engine available all the time would be pretty important 
> generally.
>
> I have never understood the idea of having the machine that controls VMs, 
> being in the same infrastructure its controlling. Seems very 'chicken or the 
> egg' sort of thing to me. If the engine decides to move itself from one host 
> to another, and it fails for some reason because the process of moving itself 
> caused a problem (stopping services, etc.)then not sure what you would end up 
> with there. Seems very iffy to me, but maybe I am reading too much into it. 
> Again I admittedly don’t know enough about Ovirt to know if this thinking is 
> off base or not. My own experience with networking systems means you would 
> never set things up like this. Each system is autonomous and can take over 
> for the other if one part fails. But then again, if Ovirt Engine had been set 
> up this way, maybe I wouldn't be in the position I am now with no working 
> engine. Lots to sort out. Thanks for the help.

Each host participating in the hosted-engine cluster has two small
daemons, called agent and broker, in the package
ovirt-hosted-engine-ha, that should take care of the engine VM.

You are right that this is a chicken-and-egg problem, and this is the
solution that oVirt includes.
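
For example, on a hosted-engine host you can ask those daemons for their
view of the engine VM with:

  hosted-engine --vm-status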

>
> -Original Message-
> From: Yedidyah Bar David (d...@redhat.com) 
> Sent: Monday, January 6, 2020 8:26 AM
> To: Bob Franzke 
> Cc: users 
> Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding 
> the Ovirt Engine System
>
> On Mon, Jan 6, 2020 at 4:19 PM Bob Franzke  wrote:
> >
> > So I am getting the impression that without a working ovirt engine, you are 
> > sort of cooked from being able to control VMs such that your whole 
> > organization can potentially come down to the availability of a single 
> > machine? Is this really correct?
>
> Correct.
>
> This does not mean that the engine itself is necessarily critical - if it's 
> down, your VMs should still be ok. If _controlling_ VMs is considered 
> critical for you, then yes - you do need to make sure your engine is alive 
> and well.
>
> > Are there HA options available for the engine server itself?
>
> The standard option is using hosted-engine with several hosts - you get HA 
> out-of-the-box.
>
> I also heard about people using standalone active/standby clustering/HA 
> solutions for the engine.
>
> >
> > -Original Message-
> > From: Yedidyah Bar David (d...@redhat.com) 
> > Sent: Monday, January 6, 2020 12:57 AM
> > To: Bob Franzke 
> > Cc: users 
> > Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for
> > Rebuilding the Ovirt Engine System
> >
> > On Mon, Jan 6, 2020 at 12:00 AM Bob Franzke  wrote:
> > >
> > > Thanks for the reply here. Still waiting on a server to rebuild this 
> > > with. Should be here tomorrow. The engine was running on bare metal 
> > > server, and was not a VM.
> > >
> > > In the mean time we had a few of the VMs go dark for some reason. I 
> > > discovered the vdsm-client commands and tried figuring out what happened. 
> > > Is there any way I can start a VM via command line on one of the VM 
> > > hosts? Is the vdsm-client command the way to do this without a working 
> > > engine?
> >
> > It is, in principle, but that's not supported and is risky - because the 
> > engine will not know what you do.
> >
> > See also e.g.:
> >
> > https://www.ovirt.org/develop/release-management/features/integration/
> > cockpit.html
> >
> > >
> > > -Original Message-
> > > From: Yedidyah Bar David (d...@redhat.com) 
> > > Sent: Tuesday, December 24, 2019 1:50 AM
> > > To: Bob Franzke 
> > > Cc: users 
> > > Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for
> > > Rebuilding the Ovirt Engine System
> > >
> > > On Mon, Dec 23, 2019 at 7:08 PM Bob Franzke  
> > > wrote:
> > > >
> > > > > Which nightly backups? Do they run engine-backup?
> > > >
> > > > Yes sorry. The backups are the backups created when running the 
> > > > engine-backup script. So I have the files and the DB backed up and off 
> > > > onto different storage. I just grabbed a copy of the entire /etc 
> > > > directory as well just in case there was something needed in there that 
> > > > is not included in the engine-backup solution.
> > > >
> > > > > In 

[ovirt-users] Re: null

2020-01-06 Thread Staniforth, Paul
Hi,
if you have a network in the cluster that is set to "required", any host 
in the cluster will become non-operational if that network isn't available. You can set 
the logical network to not be required while testing, or if it isn't essential.

Regards,
 Paul S.

From: Strahil 
Sent: 27 December 2019 12:18
To: zhouhao 
Cc: users 
Subject: [ovirt-users] Re: null


Caution External Mail: Do not click any links or open any attachments unless 
you trust the sender and know that the content is safe.

I do use the automatic migration policy.

The main question you have to solve is:
1. Why did the nodes become 'Non-operational'? Usually this happens when the 
management interface (in your case the HostedEngine VM) could not reach the nodes 
over the management network.

By default, management goes over the ovirtmgmt network. I guess you 
created the new network, marked that new network as the management network, and then 
the switch was off, causing the 'Non-Operational' state.

2. Migrating VMs is usually a safe approach, but this behavior is quite 
strange. If a node is Non-operational -> there can be no successful 
migration.

3. Some of the VMs got paused due to a storage issue. Are you using GlusterFS, 
NFS or iSCSI? If yes, you need to clarify why you lost your storage.

I guess for now you can mark each VM to be migrated only manually (VM -> Edit) 
and, if they are critical VMs, set High Availability in each VM's Edit 
options.

In such case, if a node fails, the VMs will be restarted on another node.

4. Have you set up node fencing? For example APC, iLO, iDRAC and other fencing 
mechanisms can allow the HostedEngine to use another host as a fencing proxy and to 
reset the problematic hypervisor.

P.S.: You can define the following alias in '~/.bashrc':
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Then you can verify your VMs even when a HostedEngine is down:
'virsh list --all'

Best Regards,
Strahil Nikolov

On Dec 27, 2019 08:40, zhou...@vip.friendtimes.net wrote:

I had a crash yesterday in my ovirt cluster, which is made up of 3 nodes.

I just tried to add a new network, but the whole cluster crashed

I added a new network to my cluster, but while I was debugging the new switch, 
when the switch was powered off, the nodes detected the network card status as down 
and then moved to the Non-Operational state.

At this time all 3 nodes moved to the Non-Operational state.

All virtual machines started automatic migration. When I received the alert 
email, all virtual machines were suspended.

After 15 minutes my new switch was powered up again. The 3 oVirt nodes became active 
again, but many virtual machines became unresponsive or suspended due to the forced 
migration, and only a few virtual machines were brought up again because their 
migration was cancelled.

After I tried to terminate the migration tasks and restart the ovirt-engine 
service, I was still unable to restore most of the virtual machines, so I had 
to restart the 3 oVirt nodes to restore my virtual machines.

I didn't recover all the virtual machines until an hour later.


Then I changed my migration policy to "Do Not Migrate Virtual Machines".

Which migration policy do you recommend?

I'm afraid to use the cluster...

zhou...@vip.friendtimes.net


[ovirt-users] Re: terraform integration

2020-01-06 Thread Roy Golan
The merge window is now open for the master branches of the various origin
components.
Post-merge there should be an OKD release - this is not under my control,
but when it is available I'll let you know.

On Mon, 6 Jan 2020 at 20:54, Nathanaël Blanchet  wrote:

> Hello Roy
> On 21/11/2019 at 13:57, Roy Golan wrote:
>
>
>
> On Thu, 21 Nov 2019 at 08:48, Roy Golan  wrote:
>
>>
>>
>> On Wed, 20 Nov 2019 at 09:49, Nathanaël Blanchet 
>> wrote:
>>
>>>
>>> On 19/11/2019 at 19:23, Nathanaël Blanchet wrote:
>>>
>>>
>>> On 19/11/2019 at 13:43, Roy Golan wrote:
>>>
>>>
>>>
>>> On Tue, 19 Nov 2019 at 14:34, Nathanaël Blanchet 
>>> wrote:
>>>
 On 19/11/2019 at 08:55, Roy Golan wrote:

 oc get -o json clusterversion

 This is the output of the previous failed deployment, I'll give a try
 to a newer one when I'll have a minute to test

>>> Without changing anything in the template, I gave it a new try and... nothing
>>> works anymore now; none of the provided IPs can be pinged: "dial tcp
>>> 10.34.212.51:6443: connect: no route to host", so none of the masters can
>>> be provisioned by the bootstrap.
>>>
>>> I tried with the latest RHCOS and the latest oVirt 4.3.7, it is the same.
>>> Obviously something changed since my first attempt 12 days ago... is your
>>> docker image for openshift-installer up to date?
>>>
>>> Are you still able, on your side, to deploy a valid cluster?
>>>
>>> I investigated by looking at the bootstrap logs (attached) and it seems that
>>> every container dies immediately after being started.
>>>
>>> Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.60107571
>>> + UTC m=+0.794838407 container init
>>> 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=
>>> registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
>>> Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.623197173
>>> + UTC m=+0.816959853 container start
>>> 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=
>>> registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
>>> Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.623814258
>>> + UTC m=+0.817576965 container attach
>>> 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=
>>> registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
>>> Nov 20 07:02:34 localhost systemd[1]:
>>> libpod-446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603.scope:
>>> Consumed 814ms CPU time
>>> Nov 20 07:02:34 localhost podman[2024]: 2019-11-20 07:02:34.100569998
>>> + UTC m=+1.294332779 container died
>>> 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=
>>> registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
>>> Nov 20 07:02:35 localhost podman[2024]: 2019-11-20 07:02:35.138523102
>>> + UTC m=+2.332285844 container remove
>>> 446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603 (image=
>>> registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
>>>
>>> and this:
>>>
>>> Nov 20 07:04:16 localhost hyperkube[1909]: E1120 07:04:16.489527 1909
>>> remote_runtime.go:200] CreateContainer in sandbox
>>> "58f2062aa7b6a5b2bdd6b9cf7b41a9f94ca2b30ad5a20e4fa4dec8a9b82f05e5" from
>>> runtime service failed: rpc error: code = Unknown desc = container create
>>> failed: container_linux.go:345: starting container process caused "exec:
>>> \"runtimecfg\": executable file not found in $PATH"
>>> Nov 20 07:04:16 localhost hyperkube[1909]: E1120 07:04:16.489714 1909
>>> kuberuntime_manager.go:783] init container start failed:
>>> CreateContainerError: container create failed: container_linux.go:345:
>>> starting container process caused "exec: \"runtimecfg\": executable file
>>> not found in $PATH"
>>>
>>> What do you think about this?
>>>
>>
>> I'm seeing the same now, checking...
>>
>>
> Because of the upstream move to release OKD, the release-image that comes
> with the installer I gave you is no longer valid.
>
> I need to prepare an installer version with the preview of OKD, you can
> find the details here
> https://mobile.twitter.com/smarterclayton/status/1196477646885965824
>
> I tested your latest openshift-installer container on quay.io, but the
> ovirt provider is not available anymore. Will ovirt be supported as an OKD
> 4.2 IaaS provider?
>
>
>
>
> (do I need to use the terraform-workers tag instead of latest?)

 docker pull quay.io/rgolangh/openshift-installer:terraform-workers


 [root@openshift-installer
 openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit]# ./oc get -o
 json clusterversion
 {
 "apiVersion": "v1",
 "items": [
 {
 "apiVersion": "config.openshift.io/v1",
 "kind": "ClusterVersion",
 "metadata": {
 "creationTimestamp": "2019-11-07T12:23:06Z",
 "generation": 1,
 "name": 

[ovirt-users] Re: ISO Upload

2020-01-06 Thread Strahil Nikolov
ISO domains are deprecated. Just upload it to the data domain.
Best Regards, Strahil Nikolov

On Monday, January 6, 2020 at 17:20:16 GMT+2, Christian Reiss wrote:
 
 Hey folks,

I have a cluster setup as follows (among others)

/vms for vm storage. Attached as data domain.
/isos for, well, isos. Attached as iso domain.

When I try to upload a(ny) iso via the ovirt engine, I can only select 
the vms storage path, not the iso storage path. Both are green and 
active across the board.

Is my understanding correct that I can only upload an ISO to a data 
domain, and then (once uploaded) move the ISO file to the ISO domain via 
the web UI?

-Chris.


-- 
with kind regards,
mit freundlichen Gruessen,

Christian Reiss


[ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding the Ovirt Engine System

2020-01-06 Thread Strahil Nikolov
It's necessary to mention that having the XML of each VM lets you define 
and start your VMs even without the HostedEngine VM.
So, if you implement snapshot-based backup, consider keeping a dump of the 
configuration of each VM.
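For example (the VM name is just a placeholder), roughly:

  virsh -r dumpxml MyVM > /backup/MyVM.xml   # read-only connection is enough for the dump
  virsh define /backup/MyVM.xml              # re-create the definition later (needs an authenticated virsh)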
Best Regards, Strahil Nikolov

On Monday, January 6, 2020 at 16:29:21 GMT+2, Yedidyah Bar David wrote:
 
 On Mon, Jan 6, 2020 at 4:19 PM Bob Franzke  wrote:
>
> So I am getting the impression that without a working ovirt engine, you are 
> sort of cooked from being able to control VMs such that your whole 
> organization can potentially come down to the availability of a single 
> machine? Is this really correct?

Correct.

This does not mean that the engine itself is necessarily critical - if
it's down, your VMs should still be ok. If _controlling_ VMs is
considered critical for you, then yes - you do need to make sure your
engine is alive and well.

> Are there HA options available for the engine server itself?

The standard option is using hosted-engine with several hosts - you
get HA out-of-the-box.

I also heard about people using standalone active/standby
clustering/HA solutions for the engine.

>
> -Original Message-
> From: Yedidyah Bar David (d...@redhat.com) 
> Sent: Monday, January 6, 2020 12:57 AM
> To: Bob Franzke 
> Cc: users 
> Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding 
> the Ovirt Engine System
>
> On Mon, Jan 6, 2020 at 12:00 AM Bob Franzke  wrote:
> >
> > Thanks for the reply here. Still waiting on a server to rebuild this with. 
> > Should be here tomorrow. The engine was running on bare metal server, and 
> > was not a VM.
> >
> > In the mean time we had a few of the VMs go dark for some reason. I 
> > discovered the vdsm-client commands and tried figuring out what happened. 
> > Is there any way I can start a VM via command line on one of the VM hosts? 
> > Is the vdsm-client command the way to do this without a working engine?
>
> It is, in principle, but that's not supported and is risky - because the 
> engine will not know what you do.
>
> See also e.g.:
>
> https://www.ovirt.org/develop/release-management/features/integration/cockpit.html
>
> >
> > -Original Message-
> > From: Yedidyah Bar David (d...@redhat.com) 
> > Sent: Tuesday, December 24, 2019 1:50 AM
> > To: Bob Franzke 
> > Cc: users 
> > Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for
> > Rebuilding the Ovirt Engine System
> >
> > On Mon, Dec 23, 2019 at 7:08 PM Bob Franzke  wrote:
> > >
> > > > Which nightly backups? Do they run engine-backup?
> > >
> > > Yes sorry. The backups are the backups created when running the 
> > > engine-backup script. So I have the files and the DB backed up and off 
> > > onto different storage. I just grabbed a copy of the entire /etc 
> > > directory as well just in case there was something needed in there that 
> > > is not included in the engine-backup solution.
> > >
> > > > In either case, assuming this is a production env, I suggest to first 
> > > > test on a separate env to see how it all looks like.
> > >
> > > This is a production environment. My plan is to get a new server ordered 
> > > and built, removing the old server from the equation (old server is old 
> > > and needs to be replaced anyway). Then rebuild the Ovirt bits and restore 
> > > the data from my backups.
> >
> > I assume, from your first post, that you refer to the host running the 
> > engine, and that this is a standalone engine, not hosted-engine.
> > Right? Meaning, it's running on bare-metal, not inside a VM managed by 
> > itself.
> >
> > For testing you can try stuff on an isolated VM somewhere, no need to wait 
> > for your new server to arrive.
> >
> > >
> > > I just more needed a quick set up steps to take here. From what I gather 
> > > I need to basically:
> > >
> > > 1. reinstall CentOS
> > > 2. Reconfigure storage (this server has several ISCSI LUNs its attached 
> > > to currently. I don’t know if they are required for this or what).
> >
> > I obviously have no idea what is your storage design and requirements, but 
> > this is largely a local matter, unrelated to the hosts that run VMs. The 
> > engine machine's storage is (normally) not used for that, only for the 
> > engine itself (and its db, etc.).
> >
> > > 3. Install PostGreSQL (maybe? Or does the ovirt engine script do
> > > this for you?) 3. Install Ovirt/run ovirt-engine script maybe?
> >
> > Add the relevant repo by installing the relevant ovirt-release* package (see the 
> > web site), and then 'yum install ovirt-engine' - this should grab 
> > postgresql etc. for you.
> >
> > > 4. Restore DB and data
> >
> > Yes. Run basically 'engine-backup --mode=restore' and then 'engine-setup'. 
> > Please check the backup/restore documentation on the web site.
> > If your current engine used only defaults (meaning, engine+dwh+their DBs 
> > all on the engine machine, provisioned by engine-setup), then the restore 
> > command should be 

[ovirt-users] Re: Deleted gluster volume

2020-01-06 Thread Strahil Nikolov
Do you see the gluster volume on the command line ('gluster volume list')?
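For example (the volume name is a placeholder):

  gluster volume list
  gluster volume info myvolume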
I have a very strong belief that the volume was actually deleted.
Best Regards, Strahil Nikolov
On Monday, January 6, 2020 at 15:13:52 GMT+2, Christian Reiss wrote:
 
 Hey folks,

I accidentally deleted a volume in the oVirt Engine, Storage -> Volumes. 
The gluster is still up on all three nodes, but the ovirt engine refuses 
to re-scan it. Or to re-introduce it.

How could I add the still valid and working (but deleted in the ovirt 
engine) gluster volume back to the engine?

It's a test setup. I am trying to break & repair things. Still I am 
unable to find any documentation on this "use case".

Thanks!

-- 
with kind regards,
mit freundlichen Gruessen,

Christian Reiss


[ovirt-users] Re: Ovirt OVN help needed

2020-01-06 Thread Strahil Nikolov
Hi Miguel,
I have read some blogs about OVN and I tried to collect some data that might 
hint at where the issue is.
I still struggle to "decode" it, but it may be easier for you or anyone on 
the list.
I am eager to receive your reply.
Thanks in advance and Happy New Year!

Best Regards, Strahil Nikolov

On Wednesday, December 18, 2019 at 21:10:31 GMT+2, Strahil Nikolov wrote:
 
That's a good question. ovirtmgmt is using a Linux bridge, but I'm not so sure 
about br-int. 'brctl show' does not understand what type br-int is, so I 
guess openvswitch.
This is still a guess, so you can give me the command to verify that :)
As the system was first built on 4.2.7, most probably it never used anything 
except openvswitch.
Thanks in advance for your help. I really appreciate it.
Best Regards, Strahil Nikolov

On Wednesday, December 18, 2019 at 17:53:31 GMT+2, Miguel Duarte de Mora 
Barroso wrote:
 
 On Wed, Dec 18, 2019 at 6:35 AM Strahil Nikolov  wrote:
>
> Hi Dominik,
>
> sadly reinstall of all hosts is not helping.
>
> @ Miguel,
>
> I have 2 clusters
> 1. Default (amd-based one) -> ovirt1 (192.168.1.90) & ovirt2 (192.168.1.64)
> 2. Intel (intel-based one and a gluster arbiter) -> ovirt3 (192.168.1.41)

But what are the switch types used on the clusters: openvswitch *or*
legacy / linux bridges ?
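
(One way to check this on a host, for example - the exact field names may
differ per version - would be something like:

  vdsm-client Host getCapabilities | grep -i switch

or simply look at the cluster's "Switch Type" in the Admin Portal.)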


>
> The output of the 2 commands (after I run reinstall on all hosts ):
>
> [root@engine ~]# ovn-sbctl list encap
> _uuid              : d4d98c65-11da-4dc8-9da3-780e7738176f
> chassis_name        : "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> ip                  : "192.168.1.90"
> options            : {csum="true"}
> type                : geneve
>
> _uuid              : ed8744a5-a302-493b-8c3b-19a4d2e170de
> chassis_name        : "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> ip                  : "192.168.1.64"
> options            : {csum="true"}
> type                : geneve
>
> _uuid              : b72ff0ab-92fc-450c-a6eb-ab2869dee217
> chassis_name        : "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> ip                  : "192.168.1.41"
> options            : {csum="true"}
> type                : geneve
>
>
> [root@engine ~]# ovn-sbctl list chassis
> _uuid              : b1da5110-f477-4c60-9963-b464ab96c644
> encaps              : [ed8744a5-a302-493b-8c3b-19a4d2e170de]
> external_ids        : {datapath-type="", 
> iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
>  ovn-bridge-mappings=""}
> hostname            : "ovirt2.localdomain"
> name                : "25cc77b3-046f-45c5-af0c-ffb2f77d73f1"
> nb_cfg              : 0
> transport_zones    : []
> vtep_logical_switches: []
>
> _uuid              : dcc94e1c-bf44-46a3-b9d1-45360c307b26
> encaps              : [b72ff0ab-92fc-450c-a6eb-ab2869dee217]
> external_ids        : {datapath-type="", 
> iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
>  ovn-bridge-mappings=""}
> hostname            : "ovirt3.localdomain"
> name                : "5668499c-7dd0-41ee-bc5d-2e6ee9cd61c3"
> nb_cfg              : 0
> transport_zones    : []
> vtep_logical_switches: []
>
> _uuid              : 897b34c5-d1d1-41a7-b2fd-5f1fa203c1da
> encaps              : [d4d98c65-11da-4dc8-9da3-780e7738176f]
> external_ids        : {datapath-type="", 
> iface-types="erspan,geneve,gre,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
>  ovn-bridge-mappings=""}
> hostname            : "ovirt1.localdomain"
> name                : "baa0199e-d1a4-484c-af13-a41bcad19dbc"
> nb_cfg              : 0
> transport_zones    : []
> vtep_logical_switches: []
>
>
> If you know an easy method to get back to the default settings, that would be best, as I'm 
> currently not using OVN in production (just for tests and to learn more about 
> how it works) and I can afford any kind of downtime.
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, December 17, 2019 at 11:28:25 GMT+2, Miguel Duarte de Mora 
> Barroso wrote:
>
>
> On Tue, Dec 17, 2019 at 10:19 AM Miguel Duarte de Mora Barroso
>  wrote:
> >
> > On Tue, Dec 17, 2019 at 9:17 AM Dominik Holler  wrote:
> > >
> > >
> > >
> > > On Tue, Dec 17, 2019 at 6:28 AM Strahil  wrote:
> > >>
> > >> Hi Dominik,
> > >>
> > >> Thanks for your reply.
> > >>
> > >> On ovirt1 I got the following:
> > >> [root@ovirt1 openvswitch]# less  ovn-controller.log-20191216.gz
> > >> 2019-12-15T01:49:02.988Z|00032|vlog|INFO|opened log file 
> > >> /var/log/openvswitch/ovn-controller.log
> > >> 2019-12-16T01:18:02.114Z|00033|vlog|INFO|closing log file
> > >> ovn-controller.log-20191216.gz (END)
> > >>
> > >> Same is on the other node:
> > >>
> > >> [root@ovirt2 openvswitch]# less ovn-controller.log-20191216.gz
> > >> 2019-12-15T01:26:03.477Z|00028|vlog|INFO|opened log file 
> > >> /var/log/openvswitch/ovn-controller.log
> > >> 2019-12-16T01:30:01.718Z|00029|vlog|INFO|closing log file
> > >> ovn-controller.log-20191216.gz (END)
> > >>
> > >> The strange thing is that the geneve tunnels are 

[ovirt-users] Support for Shared SAS storage

2020-01-06 Thread Vinícius Ferrão
Hello,

I have two compute nodes with SAS direct-attached storage sharing the same disks.

Looking at the supported types I can’t see this on the documentation: 
https://www.ovirt.org/documentation/admin-guide/chap-Storage.html

There is local storage in this documentation, but my case is two machines, 
both using SAS, connected to the same disks. It's the VRTX hardware from 
Dell.

Is there any support for this? It should be just like Fibre Channel and iSCSI, 
but with SAS instead.

Thanks,





[ovirt-users] Re: terraform integration

2020-01-06 Thread Nathanaël Blanchet

Hello Roy

On 21/11/2019 at 13:57, Roy Golan wrote:



On Thu, 21 Nov 2019 at 08:48, Roy Golan wrote:




On Wed, 20 Nov 2019 at 09:49, Nathanaël Blanchet wrote:


On 19/11/2019 at 19:23, Nathanaël Blanchet wrote:



On 19/11/2019 at 13:43, Roy Golan wrote:



On Tue, 19 Nov 2019 at 14:34, Nathanaël Blanchet wrote:

On 19/11/2019 at 08:55, Roy Golan wrote:

oc get -o json clusterversion


This is the output of the previous failed deployment,
I'll give a try to a newer one when I'll have a minute
to test


Without changing anything in the template, I gave it a new try
and... nothing works anymore now; none of the provided IPs can be
pinged: "dial tcp 10.34.212.51:6443: connect: no route to host", so
none of the masters can be provisioned by the bootstrap.

I tried with the latest RHCOS and the latest oVirt 4.3.7, it is
the same. Obviously something changed since my first attempt
12 days ago... is your docker image for openshift-installer
up to date?

Are you still able, on your side, to deploy a valid cluster?


I investigated by looking at the bootstrap logs (attached) and it
seems that every container dies immediately after being started.

Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.60107571
+ UTC m=+0.794838407 container init
446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603
(image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.623197173
+ UTC m=+0.816959853 container start
446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603
(image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
Nov 20 07:02:33 localhost podman[2024]: 2019-11-20 07:02:33.623814258
+ UTC m=+0.817576965 container attach
446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603
(image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
Nov 20 07:02:34 localhost systemd[1]:
libpod-446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603.scope:
Consumed 814ms CPU time
Nov 20 07:02:34 localhost podman[2024]: 2019-11-20 07:02:34.100569998
+ UTC m=+1.294332779 container died
446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603
(image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)
Nov 20 07:02:35 localhost podman[2024]: 2019-11-20 07:02:35.138523102
+ UTC m=+2.332285844 container remove
446dc9b7a04ff3ff4bbcfa6750e3946c084741b39707eb088c9d7ae648e35603
(image=registry.svc.ci.openshift.org/origin/release:4.3, name=eager_cannon)

and this:

Nov 20 07:04:16 localhost hyperkube[1909]: E1120
07:04:16.489527    1909 remote_runtime.go:200] CreateContainer
in sandbox
"58f2062aa7b6a5b2bdd6b9cf7b41a9f94ca2b30ad5a20e4fa4dec8a9b82f05e5"
from runtime service failed: rpc error: code = Unknown desc =
container create failed: container_linux.go:345: starting
container process caused "exec: \"runtimecfg\": executable
file not found in $PATH"
Nov 20 07:04:16 localhost hyperkube[1909]: E1120
07:04:16.489714    1909 kuberuntime_manager.go:783] init
container start failed: CreateContainerError: container create
failed: container_linux.go:345: starting container process
caused "exec: \"runtimecfg\": executable file not found in $PATH"

What do you think about this?


I'm seeing the same now, checking...


Because of the move upstream to release OKD the release-image that 
comes with the installer I gave you are no longer valid.


I need to prepare an installer version with the preview of OKD, you 
can find the details here 
https://mobile.twitter.com/smarterclayton/status/1196477646885965824
I tested your latest openshift-installer container on quay.io, but the 
ovirt provider is not available anymore. Will ovirt be supported as an 
OKD 4.2 IaaS provider?





(do I need to use the terraform-workers tag instead of
latest?)

docker pull quay.io/rgolangh/openshift-installer:terraform-workers




[ovirt-users] Re: ISO Upload

2020-01-06 Thread m . skrzetuski
I'd give up on the ISO domain. I started like you and then read the docs which 
said that ISO domain is deprecated.
I'd upload all files to a data domain.


[ovirt-users] Re: Confusion about storage domain ISO. How do I provision VMs?

2020-01-06 Thread Jan Zmeskal
Hi, I'm glad you could figure it out in the end. It's true that the message
"Paused by System" isn't super useful when trying to figure out what's
wrong. It would definitely be worth submitting a bug requesting a more
descriptive error message. Since you already have an environment that
reproduces the issue, would you please consider reporting the bug here
?

Thank you for reaching out to the mailing list and sharing the solution!

Jan


On Sat, Jan 4, 2020 at 11:10 PM  wrote:

> Solved after ridiculous research effort by finding
> https://access.redhat.com/solutions/3607351. I'd switch to Proxmox days
> ago if I wasn't such a Redhat and CentOS fanboy.


-- 

Jan Zmeskal

Quality Engineer, RHV Core System

Red Hat 



[ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding the Ovirt Engine System

2020-01-06 Thread Bob Franzke
I just had some VMs go offline over the weekend. I really cannot figure out how 
to tell why without the engine working. I don't really need to 'control' the 
VMs, but it seems that without the engine it's not just the control aspect you lose. It's the 
visibility it gives you into the state of your environment. We also used oVirt as 
a lab setup for users to access and build VMs as needed. This is 
completely offline now without a working engine. It seems like having an engine 
available all the time would be pretty important generally.

I have never understood the idea of having the machine that controls the VMs 
sit in the same infrastructure it's controlling. It seems like a very 'chicken or the egg' 
sort of thing to me. If the engine decides to move itself from one host to 
another, and it fails for some reason because the process of moving itself 
caused a problem (stopping services, etc.), then I'm not sure what you would end up 
with there. It seems very iffy to me, but maybe I am reading too much into it. 
Again, I admittedly don't know enough about oVirt to know if this thinking is 
off base or not. My own experience with networking systems means you would 
never set things up like this. Each system is autonomous and can take over for 
the other if one part fails. But then again, if oVirt Engine had been set up 
this way, maybe I wouldn't be in the position I am in now with no working engine. 
Lots to sort out. Thanks for the help.

-Original Message-
From: Yedidyah Bar David (d...@redhat.com)  
Sent: Monday, January 6, 2020 8:26 AM
To: Bob Franzke 
Cc: users 
Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding the 
Ovirt Engine System

On Mon, Jan 6, 2020 at 4:19 PM Bob Franzke  wrote:
>
> So I am getting the impression that without a working ovirt engine, you are 
> sort of cooked from being able to control VMs such that your whole 
> organization can potentially come down to the availability of a single 
> machine? Is this really correct?

Correct.

This does not mean that the engine itself is necessarily critical - if it's 
down, your VMs should still be ok. If _controlling_ VMs is considered critical 
for you, then yes - you do need to make sure your engine is alive and well.

> Are there HA options available for the engine server itself?

The standard option is using hosted-engine with several hosts - you get HA 
out-of-the-box.

I also heard about people using standalone active/standby clustering/HA 
solutions for the engine.

>
> -Original Message-
> From: Yedidyah Bar David (d...@redhat.com) 
> Sent: Monday, January 6, 2020 12:57 AM
> To: Bob Franzke 
> Cc: users 
> Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for 
> Rebuilding the Ovirt Engine System
>
> On Mon, Jan 6, 2020 at 12:00 AM Bob Franzke  wrote:
> >
> > Thanks for the reply here. Still waiting on a server to rebuild this with. 
> > Should be here tomorrow. The engine was running on bare metal server, and 
> > was not a VM.
> >
> > In the mean time we had a few of the VMs go dark for some reason. I 
> > discovered the vdsm-client commands and tried figuring out what happened. 
> > Is there any way I can start a VM via command line on one of the VM hosts? 
> > Is the vdsm-client command the way to do this without a working engine?
>
> It is, in principle, but that's not supported and is risky - because the 
> engine will not know what you do.
>
> See also e.g.:
>
> https://www.ovirt.org/develop/release-management/features/integration/
> cockpit.html
>
> >
> > -Original Message-
> > From: Yedidyah Bar David (d...@redhat.com) 
> > Sent: Tuesday, December 24, 2019 1:50 AM
> > To: Bob Franzke 
> > Cc: users 
> > Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for 
> > Rebuilding the Ovirt Engine System
> >
> > On Mon, Dec 23, 2019 at 7:08 PM Bob Franzke  wrote:
> > >
> > > > Which nightly backups? Do they run engine-backup?
> > >
> > > Yes sorry. The backups are the backups created when running the 
> > > engine-backup script. So I have the files and the DB backed up and off 
> > > onto different storage. I just grabbed a copy of the entire /etc 
> > > directory as well just in case there was something needed in there that 
> > > is not included in the engine-backup solution.
> > >
> > > > In either case, assuming this is a production env, I suggest to first 
> > > > test on a separate env to see how it all looks like.
> > >
> > > This is a production environment. My plan is to get a new server ordered 
> > > and built, removing the old server from the equation (old server is old 
> > > and needs to be replaced anyway). Then rebuild the Ovirt bits and restore 
> > > the data from my backups.
> >
> > I assume, from your first post, that you refer to the host running the 
> > engine, and that this is a standalone engine, not hosted-engine.
> > Right? Meaning, it's running on bare-metal, not inside a VM managed by 
> > itself.
> >
> > For testing you can try stuff on an isolated VM somewhere, no need to 

[ovirt-users] ISO Upload

2020-01-06 Thread Christian Reiss

Hey folks,

I have a cluster setup as follows (among others)

/vms for vm storage. Attached as data domain.
/isos for, well, isos. Attached as iso domain.

When I try to upload a(ny) iso via the ovirt engine, I can only select 
the vms storage path, not the iso storage path. Both are green and 
active across the board.


Is my understanding correct that I can only upload an ISO to a data 
domain, and then (once uploaded) move the ISO file to the ISO domain via 
the web UI?


-Chris.


--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss


[ovirt-users] Re: VM auto start with Host?

2020-01-06 Thread m . skrzetuski
Clean shutdown of the host or VM? And what is clean? Is "shutdown -h now" 
clean? Is stopping the host via BIOS for the night clean?


[ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding the Ovirt Engine System

2020-01-06 Thread Yedidyah Bar David
On Mon, Jan 6, 2020 at 4:19 PM Bob Franzke  wrote:
>
> So I am getting the impression that without a working ovirt engine, you are 
> sort of cooked from being able to control VMs such that your whole 
> organization can potentially come down to the availability of a single 
> machine? Is this really correct?

Correct.

This does not mean that the engine itself is necessarily critical - if
it's down, your VMs should still be ok. If _controlling_ VMs is
considered critical for you, then yes - you do need to make sure your
engine is alive and well.

> Are there HA options available for the engine server itself?

The standard option is using hosted-engine with several hosts - you
get HA out-of-the-box.

I also heard about people using standalone active/standby
clustering/HA solutions for the engine.

>
> -Original Message-
> From: Yedidyah Bar David (d...@redhat.com) 
> Sent: Monday, January 6, 2020 12:57 AM
> To: Bob Franzke 
> Cc: users 
> Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding 
> the Ovirt Engine System
>
> On Mon, Jan 6, 2020 at 12:00 AM Bob Franzke  wrote:
> >
> > Thanks for the reply here. Still waiting on a server to rebuild this with. 
> > Should be here tomorrow. The engine was running on bare metal server, and 
> > was not a VM.
> >
> > In the mean time we had a few of the VMs go dark for some reason. I 
> > discovered the vdsm-client commands and tried figuring out what happened. 
> > Is there any way I can start a VM via command line on one of the VM hosts? 
> > Is the vdsm-client command the way to do this without a working engine?
>
> It is, in principle, but that's not supported and is risky - because the 
> engine will not know what you do.
>
> See also e.g.:
>
> https://www.ovirt.org/develop/release-management/features/integration/cockpit.html
>
> >
> > -Original Message-
> > From: Yedidyah Bar David (d...@redhat.com) 
> > Sent: Tuesday, December 24, 2019 1:50 AM
> > To: Bob Franzke 
> > Cc: users 
> > Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for
> > Rebuilding the Ovirt Engine System
> >
> > On Mon, Dec 23, 2019 at 7:08 PM Bob Franzke  wrote:
> > >
> > > > Which nightly backups? Do they run engine-backup?
> > >
> > > Yes sorry. The backups are the backups created when running the 
> > > engine-backup script. So I have the files and the DB backed up and off 
> > > onto different storage. I just grabbed a copy of the entire /etc 
> > > directory as well just in case there was something needed in there that 
> > > is not included in the engine-backup solution.
> > >
> > > > In either case, assuming this is a production env, I suggest to first 
> > > > test on a separate env to see how it all looks like.
> > >
> > > This is a production environment. My plan is to get a new server ordered 
> > > and built, removing the old server from the equation (old server is old 
> > > and needs to be replaced anyway). Then rebuild the Ovirt bits and restore 
> > > the data from my backups.
> >
> > I assume, from your first post, that you refer to the host running the 
> > engine, and that this is a standalone engine, not hosted-engine.
> > Right? Meaning, it's running on bare-metal, not inside a VM managed by 
> > itself.
> >
> > For testing you can try stuff on an isolated VM somewhere, no need to wait 
> > for your new server to arrive.
> >
> > >
> > > I just more needed a quick set up steps to take here. From what I gather 
> > > I need to basically:
> > >
> > > 1. reinstall CentOS
> > > 2. Reconfigure storage (this server has several ISCSI LUNs its attached 
> > > to currently. I don’t know if they are required for this or what).
> >
> > I obviously have no idea what is your storage design and requirements, but 
> > this is largely a local matter, unrelated to the hosts that run VMs. The 
> > engine machine's storage is (normally) not used for that, only for the 
> > engine itself (and its db, etc.).
> >
> > > 3. Install PostGreSQL (maybe? Or does the ovirt engine script do
> > > this for you?) 3. Install Ovirt/run ovirt-engine script maybe?
> >
> > Add the relevant repo by installing the relevant ovirt-release* package (see the 
> > web site), and then 'yum install ovirt-engine' - this should grab 
> > postgresql etc. for you.
> >
> > > 4. Restore DB and data
> >
> > Yes. Run basically 'engine-backup --mode=restore' and then 'engine-setup'. 
> > Please check the backup/restore documentation on the web site.
> > If your current engine used only defaults (meaning, engine+dwh+their DBs 
> > all on the engine machine, provisioned by engine-setup), then the restore 
> > command should be something like:
> >
> > engine-backup --mode=restore --file=your-backup-file
> > --provision-all-databases
> >
> > Again, please test on a test VM somewhere, and make sure it's isolated
> > - that it can't reach your hosts and start to manage them (unless that's 
> > what you want, of course).
> >
> > >
> > > I am not sure the details of the list outlined above 

[ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding the Ovirt Engine System

2020-01-06 Thread Bob Franzke
So I am getting the impression that without a working oVirt engine you are 
sort of cooked as far as controlling VMs goes, such that your whole organization 
can potentially come down to the availability of a single machine? Is this 
really correct? Are there HA options available for the engine server itself?

-Original Message-
From: Yedidyah Bar David (d...@redhat.com)  
Sent: Monday, January 6, 2020 12:57 AM
To: Bob Franzke 
Cc: users 
Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for Rebuilding the 
Ovirt Engine System

On Mon, Jan 6, 2020 at 12:00 AM Bob Franzke  wrote:
>
> Thanks for the reply here. Still waiting on a server to rebuild this with. 
> Should be here tomorrow. The engine was running on bare metal server, and was 
> not a VM.
>
> In the mean time we had a few of the VMs go dark for some reason. I 
> discovered the vdsm-client commands and tried figuring out what happened. Is 
> there any way I can start a VM via command line on one of the VM hosts? Is 
> the vdsm-client command the way to do this without a working engine?

It is, in principle, but that's not supported and is risky - because the engine 
will not know what you do.
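
(For instance, assuming standard vdsm tooling and a placeholder VM UUID, one
can at least inspect what vdsm sees on a host with:

  vdsm-client Host getVMList
  vdsm-client VM getStats vmID=<VM-UUID>

but actually starting VMs that way is the risky, unsupported part.)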

See also e.g.:

https://www.ovirt.org/develop/release-management/features/integration/cockpit.html

>
> -Original Message-
> From: Yedidyah Bar David (d...@redhat.com) 
> Sent: Tuesday, December 24, 2019 1:50 AM
> To: Bob Franzke 
> Cc: users 
> Subject: [ovirt-users] Re: OVirt Engine Server Died - Steps for 
> Rebuilding the Ovirt Engine System
>
> On Mon, Dec 23, 2019 at 7:08 PM Bob Franzke  wrote:
> >
> > > Which nightly backups? Do they run engine-backup?
> >
> > Yes sorry. The backups are the backups created when running the 
> > engine-backup script. So I have the files and the DB backed up and off onto 
> > different storage. I just grabbed a copy of the entire /etc directory as 
> > well just in case there was something needed in there that is not included 
> > in the engine-backup solution.
> >
> > > In either case, assuming this is a production env, I suggest to first 
> > > test on a separate env to see how it all looks like.
> >
> > This is a production environment. My plan is to get a new server ordered 
> > and built, removing the old server from the equation (old server is old and 
> > needs to be replaced anyway). Then rebuild the Ovirt bits and restore the 
> > data from my backups.
>
> I assume, from your first post, that you refer to the host running the 
> engine, and that this is a standalone engine, not hosted-engine.
> Right? Meaning, it's running on bare-metal, not inside a VM managed by itself.
>
> For testing you can try stuff on an isolated VM somewhere, no need to wait 
> for your new server to arrive.
>
> >
> > I just more needed a quick set up steps to take here. From what I gather I 
> > need to basically:
> >
> > 1. reinstall CentOS
> > 2. Reconfigure storage (this server has several ISCSI LUNs its attached to 
> > currently. I don’t know if they are required for this or what).
>
> I obviously have no idea what is your storage design and requirements, but 
> this is largely a local matter, unrelated to the hosts that run VMs. The 
> engine machine's storage is (normally) not used for that, only for the engine 
> itself (and its db, etc.).
>
> > 3. Install PostGreSQL (maybe? Or does the ovirt engine script do 
> > this for you?) 3. Install Ovirt/run ovirt-engine script maybe?
>
> Add the relevant repo by installing the relevant ovirt-release* package (see the web 
> site), and then 'yum install ovirt-engine' - this should grab 
> postgresql etc. for you.
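>
> For oVirt 4.3, for example, that would look roughly like this (check the
> web site for the exact release package for your version):
>
>   yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
>   yum install ovirt-engine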
>
> > 4. Restore DB and data
>
> Yes. Run basically 'engine-backup --mode=restore' and then 'engine-setup'. 
> Please check the backup/restore documentation on the web site.
> If your current engine used only defaults (meaning, engine+dwh+their DBs all 
> on the engine machine, provisioned by engine-setup), then the restore command 
> should be something like:
>
> engine-backup --mode=restore --file=your-backup-file 
> --provision-all-databases
>
> Again, please test on a test VM somewhere, and make sure it's isolated
> - that it can't reach your hosts and start to manage them (unless that's what 
> you want, of course).
>
> >
> > I am not sure the details of the list outlined above (what to run where, 
> > etc.). I am looking for consultants to help me out here as its clear I am a 
> > bit behind the curve on this one. So far not much has worked out on that 
> > front. Does the above list seem reasonable in terms of needed steps to get 
> > this going again?
>
> See above.
>
> For consultants, you might want to check:
>
> https://www.ovirt.org/community/user-stories/users-and-providers.html
>
> And/or post again to the list with a subject line that's more likely to 
> attract them ("Looking for an oVirt consultant...").
>
> Good luck and best regards,
>
> >
> >
> > -Original Message-
> > From: Yedidyah Bar David (d...@redhat.com) 
> > Sent: Sunday, 

[ovirt-users] Deleted gluster volume

2020-01-06 Thread Christian Reiss

Hey folks,

I accidentally deleted a volume in the oVirt Engine, Storage -> Volumes. 
The gluster is still up on all three nodes, but the ovirt engine refuses 
to re-scan it. Or to re-introduce it.


How could I add the still valid and working (but deleted in the ovirt 
engine) gluster volume back to the engine?


It's a test setup. I am trying to break & repair things. Still I am 
unable to find any documentation on this "use case".


Thanks!

--
with kind regards,
mit freundlichen Gruessen,

Christian Reiss


[ovirt-users] supervdsm failing during network_caps

2020-01-06 Thread Alan G
Hi,

I have issues with one host where supervdsm is failing in network_caps.

I see the following trace in the log:

MainProcess|jsonrpc/1::ERROR::2020-01-06 03:01:05,558::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) Error in network_caps
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 98, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 56, in network_caps
    return netswitch.configurator.netcaps(compatibility=30600)
  File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 317, in netcaps
    net_caps = netinfo(compatibility=compatibility)
  File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 325, in netinfo
    _netinfo = netinfo_get(vdsmnets, compatibility)
  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 150, in get
    return _stringify_mtus(_get(vdsmnets))
  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 59, in _get
    ipaddrs = getIpAddrs()
  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/addresses.py", line 72, in getIpAddrs
    for addr in nl_addr.iter_addrs():
  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/addr.py", line 33, in iter_addrs
    with _nl_addr_cache(sock) as addr_cache:
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/__init__.py", line 92, in _cache_manager
    cache = cache_allocator(sock)
  File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/libnl.py", line 469, in rtnl_addr_alloc_cache
    raise IOError(-err, nl_geterror(err))
IOError: [Errno 16] Message sequence number mismatch



A restart of supervdsm will resolve the issue for a period, maybe 24 hours, 
then it will occur again. So I'm thinking it's  resource exhaustion or a leak 
of some kind?
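
(The restart in question is just e.g. 'systemctl restart supervdsmd' on the host.)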



Running 4.2.8.2 with VDSM at 4.20.46.



I've had a look through the bugzilla and can't find an exact match, closest was 
this one https://bugzilla.redhat.com/show_bug.cgi?id=1666123 which seems to be 
a RHV only fix.



Thanks,



Alan


[ovirt-users] Re: Cluster CPU type update

2020-01-06 Thread Eli Mesika
On Sun, Jan 5, 2020 at 4:24 PM Strahil  wrote:

> Just a question out of curiosity.
> Can't we just define a second cluster and move each VM from the old cluster to
> the new one?
>

You can.
If you want the VMs to run on the new cluster's hosts you should:

1) add the new cluster
2) have at least 1 running host in this cluster (assuming that it can run
all your VMs; if not, add more hosts)
3) shut all your VMs down
4) put all your hosts in the old cluster into Maintenance; by doing so, one of
your hosts in the new cluster will switch to "Contending" and then to "SPM"
5) after that, you can run all your VMs again and they should be run by the
hosts in the new cluster
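
For the actual cluster change of each (powered-off) VM, besides editing the VM
in the Admin Portal, the REST API can be scripted - a rough, untested sketch
with placeholder credentials, names and IDs:

  curl -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' -X PUT \
       -d '<vm><cluster><name>NewCluster</name></cluster></vm>' \
       'https://engine.example.com/ovirt-engine/api/vms/VM_ID'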

Regards
Eli



> Best Regards,
> Strahil Nikkolov
> On Jan 5, 2020 12:29, Eli Mesika  wrote:
>
> This is the right way to do that IMO
>
> On Sat, Jan 4, 2020 at 3:09 AM Morris, Roy  wrote:
>
> Hello,
>
> I'm running a 4.3 cluster with 56 VMs all set to default cluster and the
> cluster itself is set to Intel Westmere Family. I am getting a new server
> and retiring 3 old ones which brings me to my question.
>
> When implementing the new host into the cluster and retiring the older
> hosts, I want to change the CPU type setting in the Default cluster to
> Broadwell, which is what my new 3-host cluster will be. To make this change,
> I assume the process would be to turn off all VMs, set the cluster CPU type to
> Broadwell, then turn on the VMs to get the new CPU flags?
>
> Just wanted to reach out and see if someone else has had to go through
> this before and has any notes/tips.
>
> Best regards,
>
> Roy
>


[ovirt-users] Re: How do I do bridged networking?

2020-01-06 Thread m . skrzetuski
Hello Ales,

it did not work as described at first, but now it has magically solved itself. I can 
see the VMs as normal servers on my network. Thanks.

Kind regards
Skrzetuski