[ovirt-users] Re: hyperconverged cluster - how to change the mount path?

2018-07-04 Thread Renout Gerrits
unsupported, make backups, use at your own risk etc...

You could update the DB if you can't put the storage domain into maintenance.
After that, put your hosts into maintenance and out again to remount.

Find the id of the storage domain you want to update with:
  engine=# select * from storage_server_connections;

Ensure you have the correct id; the following should point to the old mount
point:
  engine=# select connection from storage_server_connections where id='<id>';

Next, update your db:
  engine=# update storage_server_connections set connection='<new mount path>' where id='<id>';

Liebe, André-Sebastian wrote:
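Put together, the whole sequence on the engine database looks like this (every value is a placeholder, and again: unsupported, back up the engine DB first):

```sql
-- list all storage connections and note the id of the hosted_storage one
select id, connection from storage_server_connections;

-- double-check the row still shows the old mount point
select connection from storage_server_connections where id = '<id>';

-- point it at the new mount path
update storage_server_connections set connection = '<new mount path>' where id = '<id>';
```

Then cycle the hosts through maintenance so they remount.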

> Yeah, sorry that doesn’t work.
>
> I can’t set hosted_storage (storage domain where hosted engine runs on)
> into maintenance mode to being able to edit it.
>
>
>
> André
>
>
>
> *From:* Gobinda Das [mailto:go...@redhat.com]
> *Sent:* Monday, 2 July 2018 09:00
> *To:* Alex K
> *Cc:* Liebe, André-Sebastian; users
> *Subject:* Re: [ovirt-users] Re: hyperconverged cluster - how to change
> the mount path?
>
>
>
> You can do it by using the "Manage Domain" option from the Storage Domain.
>
>
>
> On Sun, Jul 1, 2018 at 7:02 PM, Alex K  wrote:
>
> The steps roughly would be to put that storage domain into maintenance, then
> edit/redefine it. You have the option to set gluster mount point options
> for the redundancy part. No need to set DNS round robin.
>
>
>
> Alex
>
>
>
> On Sun, Jul 1, 2018, 13:29 Liebe, André-Sebastian 
> wrote:
>
> Hi list,
>
> I'm looking for advice on how to change the mount point of the
> hosted_storage due to a hostname change.
>
> When I set up our hyperconverged lab cluster (host1, host2, host3) I
> populated the mount path with host3:/hosted_storage, which wasn't very
> clever as it brings in a single point of failure (i.e. when host3 is down).
> So I thought adding a round-robin DNS/hosts entry (e.g. gluster1) for hosts
> 1 to 3 and changing the mount path would be a better idea. But the mount
> path entry is locked in the web GUI and I couldn't find any hint on how to
> change it manually (in database, shared and local configuration) in a
> consistent way without risking the cluster.
> So, is there a step-by-step guide on how to achieve this without
> reinstalling (from backup)?
>
>
> Sincerely
>
> André-Sebastian Liebe
> Technik / Innovation
>
> gematik
> Gesellschaft für Telematikanwendungen der Gesundheitskarte mbH
> Friedrichstraße 136
> 10117 Berlin
> Telefon: +49 30 40041-197
> Telefax: +49 30 40041-111
> E-Mail:  andre.li...@gematik.de
> www.gematik.de
> ___
> Amtsgericht Berlin-Charlottenburg HRB 96351 B
> Geschäftsführer: Alexander Beyer
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/B2R6G3VCK545RKT5BMAQ5EXO4ZFJSMFG/
>
>
>
>
>
>
>
> --
>
> Thanks,
>
> Gobinda
>
>
>


Re: [ovirt-users] Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider

2018-02-09 Thread Renout Gerrits
Hi Maoz,

The ssh keys should not be set up on the engine, and not for the root user.
The actions are delegated to a host and the vdsm user, so you should set up
ssh keys for the vdsm user on one or all of the hosts (remember to select
that host as the proxy host in the GUI). The documentation should probably
be updated to make this clearer.

1. Generate the key pair for the vdsm user:

   # sudo -u vdsm ssh-keygen

2. Do the first login to confirm the fingerprint, answering "yes":

   # sudo -u vdsm ssh root@xxx.xxx.xxx.xxx

3. Then copy the key to the KVM host running the VM:

   # sudo -u vdsm ssh-copy-id root@xxx.xxx.xxx.xxx

4. Now verify that the vdsm user can log in without a password:

   # sudo -u vdsm ssh root@xxx.xxx.xxx.xxx


On Thu, Feb 8, 2018 at 3:12 PM, Petr Kotas  wrote:

> You can generate one :). There are different guides for different
> platforms.
>
> The link I sent is the good start on where to put the keys and how to set
> it up.
>
> Petr
>
> On Thu, Feb 8, 2018 at 3:09 PM, maoz zadok  wrote:
>
>> Using the command line on the engine machine (as root) works fine. I
>> don't use an ssh key from the agent GUI but the authentication section
>> (with root user and password).
>> I think it's a bug; I managed to migrate with TCP, but I just wanted to
>> let you know.
>>
>> Is it possible to use an ssh key from the agent GUI? How can I get the key?
>>
>> On Thu, Feb 8, 2018 at 2:51 PM, Petr Kotas  wrote:
>>
>>> Hi Maoz,
>>>
>>> it looks like it cannot connect due to a wrong setup of the ssh keys.
>>> Which Linux are you using?
>>> The guide for setting the ssh connection to  libvirt is here:
>>> https://wiki.libvirt.org/page/SSHSetup
>>>
>>> Maybe it helps?
>>>
>>> Petr
>>>
>>> On Wed, Feb 7, 2018 at 10:53 PM, maoz zadok  wrote:
>>>
 Hello there,

 I'm following https://www.ovirt.org/develop/
 release-management/features/virt/KvmToOvirt/ guide in order to import
 VMS from Libvirt to oVirt using ssh.
  URL:  "qemu+ssh://host1.example.org/system"

 and get the following error:
 Failed to communicate with the external provider, see log for
 additional details.


 *oVirt agent log:*

 *- Failed to retrieve VMs information from external server
 qemu+ssh://XXX.XXX.XXX.XXX/system*
 *- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot
 recv data: Host key verification failed.: Connection reset by peer*



 *remote host sshd DEBUG log:*
 *Feb  7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port
 48148 on XXX.XXX.XXX.123 port 22*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0;
 client software version OpenSSH_7.4*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat
 OpenSSH* compat 0x0400*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: Local version string
 SSH-2.0-OpenSSH_7.4*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode
 for protocol 2.0*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled
 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74
 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types:
 ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 
 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent
 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received
 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm:
 curve25519-sha256 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm:
 ecdsa-sha2-nistp256 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher:
 chacha20-poly1305@openssh.com MAC: <implicit> compression: none [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher:
 chacha20-poly1305@openssh.com MAC: <implicit> compression: none [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256
 need=64 dh_need=64 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256
 need=64 dh_need=64 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: expecting
 SSH2_MSG_KEX_ECDH_INIT [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 blocks
 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent
 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS
 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147
 port 48148 [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]*
 *Feb  7 16:38:29 XXX sshd[110005]: debug1: do_cleanup*
 *Feb  7 16:38:29 XXX sshd[110005]: 

Re: [ovirt-users] Unable to change compatibility version due to hosted engine

2016-12-15 Thread Renout Gerrits
Thanks, will give that a go

Karma++ :)



On Thu, Dec 15, 2016 at 12:13 PM, Simone Tiraboschi <stira...@redhat.com>
wrote:

>
>
> On Thu, Dec 15, 2016 at 12:04 PM, Renout Gerrits <m...@renout.nl> wrote:
>
>> Hi Simone,
>>
>> Do you mean the following?
>>
>> - create a new cluster with version 3.6.
>> - Migrate HE to new cluster
>> - Shutdown all VM's in old cluster
>> - change compatibility version of old cluster to 3.6
>> - migrate HE back to the old cluster
>>
>> In this case the old cluster still thinks that the HE is running in it
>> due to the ChangeVmCluster action that fails. From what I see this is fixed
>> in 4.02: https://bugzilla.redhat.com/show_bug.cgi?id=1351533
>> But I can't upgrade to 4 yet. Do you know if this fix has been
>> backported to 3.6?
>>
>
> Yes, it has been backported to 3.6.9:
> https://gerrit.ovirt.org/#/c/63377/2
>
> Another option is just to add a new hosted-engine host to the new 3.6
> cluster and restart the engine VM there from the hosted-engine CLI.
>
> For me it's hard to just try as I will need a maintenance window to
>> shut down all VMs.
>>
>> Or do you mean something completely different?
>>
>> Thanks,
>> Renout
>>
>> On Thu, Dec 15, 2016 at 11:01 AM, Simone Tiraboschi <stira...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Thu, Dec 15, 2016 at 10:33 AM, Renout Gerrits <m...@renout.nl> wrote:
>>>
>>>> Hi All,
>>>>
>>>> We have an environment which we want to upgrade to ovirt 4.0. This was
>>>> initially installed at 3.5, then upgraded to 3.6.
>>>> Problem we're facing is that for an upgrade to 4.0 a compatibility
>>>> version of 3.6 is required.
>>>> When changing the cluster compatibility version of the 'Default'
>>>> cluster from 3.5 to 3.6 we get the error in the gui: "Cannot change cluster
>>>> compatibility version when a VM is active. please shutdown all VMs in the
>>>> cluster."
>>>> Even when we shut down all VMs except for the Hosted Engine, we get
>>>> this error.
>>>> On the hosts a 'vdsClient -s 0 list' is done which will return the HE.
>>>> In the engine logs we have the following error: "2016-12-08
>>>> 13:00:18,139 WARN  [org.ovirt.engine.core.bll.st
>>>> orage.UpdateStoragePoolCommand] (default task-25) [77a50037]
>>>> CanDoAction of action 'UpdateStoragePool' failed for user admin@internal.
>>>> Reasons: VAR__TYPE__STORAGE__POOL,VAR__ACTION__UPDATE,$ClustersList
>>>> Default,ERROR_CANNOT_UPDATE_STORAGE_POOL_COMPATIBILITY_VERSI
>>>> ON_BIGGER_THAN_CLUSTERS"
>>>>
>>>> So problem would be that the HE is in the Default cluster. But how does
>>>> one change the compatibility version when the HE is down?
>>>> I've tried shutting down the engine, changing the version in the DB:
>>>> "UPDATE vds_groups SET compatibility_version='3.6';" and starting the
>>>> engine again.
>>>>
>>>> When I do that and try to start a VM:
>>>> 2016-12-09T13:30:21.346740Z qemu-kvm: warning: CPU(s) not present in
>>>> any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>>> 2016-12-09T13:30:21.346883Z qemu-kvm: warning: All CPU(s) up to maxcpus
>>>> should be described in NUMA config
>>>> 2016-12-09T13:30:21.355699Z qemu-kvm: "-memory 'slots|maxmem'" is not
>>>> supported by: rhel6.5.0
>>>>
>>>> So that change was rolled back to compatibility 3.5. After that we were
>>>> able to start VMs again.
>>>> Please note that all hosts and HE are EL7.
>>>>
>>>> To me this doesn't seem like a strange set-up or upgrade path. Would it
>>>> be possible to start the HE in another cluster than Default or is there a
>>>> way to bypass the vdsClient list check?
>>>> What is the recommended way of upgrading the HE in this case?
>>>>
>>>
>>> Please take a look here:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1364557
>>>
>>>
>>>>
>>>> Kind regards,
>>>> Renout
>>>>
>>>> ___
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
>


Re: [ovirt-users] Unable to change compatibility version due to hosted engine

2016-12-15 Thread Renout Gerrits
Hi Simone,

Do you mean the following?

- create a new cluster with version 3.6.
- Migrate HE to new cluster
- Shutdown all VM's in old cluster
- change compatibility version of old cluster to 3.6
- migrate HE back to the old cluster

In this case the old cluster still thinks that the HE is running in it due
to the ChangeVmCluster action that fails. From what I see this is fixed in
4.02: https://bugzilla.redhat.com/show_bug.cgi?id=1351533
But I can't upgrade to 4 yet. Do you know if this fix has been backported
to 3.6? For me it's hard to just try as I will need a maintenance window to
shut down all VMs.

Or do you mean something completely different?

Thanks,
Renout

On Thu, Dec 15, 2016 at 11:01 AM, Simone Tiraboschi <stira...@redhat.com>
wrote:

>
>
> On Thu, Dec 15, 2016 at 10:33 AM, Renout Gerrits <m...@renout.nl> wrote:
>
>> Hi All,
>>
>> We have an environment which we want to upgrade to ovirt 4.0. This was
>> initially installed at 3.5, then upgraded to 3.6.
>> Problem we're facing is that for an upgrade to 4.0 a compatibility
>> version of 3.6 is required.
>> When changing the cluster compatibility version of the 'Default' cluster
>> from 3.5 to 3.6 we get the error in the gui: "Cannot change cluster
>> compatibility version when a VM is active. please shutdown all VMs in the
>> cluster."
>> Even when we shut down all VMs except for the Hosted Engine, we get this
>> error.
>> On the hosts a 'vdsClient -s 0 list' is done which will return the HE.
>> In the engine logs we have the following error: "2016-12-08 13:00:18,139
>> WARN  [org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand]
>> (default task-25) [77a50037] CanDoAction of action 'UpdateStoragePool'
>> failed for user admin@internal. Reasons: 
>> VAR__TYPE__STORAGE__POOL,VAR__ACTION__UPDATE,$ClustersList
>> Default,ERROR_CANNOT_UPDATE_STORAGE_POOL_COMPATIBILITY_VERSI
>> ON_BIGGER_THAN_CLUSTERS"
>>
>> So problem would be that the HE is in the Default cluster. But how does
>> one change the compatibility version when the HE is down?
>> I've tried shutting down the engine, changing the version in the DB:
>> "UPDATE vds_groups SET compatibility_version='3.6';" and starting the
>> engine again.
>>
>> When I do that and try to start a VM:
>> 2016-12-09T13:30:21.346740Z qemu-kvm: warning: CPU(s) not present in any
>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-12-09T13:30:21.346883Z qemu-kvm: warning: All CPU(s) up to maxcpus
>> should be described in NUMA config
>> 2016-12-09T13:30:21.355699Z qemu-kvm: "-memory 'slots|maxmem'" is not
>> supported by: rhel6.5.0
>>
>> So that change was rolled back to compatibility 3.5. After that we were
>> able to start VMs again.
>> Please note that all hosts and HE are EL7.
>>
>> To me this doesn't seem like a strange set-up or upgrade path. Would it
>> be possible to start the HE in another cluster than Default or is there a
>> way to bypass the vdsClient list check?
>> What is the recommended way of upgrading the HE in this case?
>>
>
> Please take a look here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1364557
>
>
>>
>> Kind regards,
>> Renout
>>
>>
>>
>


[ovirt-users] ovirtsdk4 and cloud-init

2016-08-31 Thread Renout Gerrits
I have some trouble starting a VM with cloud-init via the ovirtsdk4.


vm = vms_service.list(search=vm_name)[0]
vm_service = vms_service.vm_service(vm.id)

vm_service.start(
    types.Action(
        use_cloud_init=True,
    ),
    types.Vm(
        types.Initialization(
            regenerate_ssh_keys=True,
            host_name=vm_fqdn,
            nic_configurations=[
                types.NicConfiguration(
                    boot_protocol=types.BootProtocol.STATIC,
                    name='eth0',
                    on_boot=True,
                    ip=types.Ip(
                        address=vm_address,
                        netmask=vm_netmask,
                        gateway=vm_gateway,
                    ),
                ),
            ],
        ),
    ),
)


Which will result in:
Traceback (most recent call last):
  File "./create_vm.py", line 94, in <module>
gateway=vm_gateway,
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line
18655, in start
Writer.write_boolean(writer, 'async', async)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writer.py", line 52,
in write_boolean
return writer.write_element(name, Writer.render_boolean(value))
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/writer.py", line 44,
in render_boolean
raise TypeError('The \'value\' parameter must be a boolean')
TypeError: The 'value' parameter must be a boolean

To be honest I don't have a clue where it's going wrong or where to look.
From what I can see there are two values which must be booleans, and they
are.
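For what it's worth, the traceback shows the first positional argument reaching the SDK's 'async' flag (Writer.write_boolean(writer, 'async', async)), as if the arguments bind to the wrong parameters. A toy illustration of that failure mode, in plain Python — this is not the oVirt SDK, and the class and parameter names are made up:

```python
# Stand-ins for the SDK types (illustration only, not ovirtsdk4).
class Action:
    def __init__(self, use_cloud_init=False):
        self.use_cloud_init = use_cloud_init

class VmService:
    # Several optional parameters: positional callers can silently hit
    # the wrong one. The first slot here plays the role of 'async'.
    def start(self, async_=None, use_cloud_init=None, vm=None):
        if async_ is not None and not isinstance(async_, bool):
            raise TypeError("The 'value' parameter must be a boolean")
        return (use_cloud_init, vm)

svc = VmService()
try:
    # Positional call: the Action object lands in async_, not use_cloud_init.
    svc.start(Action(use_cloud_init=True))
except TypeError as e:
    print(e)  # The 'value' parameter must be a boolean

# Binding by keyword cannot mis-land:
assert svc.start(use_cloud_init=True, vm='vm-config') == (True, 'vm-config')
```

If the same thing happens in the real SDK, calling start() with keyword arguments would be the thing to try, but I can't confirm that from here.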

Does anybody know what's going wrong or has a working snippet?

Thanks,
Renout


Re: [ovirt-users] ovirt-ha-agent

2016-08-26 Thread Renout Gerrits
Depends on your systemd configuration. The ovirt-ha-agent and broker
daemons both log to stdout and to their own logfiles. All messages to
stdout go to journald and are forwarded to /var/log/messages
(ForwardToSyslog=yes in /etc/systemd/journald.conf, I think).
So the ovirt-ha-agent doesn't log to /var/log/messages, journald does.
Whether it should log to stdout is another discussion, but maybe there's a
good reason for that (backwards compatibility, I don't know).

An easy fix is redirecting the output of the daemon to /dev/null: in
/usr/lib/systemd/system/ovirt-ha-agent.service, add StandardOutput=null to
the [Service] section.
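As a sketch, the same override can also live in a systemd drop-in instead of the shipped unit file (the drop-in file name here is made up), which survives package updates:

```ini
# /etc/systemd/system/ovirt-ha-agent.service.d/no-stdout.conf
[Service]
StandardOutput=null
```

Then run systemctl daemon-reload and restart ovirt-ha-agent.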

Renout

On Thu, Aug 25, 2016 at 10:39 PM, David Gossage  wrote:

> This service seems to be logging to both /var/log/messages
> and /var/log/ovirt-hosted-engine-ha/agent.log
>
> Anything that may be causing that?  Centos7 ovirt 3.6.7
>
> MainThread::INFO::2016-08-25 15:38:36,912::ovf_store::109::
> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> Extracting Engine VM OVF from the OVF_STORE
> MainThread::INFO::2016-08-25 15:38:36,976::ovf_store::116::
> ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> OVF_STORE volume path: /rhev/data-center/mnt/glusterSD/ccgl1.gl.local:
> HOST1/6a0bca4a-a1be-47d3-be51-64c6277d1f0f/images/c12c8000-
> 0373-419b-963b-98b04adca760/fb6e2509-4786-433d-868f-a6303dd69cca
> MainThread::INFO::2016-08-25 15:38:37,097::config::226::
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.
> config::(refresh_local_conf_file) Found an OVF for HE VM, trying to
> convert
> MainThread::INFO::2016-08-25 15:38:37,102::config::231::
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.
> config::(refresh_local_conf_file) Got vm.conf from OVF_STORE
> MainThread::INFO::2016-08-25 15:38:37,200::hosted_engine::
> 462::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 3400)
> MainThread::INFO::2016-08-25 15:38:37,201::hosted_engine::
> 467::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host ccovirt1.carouselchecks.local (id: 1, score: 3400)
> MainThread::INFO::2016-08-25 15:38:47,346::hosted_engine::
> 613::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_vdsm) Initializing VDSM
> MainThread::INFO::2016-08-25 15:38:47,439::hosted_engine::
> 658::ovirt_hosted_engine_ha.agent.hosted_engine.
> HostedEngine::(_initialize_storage_images) Connecting the storage
> MainThread::INFO::2016-08-25 15:38:47,454::storage_server::
> 218::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> MainThread::INFO::2016-08-25 15:38:47,618::storage_server::
> 222::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
> Connecting storage server
> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: INFO:ovirt_hosted_engine_ha.
> broker.listener.ConnectionHandler:Connection closed
>
> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: INFO:mem_free.MemFree:memFree:
> 16466
> Aug 25 15:38:40 ccovirt3 ovirt-ha-broker: 
> INFO:engine_health.CpuLoadNoEngine:VM
> not on this host
> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: INFO:mgmt_bridge.MgmtBridge:Found
> bridge ovirtmgmt with ports
> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: 
> INFO:cpu_load_no_engine.EngineHealth:VM
> not on this host
> Aug 25 15:38:45 ccovirt3 ovirt-ha-broker: 
> INFO:cpu_load_no_engine.EngineHealth:System
> load total=0.1022, engine=0., non-engine=0.1022
> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine:Initializing VDSM
> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine:Connecting the storage
> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer:Connecting storage server
> Aug 25 15:38:47 ccovirt3 ovirt-ha-agent: INFO:ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer:Connecting storage server
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
>
>


Re: [ovirt-users] Enable Gluster service during hosted-engine deploy

2016-08-25 Thread Renout Gerrits
Thanks for the link to the bug!

On Thu, Aug 25, 2016 at 11:45 AM, knarra <kna...@redhat.com> wrote:

> On 08/25/2016 01:20 PM, Renout Gerrits wrote:
>
> Hi all,
>
> Is there a way to enable the Gluster Service for a cluster in the hosted
> engine deploy?
> In the config append you make the gluster service available with:
> OVESETUP_CONFIG/applicationMode=str:both. But how would one enable it?
> I would like to enable it automatically. Now I change it afterwards with
> API,
>
> There is no automatic way as of now to do this. It can either be done via
> API or UI.
>
> but after the change I have to put the hosts into maintenance and activate
> them again. It would seem there must be a better way to do this.
>
> This is an issue as of now. There is a bug logged to track this
> https://bugzilla.redhat.com/show_bug.cgi?id=1313497
>
> Hope this helps !!
>
>
> Thanks,
> Renout
>
>
>
>
>


[ovirt-users] Enable Gluster service during hosted-engine deploy

2016-08-25 Thread Renout Gerrits
Hi all,

Is there a way to enable the Gluster Service for a cluster in the hosted
engine deploy?
In the config append you make the gluster service available with:
OVESETUP_CONFIG/applicationMode=str:both. But how would one enable it?
I would like to enable it automatically. Now I change it afterwards with
the API, but after the change I have to put the hosts into maintenance and
activate them again. It would seem there must be a better way to do this.
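As a sketch of the API change itself: the cluster gets updated with gluster_service enabled, e.g. a PUT of an XML body like the one built below. The element names follow my understanding of the oVirt REST API's cluster representation, so treat them as an assumption:

```python
import xml.etree.ElementTree as ET

# Build the request body for PUT /api/clusters/<cluster-id> that turns on
# the Gluster service for an existing cluster (element names assumed).
cluster = ET.Element('cluster')
ET.SubElement(cluster, 'gluster_service').text = 'true'
payload = ET.tostring(cluster, encoding='unicode')
print(payload)  # <cluster><gluster_service>true</gluster_service></cluster>
```

Sending it would be a normal authenticated PUT with Content-Type: application/xml; the hosts still need the maintenance cycle afterwards.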

Thanks,
Renout


Re: [ovirt-users] cloud_init not apply when v create from API

2016-03-29 Thread Renout Gerrits
In the more recent versions you have to use 'use_cloud_init=True' in the
API, which isn't described in most documentation yet. Maybe that's the
reason it isn't working?

Here's a working snippet:

vm = api.vms.get(name=vm_name)

action = params.Action(
use_cloud_init=True,
vm=params.VM(
  initialization=params.Initialization(
regenerate_ssh_keys=True,
host_name=vm_fqdn,
nic_configurations=params.GuestNicsConfiguration(
  nic_configuration=[
params.GuestNicConfiguration(
  name="eth0",
  boot_protocol="static",
  on_boot=True,
  ip=params.IP(
address=vm_address,
netmask=vm_netmask,
gateway=vm_gateway,
),
  ),
],
  ),
),
  ),
)

vm.start(action)


On Tue, Mar 29, 2016 at 2:03 PM, Arpit Makhiyaviya 
wrote:

> Hello,
> We are using the ovirt API with the JSON data format.
> We have created a VM from a template, and I want to set the IP, MAC
> address, user and password. We are using cloud_init for that, but it
> can't set any of these options.
>
>
> Regards,
> *Arpit Makhiyaviya*
> Software Engineer
> +91-79-40038284
> +91-971-437-6669
> 
>
>
>


[ovirt-users] vdsm lvm filter

2015-06-11 Thread Renout Gerrits
Hi All,

I've got a setup with oVirt and an EqualLogic iSCSI array. I'm using the
Dell HIT drivers. The install went fine, but after a reboot the storage
won't come up. From the vdsm logs I can see the volume groups can't be
found; in the lvm vgs command the following filter is used: [ '\''r|.*|'\'' ].
If I change the LVMCONF_TEMPLATE in /usr/share/vdsm/storage/lvm.py and add
the filter [ a|^/dev/eql/ovirt.*| ], the volume group is found and the
storage will be attached.

How is the lvm filter constructed? And how can I make sure my volume groups
are found without editing /usr/share/vdsm/storage/lvm.py?
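For background on why that ordering works: an LVM filter is a list of accept ('a') and reject ('r') regex patterns tried in order, the first match decides, and an unmatched device is accepted by default. A small sketch of that logic in plain Python (this is not vdsm's code, just an illustration of the semantics):

```python
import re

def lvm_accepts(device, patterns):
    """Mimic LVM's devices/filter: the first matching pattern wins.

    patterns is a list of ('a'|'r', regex) pairs; devices matching no
    pattern are accepted by default.
    """
    for action, pattern in patterns:
        if re.search(pattern, device):
            return action == 'a'
    return True

# vdsm's template filter rejects everything...
assert lvm_accepts('/dev/eql/ovirt-storage01', [('r', r'.*')]) is False

# ...so an accept rule for the eql devices has to come before it:
patterns = [('a', r'^/dev/eql/ovirt.*'), ('r', r'.*')]
assert lvm_accepts('/dev/eql/ovirt-storage01', patterns) is True
assert lvm_accepts('/dev/sda', patterns) is False
```

That is exactly what prepending the a|^/dev/eql/ovirt.*| pattern to the template achieves.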

==snippet vdsm.log 1==
Thread-75::DEBUG::2015-06-11
13:25:03,496::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = [^/dev/mapper/]
ignore_sus
pended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=1 filter = [ '\''r|.*|'\'' ] }  global {
 locking_type=1  prioritise_write_locks=1
wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o uuid,name,attr,size,
free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
9d551570-ef74-45b7-ba86-a908f1231ca8 (cwd None)
Thread-75::DEBUG::2015-06-11
13:25:03,825::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: err = '
 Volume group 9d551570-ef74-45b7-ba86-a908f1231ca8 not found\n  Skipping
volum
e group 9d551570-ef74-45b7-ba86-a908f1231ca8\n'; rc = 5
Thread-75::WARNING::2015-06-11
13:25:03,828::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
 Volume group 9d551570-ef74-45b7-ba86-a908f1231ca8 not found', '  Skipp
ing volume group 9d551570-ef74-45b7-ba86-a908f1231ca8']
==

==snippet vdsm.log 2==
storageRefresh::DEBUG::2015-06-11
13:37:13,602::lvm::292::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm lvs --config ' devices { preferred_names = [^/dev/mapper/]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [ a|^/dev/eql/ovirt.*| ] filter =
[ '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1
 wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days
= 0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None)
storageRefresh::DEBUG::2015-06-11
13:37:13,978::lvm::292::Storage.Misc.excCmd::(cmd) SUCCESS: err = '
 WARNING: Ignoring duplicate config node: filter (seeking filter)\n  Found
duplicate PV FeScSs3Umv5wXtZCRvauf17sLaeQnS7e: using
/dev/mapper/eql-0-af1ff6-8865608d7-3c400636c28555dd_b not
/dev/mapper/eql-0-af1ff6-8865608d7-3c400636c28555dd_a\n  Found duplicate PV
FeScSs3Umv5wXtZCRvauf17sLaeQnS7e: using
/dev/mapper/eql-0-af1ff6-8865608d7-3c400636c28555dd-ovirt-storage02 not
/dev/mapper/eql-0-af1ff6-8865608d7-3c400636c28555dd_b\n  Found duplicate PV
hyyv4hWZiKT0UDdoDEyPZ5XrHc31hlTs: using
/dev/mapper/eql-0-af1ff6-3cb5608d7-c1a0063645b5559b_b not
/dev/mapper/eql-0-af1ff6-3cb5608d7-c1a0063645b5559b_a\n  Found duplicate PV
hyyv4hWZiKT0UDdoDEyPZ5XrHc31hlTs: using
/dev/mapper/eql-0-af1ff6-3cb5608d7-c1a0063645b5559b-ovirt-storage01 not
/dev/mapper/eql-0-af1ff6-3cb5608d7-c1a0063645b5559b_b\n'; rc = 0
==

# rpm -qa |grep vdsm
vdsm-jsonrpc-4.16.14-0.el6.noarch
vdsm-xmlrpc-4.16.14-0.el6.noarch
vdsm-python-zombiereaper-4.16.14-0.el6.noarch
vdsm-python-4.16.14-0.el6.noarch
vdsm-cli-4.16.14-0.el6.noarch
vdsm-4.16.14-0.el6.x86_64
vdsm-yajsonrpc-4.16.14-0.el6.noarch

Thanks!