[ovirt-users] Ovf from vm.initialization.configuration.data

2018-07-20 Thread Carlos Rodrigues
Hello,

I'm working on a script to back up and restore oVirt VMs.

I saw some examples in the SDK (https://github.com/oVirt/ovirt-engine-sdk
/tree/master/sdk/examples) for backing up and restoring VMs, but for the
VM's OVF I'm using vm.initialization.configuration.data, and that OVF
keeps the snapshot information and has no disk ID references.

I would like to know if there is some method to get the OVF with the
disk IDs?
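In the meantime, whatever disk identifiers are present in an OVF envelope can be pulled out of the XML you already get from vm.initialization.configuration.data. A minimal sketch; the element and attribute names follow the standard DMTF OVF namespace, and the sample envelope below is a hypothetical, abbreviated fragment for illustration only — whether your OVF actually contains <Disk> entries depends on which OVF variant the engine returned:

```python
import xml.etree.ElementTree as ET

# DMTF OVF envelope namespace used by oVirt's OVF files.
OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

def disk_ids_from_ovf(ovf_xml):
    """Return the ovf:diskId values of all <Disk> elements in an OVF string."""
    root = ET.fromstring(ovf_xml)
    ids = []
    for el in root.iter():
        if el.tag.split("}")[-1] == "Disk":  # match with or without namespace
            disk_id = el.get("{%s}diskId" % OVF_NS) or el.get("diskId")
            if disk_id:
                ids.append(disk_id)
    return ids

# Hypothetical, abbreviated OVF fragment for illustration:
sample = """<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <Section>
    <Disk ovf:diskId="11111111-aaaa" ovf:size="10"/>
    <Disk ovf:diskId="22222222-bbbb" ovf:size="20"/>
  </Section>
</ovf:Envelope>"""
print(disk_ids_from_ovf(sample))  # → ['11111111-aaaa', '22222222-bbbb']
```

If the OVF you get really has no disk elements at all, another route is to record the disk IDs separately from the VM's disk attachments and store them next to the OVF in your backup.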

Regards,

-- 
Carlos Rodrigues

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JMUMU3TA5KNFQVGFMQU4SKNFHVFP5YUJ/


[ovirt-users] Re: oVirt Metrics

2018-06-15 Thread Carlos Rodrigues
This is the result of executing the following commands:
[root@openshift-ied ~]# oc project kube-service-catalog
Now using project "kube-service-catalog" on server "https://openshift-ied.install.etux:8443".
[root@openshift-ied ~]# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
apiserver-h6rg4            1/1       Running   0          1h
controller-manager-d8vkq   1/1       Running   5          3h
[root@openshift-ied ~]# oc describe pod apiserver-h6rg4
Name:           apiserver-h6rg4
Namespace:      kube-service-catalog
Node:           openshift-ied.install.etux/10.10.4.248
Start Time:     Fri, 15 Jun 2018 14:16:50 +0100
Labels:         app=apiserver
                controller-revision-hash=296530938
                pod-template-generation=2
Annotations:    ca_hash=63a4fb42b7ebe23182afbfd0146c520b4d0bb4cd
                openshift.io/scc=hostmount-anyuid
Status:         Running
IP:             10.128.0.20
Controlled By:  DaemonSet/apiserver
Containers:
  apiserver:
    Container ID:  docker://1eb20cb1b8235c72cb8016a3587a087ae2f954834ec4d259d71bf10e5b25d034
    Image:         docker.io/openshift/origin-service-catalog:v3.9.0
    Image ID:      docker-pullable://docker.io/openshift/origin-service-catalog@sha256:4c8fa186fce466c8b35afbbd715207d47369cb92b6710faa4a707fb038a5
    Port:          6443/TCP
    Command:       /usr/bin/service-catalog
    Args:
      apiserver
      --storage-type
      etcd
      --secure-port
      6443
      --etcd-servers
      https://openshift-ied.install.etux:23799
      --etcd-cafile
      /etc/origin/master/master.etcd-ca.crt
      --etcd-certfile
      /etc/origin/master/master.etcd-client.crt
      --etcd-keyfile
      /etc/origin/master/master.etcd-client.key
      -v
      3
      --cors-allowed-origins
      localhost
      --admission-control
      KubernetesNamespaceLifecycle,DefaultServicePlan,ServiceBindingsLifecycle,ServicePlanChangeValidator,BrokerAuthSarCheck
      --feature-gates
      OriginatingIdentity=true
    State:          Running
      Started:      Fri, 15 Jun 2018 14:16:58 +0100
    Ready:          True
    Restart Count:  0
    Environment:
    Mounts:
      /etc/origin/master from etcd-host-cert (ro)
      /var/run/kubernetes-service-catalog from apiserver-ssl (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from service-catalog-apiserver-token-ptkqz (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  apiserver-ssl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  apiserver-ssl
    Optional:    false
  etcd-host-cert:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/origin/master
    HostPathType:
  data-dir:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  service-catalog-apiserver-token-ptkqz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  service-catalog-apiserver-token-ptkqz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  node-role.kubernetes.io/master=true
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason                 Age               From                                 Message
  ----     ------                 ---               ----                                 -------
  Normal   SuccessfulMountVolume  1h                kubelet, openshift-ied.install.etux  MountVolume.SetUp succeeded for volume "etcd-host-cert"
  Normal   SuccessfulMountVolume  1h                kubelet, openshift-ied.install.etux  MountVolume.SetUp succeeded for volume "data-dir"
  Normal   SuccessfulMountVolume  1h                kubelet, openshift-ied.install.etux  MountVolume.SetUp succeeded for volume "service-catalog-apiserver-token-ptkqz"
  Normal   SuccessfulMountVolume  1h                kubelet, openshift-ied.install.etux  MountVolume.SetUp succeeded for volume "apiserver-ssl"
  Normal   Pulled                 1h                kubelet, openshift-ied.install.etux  Container image "docker.io/openshift/origin-service-catalog:v3.9.0" already present on machine
  Normal   Created                1h                kubelet, openshift-ied.install.etux  Created container
  Normal   Started                1h                kubelet, openshift-ied.install.etux  Started container
  Warning  DNSConfigForming       4m (x62 over 1h)  kubelet, openshift-ied.install.etux  Search Line limits were exceeded, some search paths have been omitted, the applied search line is: kube-service-catalog.svc.cluster.local svc.cluster.local cluster.local install.etux bofh.etux dmz.etux
[root@openshift-ied ~]# oc describe apiserver
the server doesn't have a resource type "apiserver"

Regards,
Carlos Rod

[ovirt-users] Re: oVirt Metrics

2018-06-15 Thread Carlos Rodrigues
Hi,

I'm behind a proxy. After configuring Docker to use the proxy, the
following command ran well:

sudo docker pull docker.io/openshift/origin-pod:v3.9.0

Thank you.

Regards,
Carlos Rodrigues
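For the record, the usual way to point Docker at a proxy on a systemd host is a drop-in unit; a sketch, where the proxy host, port, and file path follow the common convention and the proxy URL is a placeholder:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# (proxy host/port below are placeholders -- substitute your own)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After writing the file, `systemctl daemon-reload` followed by `systemctl restart docker` makes the daemon pick it up.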

On Thu, 2018-06-14 at 13:43 -0600, Rich Megginson wrote:
> This is a different error than the one described in the links below:
> 
>  "5m  1h   128 
> webconsole-84466b9d97-s4x28.153776d1bf88b3a4 Pod   
> Warning   FailedCreatePodSandBox   kubelet, openshift-
> ied.install.etux   
> Failed create pod sandbox: rpc error: code = Unknown desc = failed 
> pulling image \"docker.io/openshift/origin-pod:v3.9.0\": Get 
> https://registry-1.docker.io/v2/: net/http: request canceled while 
> waiting for connection (Client.Timeout exceeded while awaiting
> headers)"
> 
> Is it possible that there is some sort of networking issue that you 
> cannot access https://registry-1.docker.io?
> 
> from the machine - can you do
> 
> curl -vs https://registry-1.docker.io
> 
> can you do
> 
> sudo docker pull docker.io/openshift/origin-pod:v3.9.0
> 
> ?
> 
> On 06/14/2018 03:18 AM, Carlos Rodrigues wrote:
> > Hi, i still get the same error as you can see in attachment.
> > 
> > I also send in attachment the ansible log result from run the
> > following
> > command:
> > 
> > 
> > ANSIBLE_LOG_PATH=/tmp/ansible.log ansible-playbook -vvv -e
> > @/root/vars.yaml -i /root/ansible-inventory-origin-39-aio
> > playbooks/deploy_cluster.yml
> > 
> > 
> > Regards,
> > Carlos Rodrigues
> > 
> > 
> > On Tue, 2018-06-12 at 15:11 +0100, Carlos Rodrigues wrote:
> > > Thank you Rich, we'll try this workaround and tell you later.
> > > 
> > > Regards,
> > > Carlos Rodrigues
> > > 
> > > On Tue, 2018-06-12 at 07:54 -0600, Rich Megginson wrote:
> > > > Sorry, did not mean to send an internal link to an external
> > > > list/address.
> > > > 
> > > > On 06/12/2018 07:52 AM, Rich Megginson wrote:
> > > > > http://post-office.corp.redhat.com/archives/aos-devel/2018-Ju
> > > > > ne/m
> > > > > sg
> > > > > 00195.html
> > > > > 
> > > > > 
> > > > > "
> > > > > It smells like https://access.redhat.com/solutions/3480921 /
> > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1583500 to me.
> > > > > "
> > > > > 
> > > > > I think the workaround is to add
> > > > > 
> > > > > oreg_url=registry.access.redhat.com/openshift3/ose-
> > > > > ${component}:${version}
> > > > > 
> > > > > 
> > > > > to your inventory for OCP external and
> > > > > 
> > > > > oreg_url=brew-pulp-
> > > > > docker01.web.prod.ext.phx2.redhat.com:/openshift3/ose-
> > > > > ${component}:${version}
> > > > > 
> > > > > 
> > > > > for OCP internal and
> > > > > 
> > > > > oreg_url=docker.io/openshift/origin-${component}:${version}
> > > > > 
> > > > > for Origin
> > > > > 
> > > > > On 06/12/2018 02:47 AM, Shirly Radco wrote:
> > > > > > 
> > > > > > -- 
> > > > > > 
> > > > > > SHIRLY RADCO
> > > > > > 
> > > > > > BI SeNIOR SOFTWARE ENGINEER
> > > > > > 
> > > > > > Red Hat Israel <https://www.redhat.com/>
> > > > > > 
> > > > > > <https://red.ht/sig>
> > > > > > TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> > > > > > 
> > > > > > 
> > > > > > On Tue, Jun 12, 2018 at 11:43 AM, Shirly Radco
> > > > > > <mailto:sra...@redhat.com> wrote:
> > > > > > 
> > > > > >  Hi Rich,
> > > > > > 
> > > > > >  Are you families with this OpenShift installation
> > > > > > issue?
> > > > > > 
> > > > > > familiar*
> > > > > > 
> > > > > > 
> > > > > >  Best,
> > > > > > 
> > > > > >  --
> > > > > > 
> > > > > >  SHIRLY RADCO
> > > >

[ovirt-users] Re: oVirt Metrics

2018-06-12 Thread Carlos Rodrigues
Thank you Rich, we'll try this workaround and tell you later.

Regards,
Carlos Rodrigues
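For anyone following along, the Origin variant of Rich's workaround goes into the openshift-ansible inventory; a sketch (the inventory filename comes from earlier in this thread, and the variable line is taken verbatim from Rich's message):

```ini
# /root/ansible-inventory-origin-39-aio (fragment)
[OSEv3:vars]
oreg_url=docker.io/openshift/origin-${component}:${version}
```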

On Tue, 2018-06-12 at 07:54 -0600, Rich Megginson wrote:
> Sorry, did not mean to send an internal link to an external
> list/address.
> 
> On 06/12/2018 07:52 AM, Rich Megginson wrote:
> > http://post-office.corp.redhat.com/archives/aos-devel/2018-June/msg
> > 00195.html 
> > 
> > 
> > "
> > It smells like https://access.redhat.com/solutions/3480921 /
> > https://bugzilla.redhat.com/show_bug.cgi?id=1583500 to me.
> > "
> > 
> > I think the workaround is to add
> > 
> > oreg_url=registry.access.redhat.com/openshift3/ose-
> > ${component}:${version} 
> > 
> > 
> > to your inventory for OCP external and
> > 
> > oreg_url=brew-pulp-
> > docker01.web.prod.ext.phx2.redhat.com:/openshift3/ose-
> > ${component}:${version} 
> > 
> > 
> > for OCP internal and
> > 
> > oreg_url=docker.io/openshift/origin-${component}:${version}
> > 
> > for Origin
> > 
> > On 06/12/2018 02:47 AM, Shirly Radco wrote:
> > > 
> > > 
> > > -- 
> > > 
> > > SHIRLY RADCO
> > > 
> > > BI SeNIOR SOFTWARE ENGINEER
> > > 
> > > Red Hat Israel <https://www.redhat.com/>
> > > 
> > > <https://red.ht/sig>
> > > TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> > > 
> > > 
> > > On Tue, Jun 12, 2018 at 11:43 AM, Shirly Radco
> > > <mailto:sra...@redhat.com> wrote:
> > > 
> > > Hi Rich,
> > > 
> > > Are you families with this OpenShift installation issue?
> > > 
> > > familiar*
> > > 
> > > 
> > > Best,
> > > 
> > > --
> > > 
> > > SHIRLY RADCO
> > > 
> > > BI SeNIOR SOFTWARE ENGINEER
> > > 
> > > Red Hat Israel <https://www.redhat.com/>
> > > 
> > > <https://red.ht/sig>
> > > TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> > > 
> > > 
> > > On Mon, Jun 11, 2018 at 4:20 PM, Carlos Rodrigues
> > > mailto:c...@eurotux.com>> wrote:
> > > 
> > > Hi,
> > > 
> > > I'm trying to install oVirt Metrics following the installation
> > > guide (https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/),
> > > but I have some issues running the deploy cluster playbook from
> > > https://www.ovirt.org/develop/release-management/features/metrics/setting-up-viaq-logging/
> > > 
> > > cd /usr/share/ansible/openshift-ansible
> > > # (or wherever you cloned the git repo if using git)
> > > ANSIBLE_LOG_PATH=/tmp/ansible.log ansible-playbook -vvv
> > > -e
> > > @/root/vars.yaml -i /root/ansible-inventory-origin-39-aio
> > > playbooks/deploy_cluster.yml
> > > 
> > > It fails on the Web console installation:
> > > 
> > > 2018-06-06 19:48:24,020 p=17586 u=root | [DEPRECATION
> > > WARNING]: Using
> > > tests as filters is deprecated. Instead of using
> > > `result|version_compare` instead use `result is
> > > version_compare`. This
> > > feature
> > >  will be removed in version 2.9.
> > > Deprecation warnings can be disabled by setting
> > > deprecation_warnings=False in ansible.cfg.
> > > 2018-06-06 19:48:24,135 p=17586 u=root |  Using module
> > > file
> > > /usr/lib/python2.7/site-
> > > packages/ansible/modules/commands/command.py
> > > 2018-06-06 19:48:27,093 p=17586 u=root |  fatal:
> > > [localhost]:
> > > FAILED!
> > > => {
> > > "changed": true,
> > > "cmd": [
> > > "oc",
> > > "logs",
> > > "deployment/webconsole",
> > > "--tail=50",
> > > "--config=/tmp/console

[ovirt-users] Re: oVirt Metrics

2018-06-11 Thread Carlos Rodrigues
Hi,

I'm trying to install oVirt Metrics following the installation guide
(https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/),
but I have some issues running the deploy cluster playbook from
https://www.ovirt.org/develop/release-management/features/metrics/setting-up-viaq-logging/

cd /usr/share/ansible/openshift-ansible
# (or wherever you cloned the git repo if using git)
ANSIBLE_LOG_PATH=/tmp/ansible.log ansible-playbook -vvv -e
@/root/vars.yaml -i /root/ansible-inventory-origin-39-aio
playbooks/deploy_cluster.yml

It fails on the Web console installation:

2018-06-06 19:48:24,020 p=17586 u=root |  [DEPRECATION WARNING]: Using
tests as filters is deprecated. Instead of using
`result|version_compare` instead use `result is version_compare`. This
feature
 will be removed in version 2.9. 
Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
2018-06-06 19:48:24,135 p=17586 u=root |  Using module file
/usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
2018-06-06 19:48:27,093 p=17586 u=root |  fatal: [localhost]: FAILED!
=> {
"changed": true, 
"cmd": [
"oc", 
"logs", 
"deployment/webconsole", 
"--tail=50", 
"--config=/tmp/console-ansible-C8XDuW/admin.kubeconfig", 
"-n", 
"openshift-web-console"
], 
"delta": "0:00:01.567694", 
"end": "2018-06-06 19:48:26.706407", 
"invocation": {
"module_args": {
"_raw_params": "oc logs deployment/webconsole --tail=50 --
config=/tmp/console-ansible-C8XDuW/admin.kubeconfig -n openshift-web-
console", 
"_uses_shell": false, 
"chdir": null, 
"creates": null, 
"executable": null, 
"removes": null, 
"stdin": null, 
"warn": true
}

}, 
"msg": "non-zero return code", 
"rc": 1, 
"start": "2018-06-06 19:48:25.138713", 
"stderr": "Error from server (BadRequest): container \"webconsole\"
in pod \"webconsole-84466b9d97-s4x28\" is waiting to start:
ContainerCreating", 
"stderr_lines": [
"Error from server (BadRequest): container \"webconsole\" in
pod \"webconsole-84466b9d97-s4x28\" is waiting to start:
ContainerCreating"
], 
"stdout": "", 
"stdout_lines": []
}
2018-06-06 19:48:27,097 p=17586 u=root |  ...ignoring
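Incidentally, the deprecation noise in the log above can be silenced exactly as the warning itself suggests:

```ini
# ansible.cfg
[defaults]
deprecation_warnings = False
```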

Regards,
Carlos Rodrigues
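The failing task itself is just `oc logs` racing a pod that is still in ContainerCreating. Standard `oc` commands along these lines (not part of the playbook; the pod name is the one from the error message) usually show why the sandbox or image pull is stuck:

```shell
# Why is the webconsole pod stuck in ContainerCreating?
oc get pods -n openshift-web-console
oc describe pod webconsole-84466b9d97-s4x28 -n openshift-web-console
oc get events -n openshift-web-console
```

The Events section of the describe output normally names the blocking condition, e.g. a failed image pull.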

On Mon, 2018-06-11 at 13:30 +0300, Shirly Radco wrote:
> Dear users,
> 
> I would love to get some feedback if someone has tried to install and
> use the oVirt metrics store, released in 4.2, for collecting metrics
> and logs, based on Elasticsearch, Kibana, Collectd and Fluentd on top
> of OpenShift.
> https://www.ovirt.org/develop/release-management/features/metrics/met
> rics-store/
> 
> How did the installation go? Are you actively using it?
> And any other feedback would be much appreciated.
> 
> Best regards, 
> --
> SHIRLY RADCO
> BI SENIOR SOFTWARE ENGINEER
> Red Hat Israel
>   TRIED. TESTED. TRUSTED.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/communit
> y-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/3G2M3Q35UQZLOHDRAEBMX2INPDAQCOHO/
-- 
Carlos Rodrigues

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/43PIG73CSXRPYCLETWLYEWAIURRM3R4Q/


Re: [ovirt-users] Clear name_server table entries

2018-02-08 Thread Carlos Rodrigues
Hi,

Many thanks.

Cheers
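Michael's suggestion below (delete the duplicated rows, keep the live ones) can be sketched in SQL. The comparison column is an assumption about the table layout — check the real schema with \d name_server first, and back up the engine database before running anything:

```sql
-- Sketch only: drop duplicate name_server rows, keeping one copy of each.
-- The column compared in the USING join is an assumed schema detail.
BEGIN;
DELETE FROM name_server a
  USING name_server b
  WHERE a.ctid < b.ctid
    AND a.address = b.address;
SELECT count(*) FROM name_server;  -- inspect, then COMMIT or ROLLBACK
```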


On Thu, 2018-02-08 at 10:54 +0200, Michael Burman wrote:
> Hi
> 
> Yes you may delete the entries, make sure you are not deleting
> name_server of hosts that already running in your engine(if you have
> such).
> Deleting the multiple entries in the DB + removing the duplicated
> name servers in /etc/resolv.conf should work around this bug, which
> was decided to close as WONTFIX. Just re-add the the server to
> engine. 
> 
> Cheers)
> 
> On Wed, Feb 7, 2018 at 7:24 PM, Carlos Rodrigues <c...@eurotux.com>
> wrote:
> > Hi,
> > 
> > I'm getting the following problem:
> > 
> > https://bugzilla.redhat.com/show_bug.cgi?id=1530944#c3
> > 
> > and after fixing the DNS entries in /etc/resolv.conf on the host, I
> > have too many entries in the name_server table:
> > 
> > engine=# select count(*) from name_server;
> >  count
> > ---
> >  31401
> > (1 row)
> > 
> > I would like to know if I may delete these entries?
> > 
> > Best regards,
> > 
> > --
> > Carlos Rodrigues
> > 
> > Engenheiro de Software Sénior
> > 
> > Eurotux Informática, S.A. | www.eurotux.com
> > (t) +351 253 680 300 (m) +351 911 926 110
> 
> 
> 
-- 
Carlos Rodrigues

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110


[ovirt-users] Clear name_server table entries

2018-02-07 Thread Carlos Rodrigues
Hi,

I'm getting the following problem:

https://bugzilla.redhat.com/show_bug.cgi?id=1530944#c3

and after fixing the DNS entries in /etc/resolv.conf on the host, I have
too many entries in the name_server table:

engine=# select count(*) from name_server;
 count 
---
 31401
(1 row)

I would like to know if I may delete these entries?

Best regards,

-- 
Carlos Rodrigues

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110


Re: [ovirt-users] HostedEngine with HA

2016-11-07 Thread Carlos Rodrigues
roker.log <==
Thread-1007::INFO::2016-11-07 18:17:21,407::ping::52::ping.Ping::(action) 
Successfully pinged 10.10.4.254
Thread-297080::INFO::2016-11-07 
18:17:21,622::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
 Connection established
Thread-297080::INFO::2016-11-07 
18:17:21,676::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
 Connection closed
Thread-297081::INFO::2016-11-07 
18:17:21,676::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
 Connection established
Thread-297081::INFO::2016-11-07 
18:17:21,678::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
 Connection closed
Thread-297082::INFO::2016-11-07 
18:17:21,678::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup)
 Connection established
Thread-297082::INFO::2016-11-07 
18:17:21,680::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle)
 Connection closed
Thread-1010::ERROR::2016-11-07 
18:17:23,596::cpu_load_no_engine::156::cpu_load_no_engine.EngineHealth::(update_stat_file)
 Failed to getVmStats: 'pid'
Thread-1010::INFO::2016-11-07 
18:17:23,597::cpu_load_no_engine::121::cpu_load_no_engine.EngineHealth::(calculate_load)
 System load total=0.0732, engine=0., non-engine=0.0732

Can you help me understand what is going wrong?

Regards,
Carlos Rodrigues
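When chasing this kind of HA behaviour, the state is usually easier to read from the hosted-engine tooling than from the raw broker.log; these are the standard commands (run on a host, not the engine VM):

```shell
# HA state as each host sees it (score, engine status, lockspace)
hosted-engine --vm-status
# The two HA daemons whose logs are tailed above
systemctl status ovirt-ha-agent ovirt-ha-broker
# Follow both HA logs side by side
tail -f /var/log/ovirt-hosted-engine-ha/agent.log \
        /var/log/ovirt-hosted-engine-ha/broker.log
```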

On Tue, 2016-08-23 at 14:29 +0100, Carlos Rodrigues wrote:
> On Tue, 2016-08-23 at 14:16 +0200, Simone Tiraboschi wrote:
> > On Mon, Aug 22, 2016 at 1:10 PM, Carlos Rodrigues <c...@eurotux.com>
> > wrote:
> > > 
> > > On Fri, 2016-08-19 at 11:50 +0100, Carlos Rodrigues wrote:
> > > > 
> > > > On Fri, 2016-08-19 at 12:24 +0200, Simone Tiraboschi wrote:
> > > > > 
> > > > > 
> > > > > On Fri, Aug 19, 2016 at 12:07 PM, Carlos Rodrigues
> > > > > <cmar@eurotux.com> wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues
> > > > > > > > <cmar@eurotux.com> wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > After night, the OVF_STORE it was created:
> > > > > > > > > 
> > > > > > > > 
> > > > > > > > It's quite strange that it got so long but now it looks
> > > > > > > > fine.
> > > > > > > > 
> > > > > > > > If the ISO_DOMAIN that I see in your screenshot is served
> > > > > > > > by the engine VM itself, I suggest to remove it and export
> > > > > > > > from an external server.
> > > > > > > > Serving the ISO storage domain from the engine VM itself
> > > > > > > > is not a good idea since when the engine VM is down you can
> > > > > > > > experiment

Re: [ovirt-users] HostedEngine with HA

2016-08-23 Thread Carlos Rodrigues
On Tue, 2016-08-23 at 14:16 +0200, Simone Tiraboschi wrote:
> On Mon, Aug 22, 2016 at 1:10 PM, Carlos Rodrigues <c...@eurotux.com>
> wrote:
> > 
> > On Fri, 2016-08-19 at 11:50 +0100, Carlos Rodrigues wrote:
> > > 
> > > On Fri, 2016-08-19 at 12:24 +0200, Simone Tiraboschi wrote:
> > > > 
> > > > 
> > > > On Fri, Aug 19, 2016 at 12:07 PM, Carlos Rodrigues
> > > > <cmar@eurotux.com> wrote:
> > > > > 
> > > > > 
> > > > > 
> > > > > On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues
> > > > > > > <cmar@eurotux.com> wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > After night, the OVF_STORE it was created:
> > > > > > > > 
> > > > > > > 
> > > > > > > It's quite strange that it got so long but now it looks
> > > > > > > fine.
> > > > > > > 
> > > > > > > If the ISO_DOMAIN that I see in your screenshot is served
> > > > > > > by
> > > > > > > the
> > > > > > > engine VM itself, I suggest to remove it and export from
> > > > > > > an
> > > > > > > external
> > > > > > > server.
> > > > > > > Serving the ISO storage domain from the engine VM itself
> > > > > > > is
> > > > > > > not
> > > > > > > a
> > > > > > > good idea since when the engine VM is down you can
> > > > > > > experiment
> > > > > > > long
> > > > > > > delays before getting the engine VM restarted due to the
> > > > > > > unavailable
> > > > > > > storage domain.
> > > > > > 
> > > > > > Ok, thank you for advice.
> > > > > > 
> > > > > > Now, apparently is all ok. I'll do more tests with HA and
> > > > > > any
> > > > > > issue
> > > > > > i'll tell you.
> > > > > > 
> > > > > > Thank you for your support.
> > > > > > 
> > > > > > Regards,
> > > > > > Carlos Rodrigues
> > > > > > 
> > > > > 
> > > > > I shutdown the network of host with engine VM and i expected
> > > > > that
> > > > > other
> > > > > host fence the host and start engine VM but i don't see any
> > > > > fence
> > > > > action and the "free" host keep trying to start VM but get
> > > > > and
> > > > > error of
> > > > > sanlock
> > > > > 
> > > > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel:
> > > > > qemu-
> > > > > kvm:
> > > > > sending ioctl 5326 to a partition!
> > > > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel:
> > > > > qemu-
> > > > > kvm:
> > > > > sending ioctl 80200204 to a partition!
> > > > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7867]:
> > > > > 1
> > > > > guest
> > > > > now active
> > > > > Aug 19 11:03:03 ied-blade11.install.eurotux.local
> > > > > sanlock[884]:
> > > > > 2016-
> > > > > 08-19 11:03:03+0100 1023 [903]: r3 paxos_acquire owner 1
> > > > > delta 1
> > > > > 9
> > > > > 245502 alive
> > > > > Aug 19 11:03:03 ied-blade11.install.eurotux.local
> > > > > sanlock[884]:
> > > > > 2016-
> > > > > 0

Re: [ovirt-users] HostedEngine with HA

2016-08-22 Thread Carlos Rodrigues
On Fri, 2016-08-19 at 11:50 +0100, Carlos Rodrigues wrote:
> On Fri, 2016-08-19 at 12:24 +0200, Simone Tiraboschi wrote:
> > 
> > On Fri, Aug 19, 2016 at 12:07 PM, Carlos Rodrigues <c...@eurotux.com>
> > wrote:
> > > 
> > > 
> > > On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
> > > > 
> > > > 
> > > > On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues
> > > > > <cmar@eurotux.com> wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > After night, the OVF_STORE it was created:
> > > > > > 
> > > > > 
> > > > > It's quite strange that it got so long but now it looks fine.
> > > > > 
> > > > > If the ISO_DOMAIN that I see in your screenshot is served by
> > > > > the
> > > > > engine VM itself, I suggest to remove it and export from an
> > > > > external
> > > > > server.
> > > > > Serving the ISO storage domain from the engine VM itself is
> > > > > not
> > > > > a
> > > > > good idea since when the engine VM is down you can experiment
> > > > > long
> > > > > delays before getting the engine VM restarted due to the
> > > > > unavailable
> > > > > storage domain.
> > > > 
> > > > Ok, thank you for advice.
> > > > 
> > > > Now, apparently is all ok. I'll do more tests with HA and any
> > > > issue
> > > > i'll tell you.
> > > > 
> > > > Thank you for your support.
> > > > 
> > > > Regards,
> > > > Carlos Rodrigues
> > > > 
> > > 
> > > I shutdown the network of host with engine VM and i expected that
> > > other
> > > host fence the host and start engine VM but i don't see any fence
> > > action and the "free" host keep trying to start VM but get and
> > > error of
> > > sanlock
> > > 
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-
> > > kvm:
> > > sending ioctl 5326 to a partition!
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-
> > > kvm:
> > > sending ioctl 80200204 to a partition!
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7867]: 1
> > > guest
> > > now active
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > > 2016-
> > > 08-19 11:03:03+0100 1023 [903]: r3 paxos_acquire owner 1 delta 1
> > > 9
> > > 245502 alive
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > > 2016-
> > > 08-19 11:03:03+0100 1023 [903]: r3 acquire_token held error -243
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > > 2016-
> > > 08-19 11:03:03+0100 1023 [903]: r3 cmd_acquire 2,9,7862
> > > acquire_token
> > > -243 lease owned by other host
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local libvirtd[1369]:
> > > resource busy: Failed to acquire lock: error -243
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel:
> > > ovirtmgmt:
> > > port 2(vnet0) entered disabled state
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: device
> > > vnet0
> > > left promiscuous mode
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel:
> > > ovirtmgmt:
> > > port 2(vnet0) entered disabled state
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7885]: 0
> > > guests
> > > now active
> > > Aug 19 11:03:03 ied-blade11.install.eurotux.local systemd-
> > > machined[7863]: Machine qemu-4-HostedEngine terminated.
> > 
> > Maybe you hit this one:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1322849
> > 
> > 
> > Can you please check it as described in comment 28 and eventually
> > apply the workaround in comment 18?
> > 
> 
> Apparently the host-id is OK. I don't need to apply the workaround.
> 

Any other suggestions?
I see that the second host doesn't fence the failed host, and maybe this
causes the lock of the hosted

Re: [ovirt-users] HostedEngine with HA

2016-08-19 Thread Carlos Rodrigues
On Fri, 2016-08-19 at 12:24 +0200, Simone Tiraboschi wrote:
> On Fri, Aug 19, 2016 at 12:07 PM, Carlos Rodrigues <c...@eurotux.com>
> wrote:
> > 
> > On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
> > > 
> > > On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> > > > 
> > > > 
> > > > 
> > > > 
> > > > On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues
> > > > <cmar@eurotux.com> wrote:
> > > > > 
> > > > > 
> > > > > After night, the OVF_STORE it was created:
> > > > > 
> > > > 
> > > > It's quite strange that it got so long but now it looks fine.
> > > > 
> > > > If the ISO_DOMAIN that I see in your screenshot is served by
> > > > the
> > > > engine VM itself, I suggest to remove it and export from an
> > > > external
> > > > server.
> > > > Serving the ISO storage domain from the engine VM itself is not
> > > > a
> > > > good idea since when the engine VM is down you can experiment
> > > > long
> > > > delays before getting the engine VM restarted due to the
> > > > unavailable
> > > > storage domain.
> > > 
> > > Ok, thank you for advice.
> > > 
> > > Now, apparently is all ok. I'll do more tests with HA and any
> > > issue
> > > i'll tell you.
> > > 
> > > Thank you for your support.
> > > 
> > > Regards,
> > > Carlos Rodrigues
> > > 
> > 
> > I shutdown the network of host with engine VM and i expected that
> > other
> > host fence the host and start engine VM but i don't see any fence
> > action and the "free" host keep trying to start VM but get and
> > error of
> > sanlock
> > 
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
> > sending ioctl 5326 to a partition!
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
> > sending ioctl 80200204 to a partition!
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7867]: 1
> > guest
> > now active
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > 2016-
> > 08-19 11:03:03+0100 1023 [903]: r3 paxos_acquire owner 1 delta 1 9
> > 245502 alive
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > 2016-
> > 08-19 11:03:03+0100 1023 [903]: r3 acquire_token held error -243
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]:
> > 2016-
> > 08-19 11:03:03+0100 1023 [903]: r3 cmd_acquire 2,9,7862
> > acquire_token
> > -243 lease owned by other host
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local libvirtd[1369]:
> > resource busy: Failed to acquire lock: error -243
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel:
> > ovirtmgmt:
> > port 2(vnet0) entered disabled state
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: device
> > vnet0
> > left promiscuous mode
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel:
> > ovirtmgmt:
> > port 2(vnet0) entered disabled state
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7885]: 0
> > guests
> > now active
> > Aug 19 11:03:03 ied-blade11.install.eurotux.local systemd-
> > machined[7863]: Machine qemu-4-HostedEngine terminated.
> 
> Maybe you hit this one:
> https://bugzilla.redhat.com/show_bug.cgi?id=1322849
> 
> 
> Can you please check it as described in comment 28 and, if needed,
> apply the workaround from comment 18?
> 

Apparently the host-id is OK, so I don't need to apply the workaround.
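For reference, the workaround in that bug concerns two HA hosts ending up with the same host_id in /etc/ovirt-hosted-engine/hosted-engine.conf, which makes sanlock contend for the same lease slot. A minimal sketch of the uniqueness check, using hypothetical conf contents (a real check would read the file on each host):

```python
# Sketch: verify each HA host uses a unique host_id in its
# /etc/ovirt-hosted-engine/hosted-engine.conf (duplicate IDs cause
# sanlock lease conflicts).  The conf snippets below are illustrative.

def parse_host_id(conf_text):
    """Return the host_id value from hosted-engine.conf-style text."""
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("host_id="):
            return int(line.split("=", 1)[1])
    raise ValueError("host_id not found")

# Hypothetical per-host conf contents, keyed by host name.
confs = {
    "ied-blade13": "fqdn=engine.example.test\nhost_id=1\n",
    "ied-blade11": "fqdn=engine.example.test\nhost_id=2\n",
}

ids = {host: parse_host_id(text) for host, text in confs.items()}
assert len(set(ids.values())) == len(ids), "duplicate host_id detected!"
print(ids)
```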


> 
> 
> > 
> > > 
> > > > 
> > > > > 
> > > > > Regards,
> > > > > Carlos Rodrigues
> > > > > 
> > > > > On Fri, 2016-08-19 at 08:29 +0200, Simone Tiraboschi wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On Thu, Aug 18, 2016 at 6:38 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > > 
> > > > > > > 
> > > > > >

Re: [ovirt-users] HostedEngine with HA

2016-08-19 Thread Carlos Rodrigues
On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
> On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> > 
> > 
> > 
> > On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues <c...@eurotux.co
> > m>
> > wrote:
> > > 
> > > After night, the OVF_STORE it was created:
> > > 
> > 
> > It's quite strange that it got so long but now it looks fine.
> > 
> > If the ISO_DOMAIN that I see in your screenshot is served by the
> > engine VM itself, I suggest to remove it and export from an
> > external
> > server.
> > Serving the ISO storage domain from the engine VM itself is not a
> > good idea since when the engine VM is down you can experiment long
> > delays before getting the engine VM restarted due to the
> > unavailable
> > storage domain.
> 
> Ok, thank you for advice.
> 
> Now, apparently is all ok. I'll do more tests with HA and any issue
> i'll tell you.
> 
> Thank you for your support.
> 
> Regards,
> Carlos Rodrigues
> 

I shut down the network of the host running the engine VM and expected the
other host to fence it and start the engine VM, but I don't see any fence
action; the "free" host keeps trying to start the VM and gets a sanlock
error:

Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
sending ioctl 5326 to a partition!
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm:
sending ioctl 80200204 to a partition!
Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7867]: 1 guest
now active
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
08-19 11:03:03+0100 1023 [903]: r3 paxos_acquire owner 1 delta 1 9
245502 alive
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
08-19 11:03:03+0100 1023 [903]: r3 acquire_token held error -243
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-
08-19 11:03:03+0100 1023 [903]: r3 cmd_acquire 2,9,7862 acquire_token
-243 lease owned by other host
Aug 19 11:03:03 ied-blade11.install.eurotux.local libvirtd[1369]:
resource busy: Failed to acquire lock: error -243
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: ovirtmgmt:
port 2(vnet0) entered disabled state
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: device vnet0
left promiscuous mode
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: ovirtmgmt:
port 2(vnet0) entered disabled state
Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7885]: 0 guests
now active
Aug 19 11:03:03 ied-blade11.install.eurotux.local systemd-
machined[7863]: Machine qemu-4-HostedEngine terminated.
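For reference, sanlock error -243 in the log above means the acquire failed because the lease is still held by another live host (the log itself says "lease owned by other host"). A small sketch, assuming log lines in the format shown above, that extracts which lease resources hit the conflict:

```python
# Sketch: scan sanlock log lines for lease-acquire failures.  Error -243
# ("lease owned by other host", per the messages above) means the VM
# lease is still held elsewhere, so this host refuses to start the VM.
import re

LOG = """\
sanlock[884]: 2016-08-19 11:03:03+0100 1023 [903]: r3 acquire_token held error -243
sanlock[884]: 2016-08-19 11:03:03+0100 1023 [903]: r3 cmd_acquire 2,9,7862 acquire_token -243 lease owned by other host
"""

def lease_conflicts(log_text):
    """Return resource names whose acquisition failed with -243."""
    pat = re.compile(r"\]:\s+(r\d+)\s+\S*acquire\S*.*-243")
    return sorted({m.group(1) for line in log_text.splitlines()
                   if (m := pat.search(line))})

print(lease_conflicts(LOG))  # → ['r3']
```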


> > 
> >  
> > > 
> > > 
> > > 
> > > 
> > > Regards,
> > > Carlos Rodrigues
> > > 
> > > On Fri, 2016-08-19 at 08:29 +0200, Simone Tiraboschi wrote:
> > > > 
> > > > 
> > > > 
> > > > On Thu, Aug 18, 2016 at 6:38 PM, Carlos Rodrigues <cmar@eurotux
> > > > .c
> > > > om> wrote:
> > > > > 
> > > > > On Thu, 2016-08-18 at 17:45 +0200, Simone Tiraboschi wrote:
> > > > > > 
> > > > > > On Thu, Aug 18, 2016 at 5:43 PM, Carlos Rodrigues <cmar@eur
> > > > > > ot
> > > > > > ux.com> wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > I increase hosted_engine disk space to 160G. How do i
> > > > > > > force
> > > > > > > to create
> > > > > > > OVF_STORE.
> > > > > > 
> > > > > > I think that restarting the engine on the engine VM will
> > > > > > trigger it
> > > > > > although I'm not sure that it was a size issue.
> > > > > > 
> > > > > 
> > > > > I found to OVF_STORE on another storage domain with "Domain
> > > > > Type" "Data (Master)"
> > > > > 
> > > > > 
> > > > 
> > > > Each storage domain has its own OVF_STORE volumes; you should
> > > > get
> > > > them also on the hosted-engine storage domain.
> > > > Not really sure about how to trigger it again; adding Roy here.
> > > > 
> > > > 
> > > > 
> > > >  
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > Regards,
> > > >

Re: [ovirt-users] HostedEngine with HA

2016-08-19 Thread Carlos Rodrigues
On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
> 
> 
> On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues <c...@eurotux.com>
> wrote:
> > After night, the OVF_STORE it was created:
> > 
> 
> > It's quite strange that it took so long, but now it looks fine.
> 
> If the ISO_DOMAIN that I see in your screenshot is served by the
> engine VM itself, I suggest to remove it and export from an external
> server.
> Serving the ISO storage domain from the engine VM itself is not a
> good idea since when the engine VM is down you can experience long
> delays before getting the engine VM restarted due to the unavailable
> storage domain.

Ok, thank you for the advice.

Now everything apparently works. I'll run more tests with HA and report
any issues.

Thank you for your support.

Regards,
Carlos Rodrigues

>  
> > 
> > 
> > 
> > Regards,
> > Carlos Rodrigues
> > 
> > On Fri, 2016-08-19 at 08:29 +0200, Simone Tiraboschi wrote:
> > > 
> > > 
> > > On Thu, Aug 18, 2016 at 6:38 PM, Carlos Rodrigues <cmar@eurotux.c
> > > om> wrote:
> > > > On Thu, 2016-08-18 at 17:45 +0200, Simone Tiraboschi wrote:
> > > > > On Thu, Aug 18, 2016 at 5:43 PM, Carlos Rodrigues <cmar@eurot
> > > > > ux.com> wrote:
> > > > > > 
> > > > > > I increase hosted_engine disk space to 160G. How do i force
> > > > > > to create
> > > > > > OVF_STORE.
> > > > > 
> > > > > I think that restarting the engine on the engine VM will
> > > > > trigger it
> > > > > although I'm not sure that it was a size issue.
> > > > > 
> > > > 
> > > > I found to OVF_STORE on another storage domain with "Domain
> > > > Type" "Data (Master)"
> > > > 
> > > > 
> > > 
> > > Each storage domain has its own OVF_STORE volumes; you should get
> > > them also on the hosted-engine storage domain.
> > > Not really sure about how to trigger it again; adding Roy here.
> > > 
> > > 
> > > 
> > >  
> > > > 
> > > > 
> > > > 
> > > > > > 
> > > > > > Regards,
> > > > > > Carlos Rodrigues
> > > > > > 
> > > > > > On Thu, 2016-08-18 at 12:14 +0100, Carlos Rodrigues wrote:
> > > > > > > 
> > > > > > > On Thu, 2016-08-18 at 12:34 +0200, Simone Tiraboschi
> > > > > > > wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > On Thu, Aug 18, 2016 at 12:11 PM, Carlos Rodrigues <c...@eurotux.com> wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On Thu, 2016-08-18 at 11:53 +0200, Simone Tiraboschi
> > > > > > > > > wrote:
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > On Thu, Aug 18, 2016 at 11:50 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > On Thu, 2016-08-18 at 11:42 +0200, Simone
> > > > > > > > > > > Tiraboschi wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > On Thu, Aug 18, 2016 at 11:25 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > >

Re: [ovirt-users] HostedEngine with HA

2016-08-18 Thread Carlos Rodrigues
On Thu, 2016-08-18 at 17:45 +0200, Simone Tiraboschi wrote:
> On Thu, Aug 18, 2016 at 5:43 PM, Carlos Rodrigues <c...@eurotux.com> wrote:
> >
> > I increase hosted_engine disk space to 160G. How do i force to create
> > OVF_STORE.
> 
> I think that restarting the engine on the engine VM will trigger it
> although I'm not sure that it was a size issue.
> 

I found the OVF_STORE on another storage domain with "Domain Type" "Data
(Master)".




> > 
> > Regards,
> > Carlos Rodrigues
> > 
> > On Thu, 2016-08-18 at 12:14 +0100, Carlos Rodrigues wrote:
> > > 
> > > On Thu, 2016-08-18 at 12:34 +0200, Simone Tiraboschi wrote:
> > > > 
> > > > 
> > > > On Thu, Aug 18, 2016 at 12:11 PM, Carlos Rodrigues <c...@eurotux.com> wrote:
> > > > > 
> > > > > 
> > > > > 
> > > > > On Thu, 2016-08-18 at 11:53 +0200, Simone Tiraboschi wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On Thu, Aug 18, 2016 at 11:50 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > On Thu, 2016-08-18 at 11:42 +0200, Simone Tiraboschi wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > On Thu, Aug 18, 2016 at 11:25 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On Thu, 2016-08-18 at 11:04 +0200, Simone Tiraboschi
> > > > > > > > > wrote:
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > On Thu, Aug 18, 2016 at 10:36 AM, Carlos Rodrigues
> > > > > > > > > > <cmar@
> > > > > > > > > > euro
> > > > > > > > > > tux.com>
> > > > > > > > > > wrote:
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > On Thu, 2016-08-18 at 10:27 +0200, Simone Tiraboschi wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > On Thu, Aug 18, 2016 at 10:22 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > 

Re: [ovirt-users] HostedEngine with HA

2016-08-18 Thread Carlos Rodrigues
I increased the hosted_engine disk space to 160G. How do I force it to
create the OVF_STORE?

Regards,
Carlos Rodrigues

On Thu, 2016-08-18 at 12:14 +0100, Carlos Rodrigues wrote:
> On Thu, 2016-08-18 at 12:34 +0200, Simone Tiraboschi wrote:
> > 
> > On Thu, Aug 18, 2016 at 12:11 PM, Carlos Rodrigues <c...@eurotux.co
> > m>
> > wrote:
> > > 
> > > 
> > > On Thu, 2016-08-18 at 11:53 +0200, Simone Tiraboschi wrote:
> > > > 
> > > > 
> > > > 
> > > > 
> > > > On Thu, Aug 18, 2016 at 11:50 AM, Carlos Rodrigues <cmar@eurotu
> > > > x.
> > > > com>
> > > > wrote:
> > > > > 
> > > > > 
> > > > > On Thu, 2016-08-18 at 11:42 +0200, Simone Tiraboschi wrote:
> > > > > > 
> > > > > > 
> > > > > > On Thu, Aug 18, 2016 at 11:25 AM, Carlos Rodrigues <cmar@eu
> > > > > > ro
> > > > > > tux.
> > > > > > com> wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > On Thu, 2016-08-18 at 11:04 +0200, Simone Tiraboschi
> > > > > > > wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > On Thu, Aug 18, 2016 at 10:36 AM, Carlos Rodrigues
> > > > > > > > <cmar@
> > > > > > > > euro
> > > > > > > > tux.com>
> > > > > > > > wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On Thu, 2016-08-18 at 10:27 +0200, Simone Tiraboschi
> > > > > > > > > wrote:
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > On Thu, Aug 18, 2016 at 10:22 AM, Carlos Rodrigues
> > > > > > > > > > <cmar@
> > > > > > > > > > eurotux.
> > > > > > > > > > com>
> > > > > > > > > > wrote:
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > On Thu, 2016-08-18 at 08:54 +0200, Simone
> > > > > > > > > > > Tiraboschi
> > > > > > > > > > > wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > On Tue, Aug 16, 2016 at 12:53 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > On Sun, 2016-08-14 at 14:22 +0300, Roy Golan
> > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > 
> >

Re: [ovirt-users] HostedEngine with HA

2016-08-18 Thread Carlos Rodrigues
On Thu, 2016-08-18 at 11:42 +0200, Simone Tiraboschi wrote:
> On Thu, Aug 18, 2016 at 11:25 AM, Carlos Rodrigues <c...@eurotux.com> wrote:
> >
> > On Thu, 2016-08-18 at 11:04 +0200, Simone Tiraboschi wrote:
> > > 
> > > On Thu, Aug 18, 2016 at 10:36 AM, Carlos Rodrigues <c...@eurotux.com> wrote:
> > > > 
> > > > 
> > > > On Thu, 2016-08-18 at 10:27 +0200, Simone Tiraboschi wrote:
> > > > > 
> > > > > 
> > > > > On Thu, Aug 18, 2016 at 10:22 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On Thu, 2016-08-18 at 08:54 +0200, Simone Tiraboschi wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > On Tue, Aug 16, 2016 at 12:53 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On 12 August 2016 at 20:23, Carlos Rodrigues <cmar@eurotux.com> wrote:
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > Hello,
> > > > > > > > > > 
> > > > > > > > > > I have one cluster with two hosts with power management correctly
> > > > > > > > > > configured and one virtual machine with HostedEngine over shared
> > > > > > > > > > storage with FiberChannel.
> > > > > > > > > > 
> > > > > > > > > > When i shutdown the network of host with HostedEngine VM,  it should be
> > > > > > > > > > possible the HostedEngine VM migrate automatically to another host?
> > > > > > > > > > 
> > > > > > > > > migrate on which network?
> > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > What is the expected behaviour on this HA scenario?
> > > > > > > > > 
> > > > > > > > > After a few minutes your vm will be shutdown by the High Availability
> > > > > > > > > agent, as it can't see network, and started on another host.
> > > > > > > > 
> > > > > > > > 
> > > > >

Re: [ovirt-users] HostedEngine with HA

2016-08-18 Thread Carlos Rodrigues
On Thu, 2016-08-18 at 11:04 +0200, Simone Tiraboschi wrote:
> On Thu, Aug 18, 2016 at 10:36 AM, Carlos Rodrigues <c...@eurotux.com>
> wrote:
> > 
> > On Thu, 2016-08-18 at 10:27 +0200, Simone Tiraboschi wrote:
> > > 
> > > On Thu, Aug 18, 2016 at 10:22 AM, Carlos Rodrigues <cmar@eurotux.
> > > com>
> > > wrote:
> > > > 
> > > > 
> > > > On Thu, 2016-08-18 at 08:54 +0200, Simone Tiraboschi wrote:
> > > > > 
> > > > > 
> > > > > On Tue, Aug 16, 2016 at 12:53 PM, Carlos Rodrigues <cmar@euro
> > > > > tux.
> > > > > com>
> > > > > wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > On 12 August 2016 at 20:23, Carlos Rodrigues <cmar@eurotu
> > > > > > > x.co
> > > > > > > m>
> > > > > > > wrote:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > Hello,
> > > > > > > > 
> > > > > > > > I have one cluster with two hosts with power management
> > > > > > > > correctly
> > > > > > > > configured and one virtual machine with HostedEngine
> > > > > > > > over
> > > > > > > > shared
> > > > > > > > storage with FiberChannel.
> > > > > > > > 
> > > > > > > > When i shutdown the network of host with HostedEngine
> > > > > > > > VM,  it
> > > > > > > > should be
> > > > > > > > possible the HostedEngine VM migrate automatically to
> > > > > > > > another
> > > > > > > > host?
> > > > > > > > 
> > > > > > > migrate on which network?
> > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > What is the expected behaviour on this HA scenario?
> > > > > > > 
> > > > > > > After a few minutes your vm will be shutdown by the High
> > > > > > > Availability
> > > > > > > agent, as it can't see network, and started on another
> > > > > > > host.
> > > > > > 
> > > > > > 
> > > > > > I'm testing this scenario and after shutdown network, it
> > > > > > should
> > > > > > be
> > > > > > expected that agent shutdown ha and started on another
> > > > > > host,
> > > > > > but
> > > > > > after
> > > > > > couple minutes nothing happens and on host with network we
> > > > > > getting
> > > > > > the
> > > > > > following messages:
> > > > > > 
> > > > > > Aug 16 11:44:08 ied-blade11.install.eurotux.local ovirt-ha-
> > > > > > agent[2779]:
> > > > > > ovirt-ha-agent
> > > > > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.con
> > > > > > fig
> > > > > > ERROR
> > > > > > Unable to get vm.conf from OVF_STORE, falling back to
> > > > > > initial
> > > > > > vm.conf
> > > > > > 
> > > > > > I think the HA agent its trying to get vm configuration but
> > > > > > some
> > > > > > how it
> > > > > > can't get vm.conf to start VM.
> > > > > 
> > > > > No, this is a different issues.
> > > > > In 3.6 we added a feature to let the engine manage also the
> > > > > engine VM
> > > > > itself; ovirt-ha-agent will pickup the latest engine VM
> > > > > configuration
> > > > > from the OVF_STORE which is managed by the engine.
> > > > > If something goes wrong, ovirt-ha-agent could fallback to the
> > > > > initial
> > > > > (bootstrap time) vm.conf. This will normally happen till you add
> > > > > your first regular storage domain and the engine imports the
> > > > > engine VM.

Re: [ovirt-users] HostedEngine with HA

2016-08-18 Thread Carlos Rodrigues
On Thu, 2016-08-18 at 08:55 +0200, Simone Tiraboschi wrote:
> On Wed, Aug 17, 2016 at 11:06 AM, Carlos Rodrigues <c...@eurotux.com>
> wrote:
> > 
> > Anyone can help me to build HA on HostedEngine VM?
> > 
> > How can i guarantee that if host with HostedEngine VM goes down,
> > the
> > HostedEngine VM moves to another host?
> 
> it's in charge of ovirt-ha-agent running on your hosts.
> 
> Can you please post the output of
>  hosted-engine --vm-status
> ?

Here is the output from both hosts:

[root@ied-blade11 ~]# hosted-engine --vm-status
/usr/lib/python2.7/site-
packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
deprecated, please use vdsm.jsonrpcvdscli
  import vdsm.vdscli


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : ied-blade13.install.eurotux.local
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : a941698e
Host timestamp : 153049
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=153049 (Thu Aug 18 09:22:02 2016)
host-id=1
score=3400
maintenance=False
state=EngineUp
stopped=False


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : ied-blade11.install.eurotux.local
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 8f79e0e7
Host timestamp : 143139
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=143139 (Thu Aug 18 09:22:12 2016)
host-id=2
score=3400
maintenance=False
state=EngineDown
stopped=False

[root@ied-blade13 ~]# hosted-engine --vm-status 
/usr/lib/python2.7/site-
packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15:
DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is
deprecated, please use vdsm.jsonrpcvdscli
  import vdsm.vdscli


--== Host 1 status ==--

Status up-to-date  : True
Hostname   : ied-blade13.install.eurotux.local
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : cdd022a2
Host timestamp : 153085
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=153085 (Thu Aug 18 09:22:38 2016)
host-id=1
score=3400
maintenance=False
state=EngineUp
stopped=False


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : ied-blade11.install.eurotux.local
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : fbbad961
Host timestamp : 143175
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=143175 (Thu Aug 18 09:22:48 2016)
host-id=2
    score=3400
maintenance=False
state=EngineDown
stopped=False
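For scripting this kind of health check, the plain-text output above can be parsed into a per-host map. A rough sketch, where SAMPLE is an abbreviated copy of the output shown above (recent hosted-engine versions may also offer machine-readable output; check `hosted-engine --help` on your release):

```python
# Sketch: parse `hosted-engine --vm-status` plain-text output into a
# dict keyed by host section.  SAMPLE is abbreviated from the real
# output above.

SAMPLE = """\
--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : ied-blade13.install.eurotux.local
Score                              : 3400

--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : ied-blade11.install.eurotux.local
Score                              : 3400
"""

def parse_vm_status(text):
    hosts, current = {}, None
    for line in text.splitlines():
        if line.startswith("--== Host"):
            # "--== Host 1 status ==--" -> section key "Host 1"
            parts = line.split()
            current = parts[1] + " " + parts[2]
            hosts[current] = {}
        elif ":" in line and current:
            key, _, value = line.partition(":")
            hosts[current][key.strip()] = value.strip()
    return hosts

status = parse_vm_status(SAMPLE)
assert all(int(h["Score"]) == 3400 for h in status.values())
print(sorted(status))  # → ['Host 1', 'Host 2']
```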



> 
> > 
> > Regards,
> > Carlos Rodrigues
> > 
> > On Tue, 2016-08-16 at 11:53 +0100, Carlos Rodrigues wrote:
> > > 
> > > On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:
> > > > 
> > > > 
> > > > 
> > > > 
> > > > On 12 August 2016 at 20:23, Carlos Rodrigues <c...@eurotux.com>
> > > > wrote:
> > > > > 
> > > > > 
> > > > > Hello,
> > > > > 

Re: [ovirt-users] HostedEngine with HA

2016-08-18 Thread Carlos Rodrigues
On Thu, 2016-08-18 at 08:54 +0200, Simone Tiraboschi wrote:
> On Tue, Aug 16, 2016 at 12:53 PM, Carlos Rodrigues <c...@eurotux.com>
> wrote:
> > 
> > On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:
> > > 
> > > 
> > > 
> > > On 12 August 2016 at 20:23, Carlos Rodrigues <c...@eurotux.com>
> > > wrote:
> > > > 
> > > > Hello,
> > > > 
> > > > I have one cluster with two hosts with power management
> > > > correctly
> > > > configured and one virtual machine with HostedEngine over
> > > > shared
> > > > storage with FiberChannel.
> > > > 
> > > > When i shutdown the network of host with HostedEngine VM,  it
> > > > should be
> > > > possible the HostedEngine VM migrate automatically to another
> > > > host?
> > > > 
> > > migrate on which network?
> > > 
> > > > 
> > > > What is the expected behaviour on this HA scenario?
> > > 
> > > After a few minutes your vm will be shutdown by the High
> > > Availability
> > > agent, as it can't see network, and started on another host.
> > 
> > 
> > I'm testing this scenario and after shutdown network, it should be
> > expected that agent shutdown ha and started on another host, but
> > after
> > couple minutes nothing happens and on host with network we getting
> > the
> > following messages:
> > 
> > Aug 16 11:44:08 ied-blade11.install.eurotux.local ovirt-ha-
> > agent[2779]:
> > ovirt-ha-agent
> > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config
> > ERROR
> > Unable to get vm.conf from OVF_STORE, falling back to initial
> > vm.conf
> > 
> > I think the HA agent its trying to get vm configuration but some
> > how it
> > can't get vm.conf to start VM.
> 
> No, this is a different issues.
> In 3.6 we added a feature to let the engine manage also the engine VM
> itself; ovirt-ha-agent will pickup the latest engine VM configuration
> from the OVF_STORE which is managed by the engine.
> If something goes wrong, ovirt-ha-agent could fallback to the initial
> (bootstrap time) vm.conf. This will normally happen till you add your
> first regular storage domain and the engine imports the engine VM.

But I already have my first regular storage domain and the hosted-engine
storage domain, and the engine VM is already imported.

I'm using version 4.0.
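For context, the OVF_STORE volumes hold per-VM OVF documents (packed in a tar archive, as far as I can tell). As a hedged illustration of reading disk IDs out of such an OVF envelope — element and attribute names below follow the DMTF OVF convention, and oVirt's actual layout may differ by version:

```python
# Sketch: extract disk IDs from an OVF envelope using the standard DMTF
# OVF namespace.  SAMPLE_OVF is a made-up minimal document; real
# OVF_STORE contents are more elaborate.
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"
SAMPLE_OVF = f"""\
<ovf:Envelope xmlns:ovf="{OVF_NS}">
  <Section xsi:type="ovf:DiskSection_Type"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <Disk ovf:diskId="11111111-aaaa-bbbb-cccc-222222222222"
          ovf:fileRef="33333333-dddd-eeee-ffff-444444444444"/>
  </Section>
</ovf:Envelope>
"""

def disk_ids(ovf_xml):
    """Return the ovf:diskId attribute of every Disk element."""
    root = ET.fromstring(ovf_xml)
    return [d.get(f"{{{OVF_NS}}}diskId") for d in root.iter("Disk")]

print(disk_ids(SAMPLE_OVF))  # → ['11111111-aaaa-bbbb-cccc-222222222222']
```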

> 
> > 
> > Regards,
> > Carlos Rodrigues
> > 
> > 
> > > 
> > > > 
> > > > 
> > > > Regards,
> > > > 
> > > > --
> > > > Carlos Rodrigues
> > > > 
> > > > Engenheiro de Software Sénior
> > > > 
> > > > Eurotux Informática, S.A. | www.eurotux.com
> > > > (t) +351 253 680 300 (m) +351 911 926 110
> > > > 
> > > > ___
> > > > Users mailing list
> > > > Users@ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/users
> > > > 
> > > 
> > --
> > Carlos Rodrigues
> > 
> > Engenheiro de Software Sénior
> > 
> > Eurotux Informática, S.A. | www.eurotux.com
> > (t) +351 253 680 300 (m) +351 911 926 110
> > 
-- 
Carlos Rodrigues 

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HostedEngine with HA

2016-08-17 Thread Carlos Rodrigues
Can anyone help me set up HA for the HostedEngine VM?

How can I guarantee that if the host running the HostedEngine VM goes
down, the HostedEngine VM moves to another host?

Regards,
Carlos Rodrigues

On Tue, 2016-08-16 at 11:53 +0100, Carlos Rodrigues wrote:
> On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:
> > 
> > 
> > 
> > On 12 August 2016 at 20:23, Carlos Rodrigues <c...@eurotux.com>
> > wrote:
> > > 
> > > Hello,
> > > 
> > > I have one cluster with two hosts with power management correctly
> > > configured and one virtual machine with HostedEngine over shared
> > > storage with FiberChannel.
> > > 
> > > When i shutdown the network of host with HostedEngine VM,  it
> > > should be
> > > possible the HostedEngine VM migrate automatically to another
> > > host?
> > > 
> > migrate on which network? 
> >  
> > > 
> > > What is the expected behaviour on this HA scenario?
> > 
> > After a few minutes your vm will be shutdown by the High
> > Availability
> > agent, as it can't see network, and started on another host. 
> 
> 
> I'm testing this scenario: after shutting down the network, the agent is
> expected to shut down the engine VM and start it on another host, but
> after a couple of minutes nothing happens, and on the host with network
> we get the following messages:
> 
> Aug 16 11:44:08 ied-blade11.install.eurotux.local ovirt-ha-
> agent[2779]: 
> ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR
> Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
> 
> I think the HA agent is trying to get the VM configuration but somehow
> can't fetch vm.conf to start the VM.
> 
> Regards,
> Carlos Rodrigues
> 
> 
> > 
> > > 
> > > 
> > > Regards,
> > > 
> > > --
> > > Carlos Rodrigues 
> > > 
> > > Engenheiro de Software Sénior
> > > 
> > > Eurotux Informática, S.A. | www.eurotux.com
> > > (t) +351 253 680 300 (m) +351 911 926 110
> > > 
> > > 
> > 
-- 
Carlos Rodrigues 

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] HostedEngine with HA

2016-08-12 Thread Carlos Rodrigues
Hello,

I have one cluster with two hosts with power management correctly
configured, and one virtual machine with the HostedEngine over shared
storage with Fibre Channel.

When I shut down the network of the host running the HostedEngine VM,
should the HostedEngine VM migrate automatically to another host?

What is the expected behaviour in this HA scenario?

Regards,

-- 
Carlos Rodrigues 

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users