Re: [ovirt-users] Hosted engine setup question

2017-10-03 Thread Demeter Tibor
Dear Charles, 
Thank you for your reply. 

I don't want to create another storage domain; I just want to do a
detach-attach procedure with the existing one.

Also, I have another question. Is it possible in 4.1 to delete snapshots that
were created in 3.5? How safe is that?
I have some VM snapshots in the old system, but I don't want another outage
while deleting them. 3.5 does not support live snapshot deletion, but 4.1
does.

Thanks, 

Tibor 

- On Oct 2, 2017, at 19:55, Charles Kozler wrote:

> I did a 3.6 to 4.1 like this. I moved all of my VMs to a new storage domain
> (the other was hyperconverged gluster) and then took a full outage, shut down
> all of my VMs, detached from 3.6, and imported on 4.1. I had no issues other
> than expected MAC address changes, but I think you can manually override this
> in the engine somewhere.
> If you are worried, do it with one VM. Create a new storage domain that both
> clusters can "see", move one VM to the domain on 3.6, detach, and import to
> 4.1. Bring the VM up.

> If you have Linux VMs older than systemd that use sysvinit, you will hit
> issues where the MAC address change makes udev move the interface to eth#,
> where # is the next available NIC number in the guest.

> On Mon, Oct 2, 2017 at 12:54 PM, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
> |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,
>> Can anyone answer my questions?

>> Thanks in advance,
>> R,

>> Tibor

>> - On Sept 19, 2017, at 8:31, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> - I have a production oVirt cluster based on the 3.5 series, using shared
>>> NFS storage. Is it possible to migrate VMs from 3.5 to 4.1 by detaching the
>>> shared storage from the old cluster and attaching it to the new cluster?
>>> - If yes, what will happen to the VM properties, for example MAC addresses,
>>> limits, etc.? Will those be migrated or not?

>>> Thanks in advance,
>>> Regards,

>>> Tibor


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine setup question

2017-10-02 Thread Charles Kozler
I did a 3.6 to 4.1 like this. I moved all of my VMs to a new storage domain
(the other was hyperconverged gluster) and then took a full outage, shut
down all of my VMs, detached from 3.6, and imported on 4.1. I had no issues
other than expected MAC address changes, but I think you can manually
override this in the engine somewhere.

If you are worried, do it with one VM. Create a new storage domain that
both clusters can "see", move one VM to the domain on 3.6, detach, and
import to 4.1. Bring the VM up.

If you have Linux VMs older than systemd that use sysvinit, you will hit
issues where the MAC address change makes udev move the interface to eth#,
where # is the next available NIC number in the guest.
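For those sysvinit-era guests, clearing udev's cached rule and the MAC pin in the interface config usually restores eth0 naming after the import. The paths below are the usual RHEL/CentOS 6 defaults and are assumptions to verify on your guest; the demonstration runs on a scratch copy of an ifcfg file, not the real one:

```shell
# Sketch for a RHEL/CentOS 6-style guest after import. On the real guest:
#   1) rm -f /etc/udev/rules.d/70-persistent-net.rules   # regenerated on reboot
#   2) delete the stale HWADDR= pin from ifcfg-eth0
# Demonstrated here on a scratch copy:
ifcfg=$(mktemp)
printf 'DEVICE=eth0\nHWADDR=00:16:3e:aa:bb:cc\nBOOTPROTO=dhcp\nONBOOT=yes\n' > "$ifcfg"
sed -i '/^HWADDR=/d' "$ifcfg"   # drop the old MAC so the new NIC can match eth0
cat "$ifcfg"                    # HWADDR line gone; other settings untouched
```

After both changes, a reboot lets udev regenerate the rule for the new MAC and the guest should come back up on eth0.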

On Mon, Oct 2, 2017 at 12:54 PM, Demeter Tibor  wrote:

> Hi,
> Can anyone answer my questions?
>
> Thanks in advance,
> R,
>
> Tibor
>
> - On Sept 19, 2017, at 8:31, Demeter Tibor wrote:
>
>
> - I have a production oVirt cluster based on the 3.5 series, using shared
> NFS storage. Is it possible to migrate VMs from 3.5 to 4.1 by detaching the
> shared storage from the old cluster and attaching it to the new cluster?
> - If yes, what will happen to the VM properties, for example MAC
> addresses, limits, etc.? Will those be migrated or not?
>
> Thanks in advance,
> Regards,
>
>
> Tibor
>
>
>
>
>


Re: [ovirt-users] Hosted engine setup question

2017-10-02 Thread Demeter Tibor
Hi, 
Can anyone answer my questions? 

Thanks in advance, 
R, 

Tibor 

- On Sept 19, 2017, at 8:31, Demeter Tibor wrote:

> - I have a production oVirt cluster based on the 3.5 series, using shared
> NFS storage. Is it possible to migrate VMs from 3.5 to 4.1 by detaching the
> shared storage from the old cluster and attaching it to the new cluster?
> - If yes, what will happen to the VM properties, for example MAC addresses,
> limits, etc.? Will those be migrated or not?

> Thanks in advance,
> Regards,

> Tibor



Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Kasturi Narra
Can you please check if you have any additional disk in the system? If you
have an additional disk other than the one used for the root partition, you
can specify that disk in the cockpit UI (I hope you are using the cockpit
UI to do the installation), with no partitions on it. That will take care
of the installation and make your life easier, as cockpit + gdeploy will
configure the gluster bricks and volumes for you.

On Mon, Aug 28, 2017 at 2:55 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Nara,
>
>
>
> All the partitions, pv and vg are created automatically during the initial
> setup time.
>
>
>
> [root@ovirtnode1 ~]# vgs
>
>   VG  #PV #LV #SN Attr   VSize   VFree
>
>   onn   1  12   0 wz--n- 555.73g 14.93g
>
>
>
> All space is mounted at the locations below; the bulk of the free space is
> mounted at /.
>
>
>
> Filesystem  Size  Used Avail
> Use% Mounted on
>
> /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1  513G  4.2G
> 483G   1% /
>
> devtmpfs 44G 0
> 44G   0% /dev
>
> tmpfs44G  4.0K
> 44G   1% /dev/shm
>
> tmpfs44G   33M
> 44G   1% /run
>
> tmpfs44G 0
> 44G   0% /sys/fs/cgroup
>
> /dev/sda2   976M  135M  774M
> 15% /boot
>
> /dev/mapper/onn-home976M  2.6M
> 907M   1% /home
>
> /dev/mapper/onn-tmp 2.0G  6.3M
> 1.8G   1% /tmp
>
> /dev/sda1   200M  9.5M
> 191M   5% /boot/efi
>
> /dev/mapper/onn-var  15G  1.8G   13G
> 13% /var
>
> /dev/mapper/onn-var--log7.8G  224M
> 7.2G   3% /var/log
>
> /dev/mapper/onn-var--log--audit 2.0G   44M
> 1.8G   3% /var/log/audit
>
> tmpfs   8.7G 0
> 8.7G   0% /run/user/0
>
>
>
> If we need space, we would reduce the VG size and create a new one. (Is
> this correct?)
>
>
>
>
>
> If the above step is complicated, can you please suggest how to set up the
> GlusterFS data store in oVirt?
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge| IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303
>
> Email: an...@it.thumbay.com | Website: www.thumbay.com
>
>
>
>
> Disclaimer: This message contains confidential information and is intended
> only for the individual named. If you are not the named addressee, you are
> hereby notified that disclosing, copying, distributing or taking any action
> in reliance on the contents of this e-mail is strictly prohibited. Please
> notify the sender immediately by e-mail if you have received this e-mail by
> mistake, and delete this material. Thumbay Group accepts no liability for
> errors or omissions in the contents of this message, which arise as a
> result of e-mail transmission.
>
>
>
> *From:* Kasturi Narra [mailto:kna...@redhat.com]
> *Sent:* Monday, August 28, 2017 1:14 PM
>
> *To:* Anzar Esmail Sainudeen
> *Cc:* users
> *Subject:* Re: [ovirt-users] hosted engine setup with Gluster fail
>
>
>
> yes, you can create. I do not see any problems there.
>
>
>
> May I know how these VGs were created? If they were not created using
> gdeploy, then you will have to create the bricks manually from the new VG
> you have created.
>
>
>
> On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <
> an...@it.thumbay.com> wrote:
>
> Dear Nara,
>
>
>
> Thank you for your great reply.
>
>
>
> 1) can you please check that the disks that would be used for brick
> creation do not have labels or any partitions on them ?
>
>
>
> Yes, agreed, there are no labels or partitions on them. My doubt is whether
> it is possible to create the required brick partitions from the available
> 406.7G Linux LVM. The physical volume and volume group information follows.
>
>
>
>
>
> [root@ovirtnode1 ~]# pvdisplay
>
>   --- Physical volume ---
>
>   PV Name   /dev/sda3
>
>   VG Name   onn
>
>   PV Size   555.73 GiB / not usable 2.00 MiB
>
>   Allocatable   yes
>
>   PE Size   4.00 MiB
>
>   Total PE  142267

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Anzar Esmail Sainudeen
Dear Nara,

 

All the partitions, pv and vg are created automatically during the initial 
setup time.

 

[root@ovirtnode1 ~]# vgs

  VG  #PV #LV #SN Attr   VSize   VFree 

  onn   1  12   0 wz--n- 555.73g 14.93g

 

All space is mounted at the locations below; the bulk of the free space is mounted at /.

 

Filesystem  Size  Used Avail Use% 
Mounted on

/dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1  513G  4.2G  483G   1% /

devtmpfs 44G 0   44G   0% 
/dev

tmpfs44G  4.0K   44G   1% 
/dev/shm

tmpfs44G   33M   44G   1% 
/run

tmpfs44G 0   44G   0% 
/sys/fs/cgroup

/dev/sda2   976M  135M  774M  15% 
/boot

/dev/mapper/onn-home976M  2.6M  907M   1% 
/home

/dev/mapper/onn-tmp 2.0G  6.3M  1.8G   1% 
/tmp

/dev/sda1   200M  9.5M  191M   5% 
/boot/efi

/dev/mapper/onn-var  15G  1.8G   13G  13% 
/var

/dev/mapper/onn-var--log7.8G  224M  7.2G   3% 
/var/log

/dev/mapper/onn-var--log--audit 2.0G   44M  1.8G   3% 
/var/log/audit

tmpfs   8.7G 0  8.7G   0% 
/run/user/0

 

If we need space, we would reduce the VG size and create a new one. (Is this
correct?)

 

 

If the above step is complicated, can you please suggest how to set up the
GlusterFS data store in oVirt?

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com <mailto:an...@it.thumbay.com>  | Website: 
www.thumbay.com <http://www.thumbay.com/> 



 


 

From: Kasturi Narra [mailto:kna...@redhat.com] 
Sent: Monday, August 28, 2017 1:14 PM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

yes, you can create. I do not see any problems there. 

 

May I know how these VGs were created? If they were not created using gdeploy,
then you will have to create the bricks manually from the new VG you have created.

 

On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <an...@it.thumbay.com 
<mailto:an...@it.thumbay.com> > wrote:

Dear Nara,

 

Thank you for your great reply.

 

1) can you please check that the disks that would be used for brick creation
do not have labels or any partitions on them ?

 

Yes, agreed, there are no labels or partitions on them. My doubt is whether it
is possible to create the required brick partitions from the available 406.7G
Linux LVM. The physical volume and volume group information follows.

 

 

[root@ovirtnode1 ~]# pvdisplay 

  --- Physical volume ---

  PV Name   /dev/sda3

  VG Name   onn

  PV Size   555.73 GiB / not usable 2.00 MiB

  Allocatable   yes 

  PE Size   4.00 MiB

  Total PE  142267

  Free PE   3823

  Allocated PE  138444

  PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe

   

[root@ovirtnode1 ~]# vgdisplay 

  --- Volume group ---

  VG Name   onn

  System ID 

  Formatlvm2

  Metadata Areas1

  Metadata Sequence No  48

  VG Access read/write

  VG Status resizable

  MAX LV0

  Cur LV12

  Open LV   7

  Max PV0

  Cur PV1

  Act PV1

  VG Size   555.73 GiB

  PE Size   4.00 MiB

  Total PE  142267

  Alloc PE / Size   138444 / 540.80 GiB

  Free  PE / Size   3823 / 14.93 GiB

  VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy

   

 

I am thinking of reducing the VG size and creating a new VG for gluster. Is
that a good idea?

   

 

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com <mailto:an...

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Kasturi Narra
yes, you can create. I do not see any problems there.

May I know how these VGs were created? If they were not created using
gdeploy, then you will have to create the bricks manually from the new VG you
have created.
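Creating a brick manually from the VG's free extents can be sketched roughly as below. The VG name onn comes from this thread; the 10G size, LV name, and mount point are assumptions to adjust, and XFS with 512-byte inodes follows common Gluster guidance. Illustrative only; review against `vgs` output before running on the host:

```shell
# Illustrative manual brick creation from free extents in the existing VG.
# Names and sizes are assumptions; each command simply fails where the VG
# or LVM tools are absent, so review before running on a real host.
VG=onn; LV=gluster_brick1; MNT=/gluster/brick1
lvcreate -L 10G -n "$LV" "$VG"            # carve the brick LV from free PEs
mkfs.xfs -f -i size=512 "/dev/$VG/$LV"    # XFS with 512-byte inodes
mkdir -p "$MNT"
mount "/dev/$VG/$LV" "$MNT"
# Line to append to /etc/fstab so the brick mount persists across reboots:
echo "/dev/$VG/$LV $MNT xfs defaults 0 0"
```

This avoids shrinking the VG at all: the brick LV simply consumes the free physical extents, which is the lower-risk path when the OS volumes already live in the same VG.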

On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Nara,
>
>
>
> Thank you for your great reply.
>
>
>
> 1) can you please check that the disks that would be used for brick
> creation do not have labels or any partitions on them ?
>
>
>
> Yes, agreed, there are no labels or partitions on them. My doubt is whether
> it is possible to create the required brick partitions from the available
> 406.7G Linux LVM. The physical volume and volume group information follows.
>
>
>
>
>
> [root@ovirtnode1 ~]# pvdisplay
>
>   --- Physical volume ---
>
>   PV Name   /dev/sda3
>
>   VG Name   onn
>
>   PV Size   555.73 GiB / not usable 2.00 MiB
>
>   Allocatable   yes
>
>   PE Size   4.00 MiB
>
>   Total PE  142267
>
>   Free PE   3823
>
>   Allocated PE  138444
>
>   PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe
>
>
>
> [root@ovirtnode1 ~]# vgdisplay
>
>   --- Volume group ---
>
>   VG Name   onn
>
>   System ID
>
>   Formatlvm2
>
>   Metadata Areas1
>
>   Metadata Sequence No  48
>
>   VG Access read/write
>
>   VG Status resizable
>
>   MAX LV0
>
>   Cur LV12
>
>   Open LV   7
>
>   Max PV0
>
>   Cur PV1
>
>   Act PV1
>
>   VG Size   555.73 GiB
>
>   PE Size   4.00 MiB
>
>   Total PE  142267
>
>   Alloc PE / Size   138444 / 540.80 GiB
>
>   Free  PE / Size   3823 / 14.93 GiB
>
>   VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy
>
>
>
>
>
> I am thinking of reducing the VG size and creating a new VG for gluster.
> Is that a good idea?
>
>
>
>
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge| IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303
>
> Email: an...@it.thumbay.com | Website: www.thumbay.com
>
>
>
>
>
>
>
> *From:* Kasturi Narra [mailto:kna...@redhat.com]
> *Sent:* Monday, August 28, 2017 9:48 AM
> *To:* Anzar Esmail Sainudeen
> *Cc:* users
> *Subject:* Re: [ovirt-users] hosted engine setup with Gluster fail
>
>
>
> Hi,
>
>
>
>If I understand right, the gdeploy script is failing at [1]. There could
> be two possible reasons why it would fail.
>
>
>
> 1) can you please check that the disks that would be used for brick creation
> do not have labels or any partitions on them ?
>
>
>
> 2) can you please check whether the path [1] exists? If it does not, can you
> please change the path of the script in the gdeploy.conf file
> to /usr/share/gdeploy/scripts/grafton-sanity-check.sh
>
>
>
> [1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
>
>
>
> Thanks
>
> kasturi
>
>
>
> On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <
> an...@it.thumbay.com> wrote:
>
> Dear Team Ovirt,
>
>
>
> I am trying to deploy the hosted engine setup with Gluster. The hosted
> engine setup failed. The total number of hosts is 3.
>
>
>
>
>
> PLAY [gluster_servers] **
> ***
>
>
>
> TASK [Run a shell script] **
> 
>
> fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> fatal: [ovirtnode3.thumbaytechlabs.i

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Anzar Esmail Sainudeen
Dear Nara,

 

Thank you for your great reply.

 

1) can you please check that the disks that would be used for brick creation
do not have labels or any partitions on them ?

 

Yes, agreed, there are no labels or partitions on them. My doubt is whether it
is possible to create the required brick partitions from the available 406.7G
Linux LVM. The physical volume and volume group information follows.

 

 

[root@ovirtnode1 ~]# pvdisplay 

  --- Physical volume ---

  PV Name   /dev/sda3

  VG Name   onn

  PV Size   555.73 GiB / not usable 2.00 MiB

  Allocatable   yes 

  PE Size   4.00 MiB

  Total PE  142267

  Free PE   3823

  Allocated PE  138444

  PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe

   

[root@ovirtnode1 ~]# vgdisplay 

  --- Volume group ---

  VG Name   onn

  System ID 

  Formatlvm2

  Metadata Areas1

  Metadata Sequence No  48

  VG Access read/write

  VG Status resizable

  MAX LV0

  Cur LV12

  Open LV   7

  Max PV0

  Cur PV1

  Act PV1

  VG Size   555.73 GiB

  PE Size   4.00 MiB

  Total PE  142267

  Alloc PE / Size   138444 / 540.80 GiB

  Free  PE / Size   3823 / 14.93 GiB

  VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy

   

 

I am thinking of reducing the VG size and creating a new VG for gluster. Is
that a good idea?

   

 

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com <mailto:an...@it.thumbay.com>  | Website: 
www.thumbay.com <http://www.thumbay.com/> 



 


 

From: Kasturi Narra [mailto:kna...@redhat.com] 
Sent: Monday, August 28, 2017 9:48 AM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

Hi,

 

   If I understand right, the gdeploy script is failing at [1]. There could be
two possible reasons why it would fail.

 

1) can you please check that the disks that would be used for brick creation
do not have labels or any partitions on them ?

 

2) can you please check whether the path [1] exists? If it does not, can you
please change the path of the script in the gdeploy.conf file to
/usr/share/gdeploy/scripts/grafton-sanity-check.sh

 

[1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
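If the path is the problem, the fix is a one-line change in the generated gdeploy.conf. A sketch of the relevant section, following the layout cockpit-generated configs use; the -d disk argument and the -h host list are placeholders for your own values:

```
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ovirtnode2.thumbaytechlabs.int,ovirtnode3.thumbaytechlabs.int,ovirtnode4.thumbaytechlabs.int
```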

 

Thanks

kasturi

 

On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <an...@it.thumbay.com 
<mailto:an...@it.thumbay.com> > wrote:

Dear Team Ovirt,

 

I am trying to deploy the hosted engine setup with Gluster. The hosted engine
setup failed. The total number of hosts is 3.

 

 

PLAY [gluster_servers] *

 

TASK [Run a shell script] **

fatal: [ovirtnode4.thumbaytechlabs.int <http://ovirtnode4.thumbaytechlabs.int> 
]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' 
failed. The error was: error while evaluating conditional (result.rc != 0): 
'dict object' has no attribute 'rc'"}

fatal: [ovirtnode3.thumbaytechlabs.int <http://ovirtnode3.thumbaytechlabs.int> 
]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' 
failed. The error was: error while evaluating conditional (result.rc != 0): 
'dict object' has no attribute 'rc'"}

fatal: [ovirtnode2.thumbaytechlabs.int <http://ovirtnode2.thumbaytechlabs.int> 
]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' 
failed. The error was: error while evaluating conditional (result.rc != 0): 
'dict object' has no attribute 'rc'"}

to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry

 

PLAY RECAP *

ovirtnode2.thumbaytechlabs.int <http://ovirtnode2.thumbaytechlabs.int>  : ok=0  
  changed=0unreachable=0failed=1   

ovirtnode3.thumbaytechlabs.int <http://ovirtnode3.thumbaytechlabs.int>  : ok=0  
  changed=0  

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-27 Thread Kasturi Narra
Hi,

   If I understand right, the gdeploy script is failing at [1]. There could be
two possible reasons why it would fail.

1) can you please check that the disks that would be used for brick creation
do not have labels or any partitions on them ?

2) can you please check whether the path [1] exists? If it does not, can you
please change the path of the script in the gdeploy.conf file
to /usr/share/gdeploy/scripts/grafton-sanity-check.sh

[1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
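The first check can be done quickly from the shell; /dev/sdb as the intended brick disk is an assumption to adjust:

```shell
# Pre-flight for an intended brick disk; /dev/sdb is an assumed device name.
DISK=/dev/sdb
if [ -b "$DISK" ]; then
  lsblk "$DISK"    # any child partitions listed here must be removed first
  wipefs "$DISK"   # prints stale filesystem/RAID signatures, if any
else
  echo "no block device at $DISK"
fi
```

If wipefs prints any signatures, `wipefs -a` on the disk (destructive) clears them before re-running the deployment.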

Thanks
kasturi

On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Team Ovirt,
>
>
>
> I am trying to deploy the hosted engine setup with Gluster. The hosted
> engine setup failed. The total number of hosts is 3.
>
>
>
>
>
> PLAY [gluster_servers] **
> ***
>
>
>
> TASK [Run a shell script] **
> 
>
> fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
>
> to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry
>
>
>
> PLAY RECAP 
> *
>
> ovirtnode2.thumbaytechlabs.int : ok=0changed=0unreachable=0
> failed=1
>
> ovirtnode3.thumbaytechlabs.int : ok=0changed=0unreachable=0
> failed=1
>
> ovirtnode4.thumbaytechlabs.int : ok=0changed=0unreachable=0
> failed=1
>
>
>
>
>
> Please note my findings.
>
>
>
> 1. I still have doubts about the brick setup area, because during oVirt
> node setup partitions are created automatically and all of the space is
> mounted. Please find the #fdisk -l output below:
>
> 2.
>
> [root@ovirtnode4 ~]# fdisk -l
>
>
>
> WARNING: fdisk GPT support is currently new, and therefore in an
> experimental phase. Use at your own discretion.
>
>
>
> Disk /dev/sda: 438.0 GB, 437998583808 bytes, 855465984 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
> Disk label type: gpt
>
>
>
>
>
> #  Start    End        Size    Type             Name
>
> 1  2048     411647     200M    EFI System       EFI System Partition
>
> 2  411648   2508799    1G      Microsoft basic
>
> 3  2508800  855463935  406.7G  Linux LVM
>
>
>
> Disk /dev/mapper/onn-swap: 25.4 GB, 25367150592 bytes, 49545216 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00_tmeta: 1073 MB, 1073741824 bytes, 2097152
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00_tdata: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00-tpool: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1: 378.1 GB,
> 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-pool00: 394.2 GB, 394159718400 bytes, 769843200
> sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-var: 16.1 GB, 16106127360 bytes, 31457280 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> Disk /dev/mapper/onn-root: 378.1 GB, 378053591040 bytes, 738385920 sectors
>
> Units = sectors of 1 * 512 = 512 bytes
>
> Sector size (logical/physical): 512 bytes / 512 bytes
>
> I/O size (minimum/optimal): 131072 bytes / 262144 bytes
>
>
>
>
>
> 

Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-06-18 Thread Mike DePaulo
On Thu, May 18, 2017 at 10:03 AM, Sachidananda URS  wrote:

> Hi,
>
> On Thu, May 18, 2017 at 7:08 PM, Sahina Bose  wrote:
>
>>
>>
>> On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo 
>> wrote:
>>
>>> Well, I tried both of the following:
>>> 1. Having only a boot partition and a PV for the OS that does not take
>>> up the entire disk, and then specifying "sda" in Hosted Engine Setup.
>>> 2. Having not only a boot partition and a PV for the OS, but also an
>>> empty (and not formatted) /dev/sda3 PV that I created with fdisk.
>>> Then, specifying "sda3" in Hosted Engine Setup.
>>>
>>> Both attempts resulted in errors like this:
>>> failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
>>> true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
>>> Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
>>>
>>
>> Can you provide gdeploy logs? I think, it's at ~/.gdeploy/gdeploy.log
>>
>>
>>>
>>> It seems like having gluster bricks on the same disk as the OS doesn't
>>> work at all.
>>>
>>>
>
> Hi, /dev/sda3 should work, the error here is possibly due to filesystem
> signature.
>
> Can you please set wipefs=yes? For example
>
> [pv]
> action=create
> wipefs=yes
> devices=/dev/sda3
>
> -sac
>
>
Sorry for the long delay.

This worked. Thank you very much.

-Mike


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-18 Thread Sachidananda URS
Hi,

On Thu, May 18, 2017 at 7:08 PM, Sahina Bose  wrote:

>
>
> On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo 
> wrote:
>
>> Well, I tried both of the following:
>> 1. Having only a boot partition and a PV for the OS that does not take
>> up the entire disk, and then specifying "sda" in Hosted Engine Setup.
>> 2. Having not only a boot partition and a PV for the OS, but also an
>> empty (and not formatted) /dev/sda3 PV that I created with fdisk.
>> Then, specifying "sda3" in Hosted Engine Setup.
>>
>> Both attempts resulted in errors like this:
>> failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
>> true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
>> Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
>>
>
> Can you provide gdeploy logs? I think, it's at ~/.gdeploy/gdeploy.log
>
>
>>
>> It seems like having gluster bricks on the same disk as the OS doesn't
>> work at all.
>>
>>

Hi, /dev/sda3 should work, the error here is possibly due to filesystem
signature.

Can you please set wipefs=yes? For example

[pv]
action=create
wipefs=yes
devices=/dev/sda3

-sac


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-18 Thread Sahina Bose
On Thu, May 18, 2017 at 3:20 PM, Mike DePaulo  wrote:

> Well, I tried both of the following:
> 1. Having only a boot partition and a PV for the OS that does not take
> up the entire disk, and then specifying "sda" in Hosted Engine Setup.
> 2. Having not only a boot partition and a PV for the OS, but also an
> empty (and not formatted) /dev/sda3 PV that I created with fdisk.
> Then, specifying "sda3" in Hosted Engine Setup.
>
> Both attempts resulted in errors like this:
> failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
> true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
> Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
>

Can you provide gdeploy logs? I think, it's at ~/.gdeploy/gdeploy.log


>
> It seems like having gluster bricks on the same disk as the OS doesn't
> work at all.
>
> I am going to buy separate OS SSDs.
>
> -Mike
>
> On Tue, May 9, 2017 at 6:22 AM, Mike DePaulo  wrote:
> > On Mon, May 8, 2017 at 9:00 AM, knarra  wrote:
> >> On 05/07/2017 04:48 PM, Mike DePaulo wrote:
> >>>
> >>> Hi. I am trying to follow this guide. Is it possible to use part of my
> >>> OS disk /dev/sda for the bricks?
> >>>
> >>> https://www.ovirt.org/blog/2017/04/up-and-running-with-
> ovirt-4-1-and-gluster-storage/
> >>>
> >>> I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
> >>> requirements. I am guessing I have to create an LV for the OS that
> >>> does not take up the entire disk during install, manually create a pv
> >>> like /dev/sda3 afterwards, and then run Hosted Engine Setup and
> >>> specify /sda3 rather than sdb?
> >>>
> >>> Thanks,
> >>> -Mike
> >>
> >>
> >> Hi Mike,
> >>
> >> If you create gluster bricks on the same disk as the OS it works, but
> >> we do not recommend setting up gluster bricks on the same disk as the
> >> OS. When a user tries to create a gluster volume by specifying bricks
> >> from the root partition, it displays an error message: "Bricks in root
> >> partition not recommended; use force at the end to create volume".
> >>
> >> Thanks
> >>
> >> kasturi
> >>
> >
> > Thank you very much. Is my process for doing this (listed in my
> > original email) correct though?
> >
> > -Mike
>


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-18 Thread Mike DePaulo
Well, I tried both of the following:
1. Having only a boot partition and a PV for the OS that does not take
up the entire disk, and then specifying "sda" in Hosted Engine Setup.
2. Having not only a boot partition and a PV for the OS, but also an
empty (and not formatted) /dev/sda3 PV that I created with fdisk.
Then, specifying "sda3" in Hosted Engine Setup.

Both attempts resulted in errors like this:
failed: [centerpoint.ad.depaulo.org] (item=/dev/sda3) => {"failed":
true, "failed_when_result": true, "item": "/dev/sda3", "msg": "
Device /dev/sda3 not found (or ignored by filtering).\n", "rc": 5}
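The "not found (or ignored by filtering)" text is LVM's own wording: it usually means either that the kernel has not re-read the new partition table yet, or that an LVM device filter excludes the device (vdsm/hosted-engine setups commonly configure restrictive LVM filters). A hedged troubleshooting sketch, assuming /dev/sda3 exists as reported by fdisk:

```shell
# Make sure the kernel sees the new partition at all:
partprobe /dev/sda
lsblk /dev/sda

# Dry-run PV creation; with -t (test mode) it reports the same
# "ignored by filtering" error if an LVM filter excludes the device:
pvcreate -t /dev/sda3

# Look for filter/global_filter reject rules that could exclude sda3:
grep -E '^[[:space:]]*(global_)?filter' /etc/lvm/lvm.conf
```

If a filter turns out to be the culprit, whether it is safe to relax it on oVirt Node is a separate question; the deployment tooling normally manages these devices itself.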

It seems like having gluster bricks on the same disk as the OS doesn't
work at all.

I am going to buy separate OS SSDs.

-Mike

On Tue, May 9, 2017 at 6:22 AM, Mike DePaulo  wrote:
> On Mon, May 8, 2017 at 9:00 AM, knarra  wrote:
>> On 05/07/2017 04:48 PM, Mike DePaulo wrote:
>>>
>>> Hi. I am trying to follow this guide. Is it possible to use part of my
>>> OS disk /dev/sda for the bricks?
>>>
>>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
>>>
>>> I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
>>> requirements. I am guessing I have to create an LV for the OS that
>>> does not take up the entire disk during install, manually create a pv
>>> like /dev/sda3 afterwards, and then run Hosted Engine Setup and
>>> specify /sda3 rather than sdb?
>>>
>>> Thanks,
>>> -Mike
>>
>>
>> Hi Mike,
>>
> >> If you create gluster bricks on the same disk as the OS it works, but we do
> >> not recommend setting up gluster bricks on the same disk as the OS. When a
> >> user tries to create a gluster volume by specifying bricks from the
> >> root partition, it displays the error message "Bricks in root partition not
> >> recommended and use force at the end to create volume".
>>
>> Thanks
>>
>> kasturi
>>
>
> Thank you very much. Is my process for doing this (listed in my
> original email) correct though?
>
> -Mike


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-09 Thread Mike DePaulo
On Mon, May 8, 2017 at 9:00 AM, knarra  wrote:
> On 05/07/2017 04:48 PM, Mike DePaulo wrote:
>>
>> Hi. I am trying to follow this guide. Is it possible to use part of my
>> OS disk /dev/sda for the bricks?
>>
>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/
>>
>> I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
>> requirements. I am guessing I have to create an LV for the OS that
>> does not take up the entire disk during install, manually create a pv
>> like /dev/sda3 afterwards, and then run Hosted Engine Setup and
>> specify /sda3 rather than sdb?
>>
>> Thanks,
>> -Mike
>
>
> Hi Mike,
>
> If you create gluster bricks on the same disk as the OS it works, but we do
> not recommend setting up gluster bricks on the same disk as the OS. When a
> user tries to create a gluster volume by specifying bricks from the
> root partition, it displays the error message "Bricks in root partition not
> recommended and use force at the end to create volume".
>
> Thanks
>
> kasturi
>

Thank you very much. Is my process for doing this (listed in my
original email) correct though?

-Mike


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-08 Thread knarra

On 05/07/2017 04:48 PM, Mike DePaulo wrote:

Hi. I am trying to follow this guide. Is it possible to use part of my
OS disk /dev/sda for the bricks?
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
requirements. I am guessing I have to create an LV for the OS that
does not take up the entire disk during install, manually create a pv
like /dev/sda3 afterwards, and then run Hosted Engine Setup and
specify /sda3 rather than sdb?

Thanks,
-Mike


Hi Mike,

If you create gluster bricks on the same disk as the OS it works, but we
do not recommend setting up gluster bricks on the same disk as the OS.
When a user tries to create a gluster volume by specifying bricks from
the root partition, it displays the error message "Bricks in root
partition not recommended and use force at the end to create volume".
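For reference, the "use force" wording refers to the gluster CLI itself: creating a volume whose brick paths sit on the root partition only proceeds when `force` is appended. A minimal sketch with hypothetical hostnames and brick paths (not from this thread):

```shell
# Without "force", gluster refuses bricks on the root partition with a
# warning; appending it at the end overrides the check:
gluster volume create engine replica 3 \
  host1:/gluster/engine/brick \
  host2:/gluster/engine/brick \
  host3:/gluster/engine/brick \
  force
```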


Thanks

kasturi



Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-13 Thread Sahina Bose
On Wed, Apr 12, 2017 at 11:15 PM, Jamie Lawrence 
wrote:

>
> > On Apr 12, 2017, at 1:31 AM, Evgenia Tokar  wrote:
> >
> > Hi Jamie,
> >
> > Are you trying to setup hosted engine using the "hosted-engine --deploy"
> command, or are you trying to migrate existing he vm?
> >
> > For hosted engine setup you need to provide a clean storage domain,
> which is not a part of your 4.1 setup, this storage domain will be used for
> the hosted engine and will be visible in the UI once the deployment of the
> hosted engine is complete.
> > If your storage domain appears in the UI it means that it is already
> > connected to the storage pool and is not "clean".
>
> Hi Jenny,
>
> Thanks for the response.
>
> I’m using `hosted-engine --deploy`, yes. (Actually, the last few attempts
> have been with an answerfile, but the responses are the same.)
>
> I think I may have been unclear.  I understand that it wants an unmolested
> SD. There just doesn’t seem to be a path to provide that with an
> Ovirt-managed Gluster cluster.
>
> I guess my question is how to provide that with an Ovirt-managed gluster
> installation. Or a different way of asking, I guess, would be how do I make
> Ovirt/VDSM ignore a newly created gluster SD so that `hosted-engine` can
> pick it up? I don’t see any options to tell the Gluster cluster to not
> auto-discover or similar. So as soon as I create it, the non-hosted engine
> picks it up. This happens within seconds - I vainly tried to time it with
> running the installer.
>
> This is why I mentioned dismissing the idea of using another Gluster
> installation, unattached to Ovirt. That’s the only way I could think of to
> give it a clean pool. (I dismissed it because I can’t run this in
> production with that sort of dependency.)
>
> Do I need to take this Gluster cluster out of Ovirt control (delete the
> Gluster cluster from the Ovirt GUI, recreate outside of Ovirt manually),
> install on to that, and then re-associate it in the GUI or something
> similar?
>

The gluster cluster being detected in oVirt does not make it a dirty
storage domain. It looks like the gluster volume was previously used as a
storage domain and was not cleaned up. You can try mounting the gluster
volume and checking whether it has any content.

I'm a bit confused about the setup, though: do you already have an
installation of oVirt engine that you use to manage the gluster hosts? Are
you deploying another engine (HE) that manages the same hosts, or using a
gluster volume from another installation?
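The content check suggested above can be done by mounting the volume directly; a hedged sketch, with a placeholder host and volume name (the thread does not give them):

```shell
mkdir -p /mnt/voltest
mount -t glusterfs ghost1.example.com:/engine /mnt/voltest

# Leftovers such as a storage-domain UUID directory (or oVirt's
# __DIRECT_IO_TEST__ file) mean the volume is not "clean":
ls -la /mnt/voltest

umount /mnt/voltest
```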


> -j
>


Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-12 Thread Jamie Lawrence

> On Apr 12, 2017, at 1:31 AM, Evgenia Tokar  wrote:
> 
> Hi Jamie, 
> 
> Are you trying to setup hosted engine using the "hosted-engine --deploy" 
> command, or are you trying to migrate existing he vm? 
>  
> For hosted engine setup you need to provide a clean storage domain, which is 
> not a part of your 4.1 setup, this storage domain will be used for the hosted 
> engine and will be visible in the UI once the deployment of the hosted engine 
> is complete.
> If your storage domain appears in the UI it means that it is already 
> connected to the storage pool and is not "clean".

Hi Jenny,

Thanks for the response.

I’m using `hosted-engine --deploy`, yes. (Actually, the last few attempts have 
been with an answerfile, but the responses are the same.)

I think I may have been unclear.  I understand that it wants an unmolested SD. 
There just doesn’t seem to be a path to provide that with an Ovirt-managed 
Gluster cluster.

I guess my question is how to provide that with an Ovirt-managed gluster 
installation. Or a different way of asking, I guess, would be how do I make 
Ovirt/VDSM ignore a newly created gluster SD so that `hosted-engine` can pick 
it up? I don’t see any options to tell the Gluster cluster to not auto-discover 
or similar. So as soon as I create it, the non-hosted engine picks it up. This 
happens within seconds - I vainly tried to time it with running the installer.

This is why I mentioned dismissing the idea of using another Gluster 
installation, unattached to Ovirt. That’s the only way I could think of to give 
it a clean pool. (I dismissed it because I can’t run this in production with 
that sort of dependency.)

Do I need to take this Gluster cluster out of Ovirt control (delete the Gluster 
cluster from the Ovirt GUI, recreate outside of Ovirt manually), install on to 
that, and then re-associate it in the GUI or something similar?

-j


Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-12 Thread Evgenia Tokar
Hi Jamie,

Are you trying to setup hosted engine using the "hosted-engine --deploy"
command, or are you trying to migrate existing he vm?

For hosted engine setup you need to provide a clean storage domain, which
is not a part of your 4.1 setup, this storage domain will be used for the
hosted engine and will be visible in the UI once the deployment of the
hosted engine is complete.
If your storage domain appears in the UI it means that it is already
connected to the storage pool and is not "clean".

Thanks,
Jenny

On Wed, Apr 12, 2017 at 2:47 AM, Jamie Lawrence 
wrote:

> Or at least, refusing to mount a dirty pool.
>
> I have 4.1 set up, configured and functional, currently wired up with two
> VM hosts and three Gluster hosts. It is configured with a (temporary) NFS
> data storage domain, with the end-goal being two data domains on Gluster;
> one for the hosted engine, one for other VMs.
>
> The issue is that `hosted-engine` sees any gluster volumes offered as
> dirty. (I have been creating them via the command line  right before
> attempting the hosted-engine migration; there is nothing in them at that
> stage.)  I *think* what is happening is that ovirt-engine notices a newly
> created volume and has its way with the volume (visible in the GUI; the
> volume appears in the list), and the hosted-engine installer becomes upset
> about that. What I don’t know is what to do about it. Relevant log lines
> below. The installer almost sounds like it is asking me to remove the
> UUID-directory and whatnot, but I’m pretty sure that’s just going to leave
> me with two problems instead of fixing the first one. I’ve considered
> attempting to wire this together in the DB, which also seems like a great
> way to break things. I’ve even thought of using a Gluster installation that
> Ovirt knows nothing about, mainly as an experiment to see if it would even
> work, but decided it doesn’t matter, because I can’t deploy in that state
> anyway and it doesn’t actually get me any closer to getting this working.
>
> I noticed several bugs in the tracker seemingly related, but the bulk of
> those were for past versions and I saw nothing that seemed actionable from
> my end in the others.
>
> So, can anyone spare a clue as to what is going wrong, and what to do
> about that?
>
> -j
>
> - - - - ovirt-hosted-engine-setup.log - - - -
>
> 2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:408 connectStorageServer
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:475 {'status': {'message': 'Done',
> 'code': 0}, 'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-
> c610584dea6e'}]}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:502 {'status': {'message': 'Done',
> 'code': 0}, 'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-
> 1fd88b84fe14'}]}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:794 _check_existing_pools
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:795 getConnectedStoragePoolsList
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:797 {'status': {'message': 'Done', 'code':
> 0}}
> 2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage
> storage._misc:956 Creating Storage Domain
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:513 createStorageDomain
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:547 {'status': {'message': 'Done', 'code':
> 0}}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:549 {'status': {'message': 'Done', 'code':
> 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree':
> u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
> 2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage
> storage._misc:959 Creating Storage Pool
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:553 createFakeStorageDomain
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:570 {'status': {'message': 'Done',
> 'code': 0}}
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:572 {'status': {'message': 'Done',
> 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True,
> u'diskfree': u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStoragePool:587 createStoragePool
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStoragePool:627 createStoragePool(args=[
> 

Re: [ovirt-users] hosted-Engine setup: hostname 'node01.example.com' doesn't uniquely match the interface selected for the management bridge

2016-07-14 Thread Yedidyah Bar David
On Tue, Jul 5, 2016 at 5:56 PM, mots  wrote:
> Hello,
>
> I'm trying to install Ovirt 4 on a new set of hosts. During "hosted-engine 
> --deploy" I get the following error: (personal information is replaced with 
> generic placeholders)
>
> [ INFO  ] Stage: Setup validation
> [ ERROR ] Failed to execute stage 'Setup validation': hostname 
> 'node01.example.com' doesn't uniquely match the interface 'ens802f1' selected 
> for the management bridge; it matches also interface with IP 
> set(['192.168.99.10']). Please make sure that the hostname got from the 
> interface for the management network resolves only there.
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file 
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160705144908.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable, 
> please check the issue, fix and redeploy
>   Log file is located at 
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160705144711-tl98lx.log
>
> That IP "192.168.99.10" doesn't resolve to anything, because I haven't added 
> it to the DNS server. It's also not in /etc/hosts.
> It's just the IP for the storage network that doesn't use DNS at all.
>
> From the log:
>
> 2016-07-05 14:49:08 DEBUG otopi.plugins.gr_he_common.network.bridge 
> bridge._get_hostname_from_bridge_if:274 Network info: {'netmask': 
> u'255.255.255.0', 'ipaddr': u'192.168.10.194', 'gateway': u'192.168.10.2'}

Meaning the interface ens802f1 has address 192.168.10.194

> 2016-07-05 14:49:08 DEBUG otopi.plugins.gr_he_common.network.bridge 
> bridge._get_hostname_from_bridge_if:310 hostname: 'node01.example.com', 
> aliaslist: '[]', ipaddrlist: '['192.168.99.10', '192.168.10.194']'

This is the result of:

python -c 'import socket; print(socket.gethostbyaddr("192.168.10.194"));'

> 2016-07-05 14:49:08 DEBUG otopi.context context._executeMethod:142 method 
> exception
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
> _executeMethod
> method['method']()
>   File 
> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/network/bridge.py",
>  line 327, in _get_hostname_from_bridge_if
> o=other_ip,
> RuntimeError: hostname 'node01.example.com' doesn't uniquely match the 
> interface 'ens802f1' selected for the management bridge; it matches also 
> interface with IP set(['192.168.99.10']). Please make sure that the hostname 
> got from the interface for the management network resolves only there.
> 2016-07-05 14:49:08 ERROR otopi.context context._executeMethod:151 Failed to 
> execute stage 'Setup validation': hostname 'node01.example.com' doesn't 
> uniquely match the interface 'ens802f1' selected for the management bridge; 
> it matches also interface with IP set(['192.168.99.10']). Please make sure 
> that the hostname got from the interface for the management network resolves 
> only there.
>
> The output for dig:
>
> [root@node01 ~]# dig node01.example.com
>
> ; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> node01.example.com
> ;; global options: +cmd
> ;; Got answer:
> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45269
> ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
>
> ;; OPT PSEUDOSECTION:
> ; EDNS: version: 0, flags:; udp: 4096
> ;; QUESTION SECTION:
> ;node01.example.com. IN  A
>
> ;; ANSWER SECTION:
> node01.example.com. 3600 IN  A   192.168.10.194
>
> ;; AUTHORITY SECTION:
> example.com   900 IN  NS  dns.example.com.
>
> ;; ADDITIONAL SECTION:
> dns.example.com. 900 IN  A   192.168.10.61
>
> ;; Query time: 3 msec
> ;; SERVER: 192.168.10.61#53(192.168.10.61)
> ;; WHEN: Die Jul 05 15:14:48 CEST 2016
> ;; MSG SIZE  rcvd: 110
>
> Output for nslookup:
>
> [root@node01 ~]# nslookup 192.168.99.10
> Server: 192.168.10.61
> Address:192.168.10.61#53
>
> ** server can't find 10.99.168.192.in-addr.arpa.: NXDOMAIN
>
> Why does the setup script think that my hostname resolves to 192.168.99.10?

Please run the above python command and see for yourself.

Perhaps the system uses sources other than DNS for name resolution.

Check /etc/nsswitch.conf, getent, mDNS, /etc/hosts, etc.
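Those checks can be run, for example, like this (IP and hostname taken from the thread; `getent` follows the nsswitch.conf order, which is what Python's resolver uses, whereas dig/nslookup query DNS only):

```shell
# The installer's reverse lookup: prints (hostname, aliases, addresses):
python -c 'import socket; print(socket.gethostbyaddr("192.168.10.194"))'

# Resolution through nsswitch.conf rather than DNS alone:
getent hosts 192.168.10.194
getent hosts node01.example.com
grep '^hosts:' /etc/nsswitch.conf

# A stray /etc/hosts entry mapping the hostname to 192.168.99.10 would
# explain the extra address:
grep -n 'node01' /etc/hosts
```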

Best,
-- 
Didi


Re: [ovirt-users] Hosted engine setup fails - system unreliable

2016-05-22 Thread Yedidyah Bar David
On Sat, May 21, 2016 at 8:47 AM, Bill Bill  wrote:
> I’ve tried over & over on fresh installs to setup the hosted engine VM
> however, each time, it fails. No clue as to what the problem is as it just
> says “this system is unreliable”.

Please post (a link to) the full logs. The exact failure point/reason can't
be seen in the current snippet; please include at least the full
hosted-engine-setup log, and perhaps also the engine log (from the engine
VM) and the vdsm log (from the host). Thanks.

>
>
>
> ///
>
>
>
> Log output:
>
>
>
> ///
>
>
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/nicUUID=str:'58a28a5e-5d0e-4ac3-835a-a1e9b0df6ae6'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/ovfArchive=unicode:'/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-3.6-20160420.1.el7.centos.ova'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/subst=dict:'{'@CDROM@': '/tmp/tmpTyL8IW/seed.iso', '@SD_UUID@':
> '2455aa81-146f-4a6e-bd6c-c368fa1d10b8', '@CONSOLE_UUID@':
> 'bef503b1-4224-4d82-8acd-8b03d21ae60b', '@NAME@': 'HostedEngine',
> '@BRIDGE@': 'ovirtmgmt', '@CDROM_UUID@':
> '4f64e8ba-5253-4b9c-b1a7-bc550e22f097', '@MEM_SIZE@': 4096, '@NIC_UUID@':
> '58a28a5e-5d0e-4ac3-835a-a1e9b0df6ae6', '@BOOT_CDROM@': '', '@VCPUS@': '4',
> '@CPU_TYPE@': 'SandyBridge', '@VM_UUID@':
> '8cc30bbf-8f4b-4fce-ae4a-ffd476baf2b3', '@EMULATED_MACHINE@': 'pc',
> '@BOOT_PXE@': '', '@IMG_UUID@': '4148fb72-73f1-4d8f-8368-5b6e1ddb4e96',
> '@BOOT_DISK@': ',bootOrder:1', '@CONSOLE_TYPE@': 'vnc', '@MAC_ADDR@':
> '00:16:3e:41:21:db', '@SP_UUID@': '----',
> '@VOL_UUID@': '9c175329-7d1a-4b51-8218-8a2512305684'}'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmBoot=str:'disk'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmCDRom=NoneType:'None'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:41:21:db'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmMemSizeMB=int:'4096'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmUUID=str:'8cc30bbf-8f4b-4fce-ae4a-ffd476baf2b3'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVEHOSTED_VM/vmVCpus=str:'4'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> OVESETUP_CORE/offlinePackager=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/dnfDisabledPlugins=list:'[]'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/dnfExpireCache=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/dnfRollback=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/dnfpackagerEnabled=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/keepAliveInterval=int:'30'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumDisabledPlugins=list:'[]'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumEnabledPlugins=list:'[]'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumExpireCache=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumRollback=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> PACKAGER/yumpackagerEnabled=bool:'False'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/clockMaxGap=int:'5'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/clockSet=bool:'False'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/reboot=bool:'False'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/rebootAllow=bool:'True'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:510 ENV
> SYSTEM/rebootDeferTime=int:'10'
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.dumpEnvironment:514
> ENVIRONMENT DUMP - END
>
> 2016-05-21 01:42:42 DEBUG otopi.context context._executeMethod:142 Stage
> pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
>
> 2016-05-21 01:42:42 DEBUG otopi.context context._executeMethod:148 condition
> False
>
> 2016-05-21 01:42:42 INFO otopi.context context.runSequence:427 Stage:
> Termination
>
> 2016-05-21 01:42:42 DEBUG otopi.context context.runSequence:431 STAGE
> terminate
>
> 2016-05-21 01:42:42 

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-03 Thread Langley, Robert
Outstanding! Thank you Sahina! Your assistance is much appreciated.
When I saw your email, it reminded me that the versions were different and 
that I was having an issue reaching the internet from the engine storage 
servers: a DNS issue between my private DNS server and my organization's DNS 
server. I worked around that and found that I also had to install the oVirt 
release36 rpm repository there. My host server already had GlusterFS 3.7 
(since it had the oVirt release36 rpm repository), while my engine storage 
would otherwise only upgrade to the latest 3.6 GlusterFS version. All nodes 
are now at GlusterFS 3.7.11 and working. I got past the storage 
configuration.

From: Sahina Bose [mailto:sab...@redhat.com]
Sent: Monday, May 2, 2016 11:24 PM
To: Langley, Robert <robert.lang...@ventura.org>; users@ovirt.org
Subject: Re: [ovirt-users] hosted-engine setup Gluster fails to execute

The command that failed to execute from your hosted-engine node is: "gluster 
volume info engine-vol --remote-host gsave0.engine.local"

Can you check the glusterfs versions on the hosted-engine node and the 
gsave0.engine.local node - are they the same?
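One way to compare the versions, and to rerun by hand the remote query that hosted-engine-setup executes (the gluster command is taken verbatim from the quoted setup log; the package names assume RPM-based hosts):

```shell
# Run on both the hosted-engine node and gsave0.engine.local:
glusterfs --version | head -1
rpm -q glusterfs glusterfs-cli glusterfs-fuse

# The query the installer runs; a non-zero exit status here reproduces
# the setup failure:
gluster --mode=script --xml volume info engine-vol --remote-host=gsave0.engine.local
```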
On 05/02/2016 11:08 PM, Langley, Robert wrote:
Hi Sahina,

Thank you for your response. Let me know if you'll need any of the log before 
the Storage Configuration section. I looked at this earlier and I was wondering 
why, after choosing to use GlusterFS, there is still reference to NFS (nfs.py)? 
I do believe NFS is disabled in my Gluster config for the engine cluster. 
-Robert

2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND --== STORAGE CONFIGURATION 
==--
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND
2016-05-02 09:16:53 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND During customization use 
CTRL-D to abort.
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1100 _check_existing_pools
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1101 getConnectedStoragePoolsList
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1103 {'status': {'message': 'OK', 'code': 0}, 
'poollist': []}
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the storage 
you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVEglusterfs
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT 
DUMP - BEGIN
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_STORAGE/domainType=str:'glusterfs'
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT 
DUMP - END
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:148 condition 
False
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization
2016-05-02 09:16:59 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
nfs._customization:360 Please note that Replica 3 support is required for the 
shared storage.
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the full 
shared storage connection path to use (example: host:/path):
2016-05-02 09:17:22 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVEgsave0.engine.local:/engine-vol
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:828 execute: ('/sbin/gluster', '--mode=script', '--xml', 
'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), 
executable='None', cwd='None', env=None
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:878 execute-result: ('/sbin/gluster', '--mode=script', 
'--xml', 'volume', 'info', 'engine-vol', '--re

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-03 Thread Langley, Robert
Thank you Simone!

-Original Message-
From: Simone Tiraboschi [mailto:stira...@redhat.com] 
Sent: Tuesday, May 3, 2016 12:34 AM
To: Langley, Robert <robert.lang...@ventura.org>
Cc: users@ovirt.org
Subject: Re: [ovirt-users] hosted-engine setup Gluster fails to execute

On Mon, May 2, 2016 at 7:38 PM, Langley, Robert <robert.lang...@ventura.org> 
wrote:
> Hi Sahina,
>
>
>
> Thank you for your response. Let me know if you’ll need any of the log 
> before the Storage Configuration section. I looked at this earlier and 
> I was wondering why, after choosing to use GlusterFS, there is still 
> reference to NFS (nfs.py)?

It's just an implementation detail of hosted-engine-setup:
iSCSI and FC commands are defined in blockd.py, while commands for file-based 
storage domains (NFS and Gluster) are defined in nfs.py.

> I do believe NFS is disabled in my Gluster config for the engine 
> cluster. -Robert
>
>
>
> [duplicate setup log trimmed; see the original message later in this digest]

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-03 Thread Simone Tiraboschi
On Mon, May 2, 2016 at 9:15 PM, Gianluca Cecchi
 wrote:
>
>
> On Mon, May 2, 2016 at 8:39 PM, Gianluca Cecchi 
> wrote:
>>
>> On Mon, May 2, 2016 at 11:14 AM, Simone Tiraboschi 
>> wrote:
>>> [older nested quotes trimmed; the exchange appears in full later in this digest]
>>
>> On the only existing hypervisor, just after booting and exiting global
>> maintenance, causing hosted engine to start, I have
>>
>> [root@ovirt01 ~]# uptime
>>  20:34:17 up 6 min,  1 user,  load average: 0.23, 0.20, 0.11
>>
>> [root@ovirt01 ~]# cat /proc/sys/kernel/random/entropy_avail
>> 3084
>>
>> BTW on the self hosted engine VM:
>> [root@ovirt ~]# uptime
>>  18:35:33 up 4 min,  1 user,  load average: 0.06, 0.25, 0.13
>>
>> [root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
>> 14
>>
>> On the hypervisor:
>> [root@ovirt01 ~]# ps -ef | grep [q]emu | grep virtio-rng
>> [root@ovirt01 ~]#
>>
>> On engine VM:
>> [root@ovirt ~]# ll /dev/hwrng
>> ls: cannot access /dev/hwrng: No such file or directory
>> [root@ovirt ~]#
>>
>> [root@ovirt ~]# lsmod | grep virtio_rng
>> [root@ovirt ~]#
>>
>> May I change anything so that engine VM has virtio-rng enabled?
>>
>> Gianluca
>>
>>
>
> I verified a very slow login time in webadmin after the welcome page, with my
> configuration that is for now based on /etc/hosts.
> After reading a previous post, and seeing only 114 available entropy in the
> hosted engine VM after about 30 minutes, I did this in the engine VM:

Thanks for your report Gianluca,
adding virtio-rng or adding the haveged daemon to the appliance is indeed
a good idea: could you please file an RFE on bugzilla for that?

> yum install haveged
> systemctl enable haveged
>
> put host in global maintenance
> shutdown engine VM
> exit from maintenance
>
> engine VM starts and immediately I have:
>
> [root@ovirt ~]# uptime
>  19:05:10 up 0 min,  1 user,  load average: 0.68, 0.20, 0.07
>
> [root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
> 1369
>
> And login to the web admin page is now almost immediate
>
> Inside the thread I read:
> http://lists.ovirt.org/pipermail/users/2016-April/038805.html
>
> it wasn't clear whether I can edit the engine VM in webadmin (or by other
> means) and enable the random generator option, or whether the haveged
> approach is the way to go in the self hosted engine case.
> Is there a list of what I can change (if anything) and what not for the
> engine VM?
> For example, I would like to change the time zone, which is GMT now
> (inherited from the OVF of the appliance, I think?)
>
> Thanks,
> Gianluca
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-03 Thread Simone Tiraboschi
On Mon, May 2, 2016 at 7:38 PM, Langley, Robert
 wrote:
> Hi Sahina,
>
>
>
> Thank you for your response. Let me know if you’ll need any of the log
> before the Storage Configuration section. I looked at this earlier and I was
> wondering why, after choosing to use GlusterFS, there is still reference to
> NFS (nfs.py)?

It's just an implementation detail of hosted-engine-setup:
iSCSI and FC commands are defined in blockd.py, while commands for file
based storage domains (NFS and gluster) are defined in nfs.py.

> I do believe NFS is disabled in my Gluster config for the
> engine cluster. -Robert
>
> [duplicate setup log trimmed; see the original message later in this digest]

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-03 Thread Sahina Bose
The command that failed to execute from your hosted-engine node was: "gluster 
volume info engine-vol --remote-host gsave0.engine.local"


Can you check the glusterfs versions on the hosted-engine node and 
gsave0.engine.local node - are they the same?
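A quick, hedged way to act on that suggestion is to print the gluster CLI version on each node and compare the output (the hostname gsave0.engine.local comes from this thread; `gluster --version` is the standard CLI flag):

```shell
#!/bin/sh
# Print the local glusterfs version; run this on both the hosted-engine
# node and on gsave0.engine.local, then compare the two outputs.
# (rc=2 from the remote "volume info" call is consistent with a
# client/server version mismatch, though not proof of one.)
if command -v gluster >/dev/null 2>&1; then
    gluster --version | head -n1
else
    echo "gluster CLI not installed on this host"
fi
```

If the versions differ, aligning them before re-running hosted-engine-setup would be the first thing to try.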


On 05/02/2016 11:08 PM, Langley, Robert wrote:


Hi Sahina,

Thank you for your response. Let me know if you’ll need any of the log 
before the Storage Configuration section. I looked at this earlier and 
I was wondering why, after choosing to use GlusterFS, there is still 
reference to NFS (nfs.py)? I do believe NFS is disabled in my Gluster 
config for the engine cluster. -Robert


[duplicate setup log trimmed; see the original message later in this digest]

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-02 Thread Langley, Robert
Correction: I verified that on the Gluster volume "engine-vol", nfs.disable is 
off. Not sure whether that is significant.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Gianluca Cecchi
On Mon, May 2, 2016 at 8:39 PM, Gianluca Cecchi 
wrote:

> On Mon, May 2, 2016 at 11:14 AM, Simone Tiraboschi 
> wrote:
>
>>
>> >>>
>> >>> Can you please check the entropy value on your host?
>> >>>  cat /proc/sys/kernel/random/entropy_avail
>> >>>
>> >>
>> >> I have not at hand now the server. I'll check soon and report
>> >> Do you mean entropy of the physical server that will operate as
>> hypervisor?
>>
>> On the hypervisor
>>
>> > That's a good question. Simone - do you know if we start the guest with
>> > virtio-rng?
>>
>> AFAIK we are not.
>>
>>
> On the only existing hypervisor, just after booting and exiting global
> maintenance, causing hosted engine to start, I have
>
> [root@ovirt01 ~]# uptime
>  20:34:17 up 6 min,  1 user,  load average: 0.23, 0.20, 0.11
>
> [root@ovirt01 ~]# cat /proc/sys/kernel/random/entropy_avail
> 3084
>
> BTW on the self hosted engine VM:
> [root@ovirt ~]# uptime
>  18:35:33 up 4 min,  1 user,  load average: 0.06, 0.25, 0.13
>
> [root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
> 14
>
> On the hypervisor:
> [root@ovirt01 ~]# ps -ef | grep [q]emu | grep virtio-rng
> [root@ovirt01 ~]#
>
> On engine VM:
> [root@ovirt ~]# ll /dev/hwrng
> ls: cannot access /dev/hwrng: No such file or directory
> [root@ovirt ~]#
>
> [root@ovirt ~]# lsmod | grep virtio_rng
> [root@ovirt ~]#
>
> May I change anything so that engine VM has virtio-rng enabled?
>
> Gianluca
>
>
>
I verified a very slow login time in webadmin after the welcome page, with my
configuration that is for now based on /etc/hosts.
After reading a previous post, and seeing only 114 available entropy in the
hosted engine VM after about 30 minutes, I did this in the engine VM:

yum install haveged
systemctl enable haveged

put host in global maintenance
shutdown engine VM
exit from maintenance

engine VM starts and immediately I have:

[root@ovirt ~]# uptime
 19:05:10 up 0 min,  1 user,  load average: 0.68, 0.20, 0.07

[root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
1369

And login to the web admin page is now almost immediate
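For readers who hit the same symptom, the check and workaround reported above can be sketched as a single script. The threshold of 1000 is an illustrative cut-off chosen for this sketch, not an oVirt-documented value:

```shell
#!/bin/sh
# Warn when available kernel entropy is low enough to stall
# crypto-heavy operations (like the slow webadmin login seen here).
avail=$(cat /proc/sys/kernel/random/entropy_avail)
echo "entropy_avail=$avail"
if [ "$avail" -lt 1000 ]; then
    echo "low entropy: consider 'yum install haveged; systemctl enable haveged'"
fi
```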

Inside the thread I read:
http://lists.ovirt.org/pipermail/users/2016-April/038805.html

it wasn't clear whether I can edit the engine VM in webadmin (or by other
means) and enable the random generator option, or whether the haveged
approach is the way to go in the self hosted engine case.
Is there a list of what I can change (if anything) and what not for the
engine VM?
For example, I would like to change the time zone, which is GMT now
(inherited from the OVF of the appliance, I think?)

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Gianluca Cecchi
On Mon, May 2, 2016 at 11:14 AM, Simone Tiraboschi 
wrote:

>
> >>>
> >>> Can you please check the entropy value on your host?
> >>>  cat /proc/sys/kernel/random/entropy_avail
> >>>
> >>
> >> I have not at hand now the server. I'll check soon and report
> >> Do you mean entropy of the physical server that will operate as
> hypervisor?
>
> On the hypervisor
>
> > That's a good question. Simone - do you know if we start the guest with
> > virtio-rng?
>
> AFAIK we are not.
>
>
On the only existing hypervisor, just after booting and exiting global
maintenance, causing hosted engine to start, I have

[root@ovirt01 ~]# uptime
 20:34:17 up 6 min,  1 user,  load average: 0.23, 0.20, 0.11

[root@ovirt01 ~]# cat /proc/sys/kernel/random/entropy_avail
3084

BTW on the self hosted engine VM:
[root@ovirt ~]# uptime
 18:35:33 up 4 min,  1 user,  load average: 0.06, 0.25, 0.13

[root@ovirt ~]# cat /proc/sys/kernel/random/entropy_avail
14

On the hypervisor:
[root@ovirt01 ~]# ps -ef | grep [q]emu | grep virtio-rng
[root@ovirt01 ~]#

On engine VM:
[root@ovirt ~]# ll /dev/hwrng
ls: cannot access /dev/hwrng: No such file or directory
[root@ovirt ~]#

[root@ovirt ~]# lsmod | grep virtio_rng
[root@ovirt ~]#

May I change anything so that engine VM has virtio-rng enabled?

Gianluca
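As a hedged follow-up to the virtio-rng question, the individual checks run above can be bundled into one guest-side probe. /dev/hwrng and the virtio_rng module are the standard guest-side indicators of a virtio-rng device; whether hosted-engine-setup exposes a toggle for it is exactly the open question in this thread:

```shell
#!/bin/sh
# Probe whether a virtio-rng device is visible inside this guest.
if [ -e /dev/hwrng ]; then
    echo "hwrng device present"
else
    echo "no /dev/hwrng: virtio-rng not exposed to this guest"
fi
if lsmod 2>/dev/null | grep -q virtio_rng; then
    echo "virtio_rng module loaded"
else
    echo "virtio_rng module not loaded"
fi
```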
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-02 Thread Langley, Robert
Hi Sahina,

Thank you for your response. Let me know if you'll need any of the log before 
the Storage Configuration section. I looked at this earlier and I was wondering 
why, after choosing to use GlusterFS, there is still reference to NFS (nfs.py)? 
I do believe NFS is disabled in my Gluster config for the engine cluster. 
-Robert

2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND --== STORAGE CONFIGURATION 
==--
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND
2016-05-02 09:16:53 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._early_customization
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND During customization use 
CTRL-D to abort.
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1100 _check_existing_pools
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1101 getConnectedStoragePoolsList
2016-05-02 09:16:53 DEBUG 
otopi.plugins.ovirt_hosted_engine_setup.storage.storage 
storage._check_existing_pools:1103 {'status': {'message': 'OK', 'code': 0}, 
'poollist': []}
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_TYPE
2016-05-02 09:16:53 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the storage 
you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVE glusterfs
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:500 ENVIRONMENT 
DUMP - BEGIN
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_STORAGE/domainType=str:'glusterfs'
2016-05-02 09:16:59 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT 
DUMP - END
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.glusterfs.Plugin._brick_customization
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:148 condition 
False
2016-05-02 09:16:59 DEBUG otopi.context context._executeMethod:142 Stage 
customization METHOD 
otopi.plugins.ovirt_hosted_engine_setup.storage.nfs.Plugin._customization
2016-05-02 09:16:59 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
nfs._customization:360 Please note that Replica 3 support is required for the 
shared storage.
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
human.queryString:156 query OVEHOSTED_STORAGE_DOMAIN_CONNECTION
2016-05-02 09:16:59 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Please specify the full 
shared storage connection path to use (example: host:/path):
2016-05-02 09:17:22 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:RECEIVE gsave0.engine.local:/engine-vol
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:828 execute: ('/sbin/gluster', '--mode=script', '--xml', 
'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), 
executable='None', cwd='None', env=None
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.executeRaw:878 execute-result: ('/sbin/gluster', '--mode=script', 
'--xml', 'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local'), 
rc=2
2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.execute:936 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 
'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local') stdout:


2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
plugin.execute:941 execute-output: ('/sbin/gluster', '--mode=script', '--xml', 
'volume', 'info', 'engine-vol', '--remote-host=gsave0.engine.local') stderr:


2016-05-02 09:17:22 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.nfs 
nfs._customization:395 exception
Traceback (most recent call last):
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py",
 line 390, in _customization
check_space=False,
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py",
 line 302, in _validateDomain
self._check_volume_properties(connection)
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/nfs.py",
 line 179, in _check_volume_properties
raiseOnError=True
  File 

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Simone Tiraboschi
On Mon, May 2, 2016 at 11:06 AM, Yedidyah Bar David  wrote:
> On Mon, May 2, 2016 at 11:48 AM, Gianluca Cecchi
>  wrote:
>> On Mon, May 2, 2016 at 9:58 AM, Simone Tiraboschi wrote:
>>>
>>>
>>>
>>> hosted-engine-setup creates a fresh VM and injects a cloud-init script
>>> to configure it and run engine-setup there to configure the engine
>>> as needed.
>>> Since engine-setup is running on the engine VM triggered by
>>> cloud-init, hosted-engine-setup has no way to really control its
>>> process status, so we simply gather its output with a timeout of 10
>>> minutes between each single output line.
>>> If nothing happens within 10 minutes (the value is easily
>>> customizable), hosted-engine-setup thinks that engine-setup is stuck.
>>
>>
>>
>> How can one customize the pre-set timeout?

To set it to 20 minutes you can pass this:
OVEHOSTED_ENGINE/engineSetupTimeout=int:1200
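As a hedged illustration of how that key can be supplied (the answer-file path below is an example, not a required location; the key itself is the one Simone gives, and hosted-engine consumes such files via --config-append, as shown later in the thread):

```shell
#!/bin/sh
# Append the 20-minute (1200 s) engine-setup timeout override to an
# otopi answer file. ANSWERS defaults to an example path; adjust it.
ANSWERS="${ANSWERS:-/root/he-answers.conf}"
cat >> "$ANSWERS" <<'EOF'
OVEHOSTED_ENGINE/engineSetupTimeout=int:1200
EOF
echo "timeout override appended to $ANSWERS"
# later: hosted-engine --deploy --config-append="$ANSWERS"
```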


>> Could it be better to ask the user at the end of timeout if he/she wants to
>> wait again, instead of directly fail?
>
> Perhaps, can you please open a bz?

+1

>>> So the issue we have to understood is why this simple command took
>>> more than 10 minutes in your env:
>>> 2016-04-30 17:56:57 DEBUG
>>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
>>> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
>>> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
>>> 'password-reset', 'admin', '--password=env:pass', '--force',
>>> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
>>> cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
>>> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
>>> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
>>> 'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
>>> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
>>>
>>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
>>> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
>>> 'OTOPI_EXECDIR': '/'}
>>
>>
>>
>>
>> It seemed quite strange to me too (see below further info on this)
>>
>>>
>>> Can you please check the entropy value on your host?
>>>  cat /proc/sys/kernel/random/entropy_avail
>>>
>>
>> I have not at hand now the server. I'll check soon and report
>> Do you mean entropy of the physical server that will operate as hypervisor?

On the hypervisor

> That's a good question. Simone - do you know if we start the guest with
> virtio-rng?

AFAIK we are not.

> This is another case of [1], perhaps we should reopen it.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1319827
>
>>
>>
>>>
>>> > As a last question how to clean up things in case I have to start from
>>> > scratch.
>>>
>>> I'd recommend to redeploy from scratch instead of trying fixing it
>>> but, before that, we need to understand the root issue.
>>>
>>
>> So, trying to restart the setup with the generated answer file I got:
>> 1) if the VM was still powered on, an error about this condition
>> 2) if the VM was powered down, an error about the storage domain already
>> being in place and restart not being supported in this condition.
>>
>> I was able to continue with these steps:
>>
>> a) remove what inside the partially setup self hosted engine storage domain
>> rm -rf /SHE_DOMAIN/*
>> cd SHE_DOMAIN
>> mklost+found
>>
>> b) reboot the hypervisor
>>
>> c) stop vdsmd
>>
>> d) start the setup again with the answer file
>> It seems all went well, and this time, strangely, the step that previously
>> took more than 10 minutes lasted less than 2 seconds.
>>
>> I was then able to deploy storage and iso domains without problems and self
>> hosted engine domain correctly detected and imported too.
>> Created two CentOS VMs without problems (6.7 and 7.2).
>>
>> See below the full output of deploy command
>>
>>
>> [root@ovirt01 ~]# hosted-engine --deploy
>> --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf
>> [ INFO  ] Stage: Initializing
>> [ INFO  ] Generating a temporary VNC password.
>> [ INFO  ] Stage: Environment setup
>>   Configuration files:
>> ['/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf']
>>   Log file:
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160501014326-8frbxk.log
>>   Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
>> [ INFO  ] Hardware supports virtualization
>> [ INFO  ] Bridge ovirtmgmt already created
>> [ INFO  ] Stage: Environment packages setup
>> [ INFO  ] Stage: Programs detection
>> [ INFO  ] Stage: Environment setup
>> [ INFO  ] Stage: Environment customization
>>
>>   --== STORAGE CONFIGURATION ==--
>>
>>   During customization use CTRL-D to abort.
>> [ INFO  ] Installing on first host
>>
>>   --== SYSTEM CONFIGURATION ==--
>>
>>
>>   --== NETWORK CONFIGURATION ==--
>>
>>
>>   --== VM CONFIGURATION ==--
>>
>> [ INFO  ] Checking OVF archive content (could take a few minutes depending
>> on 

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Yedidyah Bar David
On Mon, May 2, 2016 at 11:48 AM, Gianluca Cecchi
 wrote:
> On Mon, May 2, 2016 at 9:58 AM, Simone Tiraboschi wrote:
>>
>>
>>
>> hosted-engine-setup creates a fresh VM and injects a cloud-init script
>> to configure it and run engine-setup there to configure the engine
>> as needed.
>> Since engine-setup is running on the engine VM triggered by
>> cloud-init, hosted-engine-setup has no way to really control its
>> process status, so we simply gather its output with a timeout of 10
>> minutes between each single output line.
>> If nothing happens within 10 minutes (the value is easily
>> customizable), hosted-engine-setup thinks that engine-setup is stuck.
>
>
>
> How can one customize the pre-set timeout?
> Could it be better to ask the user at the end of timeout if he/she wants to
> wait again, instead of directly fail?

Perhaps, can you please open a bz?

>
>
>>
>> So the issue we have to understood is why this simple command took
>> more than 10 minutes in your env:
>> 2016-04-30 17:56:57 DEBUG
>> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
>> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
>> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
>> 'password-reset', 'admin', '--password=env:pass', '--force',
>> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
>> cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
>> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
>> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
>> 'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
>> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
>>
>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
>> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
>> 'OTOPI_EXECDIR': '/'}
>
>
>
>
> It seemed quite strange to me too (see below further info on this)
>
>>
>> Can you please check the entropy value on your host?
>>  cat /proc/sys/kernel/random/entropy_avail
>>
>
> I have not at hand now the server. I'll check soon and report
> Do you mean entropy of the physical server that will operate as hypervisor?

That's a good question. Simone - do you know if we start the guest with
virtio-rng?

This is another case of [1], perhaps we should reopen it.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1319827

>
>
>>
>> > As a last question how to clean up things in case I have to start from
>> > scratch.
>>
>> I'd recommend redeploying from scratch instead of trying to fix it,
>> but before that we need to understand the root issue.
>>
>
> So, trying to restart the setup with the generated answer file, I got:
> 1) if the VM is still powered on, an error about this condition
> 2) if the VM is powered down, an error about the storage domain already being
> in place and a restart not being supported in this condition.
>
> I was able to continue with these steps:
>
> a) remove what is inside the partially set up self hosted engine storage domain
> rm -rf /SHE_DOMAIN/*
> cd /SHE_DOMAIN
> mklost+found
>
> b) reboot the hypervisor
>
> c) stop vdsmd
>
> d) start the setup again with the answer file
> It seems all went well, and this time, strangely, the step that previously took
> more than 10 minutes lasted less than 2 seconds.
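The recovery steps a) to d) can be condensed into a small script. This is a sketch under stated assumptions: SHE_DOMAIN is a hypothetical placeholder for your actual storage-domain mount point, and the reboot between wiping and re-deploying still has to happen by hand.

```shell
# Steps a) and c) above, parameterized. SHE_DOMAIN is a hypothetical
# placeholder path: point it at the hosted-engine storage mount.
SHE_DOMAIN=${SHE_DOMAIN:-/path/to/she_domain}

systemctl stop vdsmd 2>/dev/null || true   # step c (no-op off an oVirt host)
rm -rf "${SHE_DOMAIN:?}"/*                 # step a: ${VAR:?} aborts if unset/empty
(cd "$SHE_DOMAIN" && mklost+found) 2>/dev/null || true
# step b: reboot the hypervisor, then step d: re-run the setup with the
# generated answer file:
#   hosted-engine --deploy \
#     --config-append=/var/lib/ovirt-hosted-engine-setup/answers/<answer-file>.conf
```

The `${SHE_DOMAIN:?}` guard is the important detail: it makes the destructive `rm -rf` abort instead of expanding to `/*` if the variable is ever unset.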
>
> I was then able to deploy storage and iso domains without problems and self
> hosted engine domain correctly detected and imported too.
> Created two CentOS VMs without problems (6.7 and 7.2).
>
> See below the full output of deploy command
>
>
> [root@ovirt01 ~]# hosted-engine --deploy
> --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Configuration files:
> ['/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf']
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160501014326-8frbxk.log
>   Version: otopi-1.4.1 (otopi-1.4.1-1.el7.centos)
> [ INFO  ] Hardware supports virtualization
> [ INFO  ] Bridge ovirtmgmt already created
> [ INFO  ] Stage: Environment packages setup
> [ INFO  ] Stage: Programs detection
> [ INFO  ] Stage: Environment setup
> [ INFO  ] Stage: Environment customization
>
>   --== STORAGE CONFIGURATION ==--
>
>   During customization use CTRL-D to abort.
> [ INFO  ] Installing on first host
>
>   --== SYSTEM CONFIGURATION ==--
>
>
>   --== NETWORK CONFIGURATION ==--
>
>
>   --== VM CONFIGURATION ==--
>
> [ INFO  ] Checking OVF archive content (could take a few minutes depending
> on archive size)
> [ INFO  ] Checking OVF XML content (could take a few minutes depending on
> archive size)
> [WARNING] OVF does not contain a valid image description, using default.
>   Enter root password that will be used for the engine appliance
> (leave it empty to skip):
>   Confirm appliance root password:
> 

Re: [ovirt-users] hosted engine setup failed for 10 minutes delay.. engine seems alive

2016-05-02 Thread Simone Tiraboschi
On Sat, Apr 30, 2016 at 10:59 PM, Gianluca Cecchi
 wrote:
> Hello,
> trying to deploy a self hosted engine on an Intel NUC6i5SYB with CentOS 7.2
> using oVirt 3.6.5 and appliance (picked up rpm is
> ovirt-engine-appliance-3.6-20160420.1.el7.centos.noarch)
>
> Near the end of the command
> hosted-engine --deploy
>
> I get
> ...
>   |- [ INFO  ] Initializing PostgreSQL
>   |- [ INFO  ] Creating PostgreSQL 'engine' database
>   |- [ INFO  ] Configuring PostgreSQL
>   |- [ INFO  ] Creating/refreshing Engine database schema
>   |- [ INFO  ] Creating/refreshing Engine 'internal' domain database
> schema
> [ ERROR ] Engine setup got stuck on the appliance
> [ ERROR ] Failed to execute stage 'Closing up': Engine setup is stalled on
> the appliance since 600 seconds ago. Please check its log on the appliance.
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160430200654.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue, fix and redeploy
>
> In the host log I indeed see the 10-minute timeout:
>
> 2016-04-30 19:56:52 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:219 DIALOG:SEND |- [ INFO  ]
> Creating/refreshing Engine 'internal' domain database schema
> 2016-04-30 20:06:53 ERROR
> otopi.plugins.ovirt_hosted_engine_setup.engine.health health._closeup:140
> Engine setup got stuck on the appliance
>
> On the engine I don't see any particular problem, only a ten-minute delay in
> its log:
>
> 2016-04-30 17:56:57 DEBUG otopi.context context.dumpEnvironment:514
> ENVIRONMENT DUMP - END
> 2016-04-30 17:56:57 DEBUG otopi.context context._executeMethod:142 Stage
> misc METHOD
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc.Plugin._setupAdminPassword
> 2016-04-30 17:56:57 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z'), executable='None', cwd='None',
> env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
> '/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
> 'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/', 'OVIRT_ENGINE_JAVA_HOME':
> u'/usr/lib/jvm/jre', 'PATH':
> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
> 'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly', 'OTOPI_EXECDIR': '/'}
> 2016-04-30 18:07:06 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.executeRaw:878 execute-result: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z'), rc=0
>
> and its last lines are:
>
> 2016-04-30 18:07:06 DEBUG
> otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
> plugin.execute:936 execute-output: ('/usr/bin/ovirt-aaa-jdbc-tool',
> '--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
> 'password-reset', 'admin', '--password=env:pass', '--force',
> '--password-valid-to=2216-03-13 17:56:57Z') stdout:
> updating user admin...
> user updated successfully

hosted-engine-setup creates a fresh VM and injects a cloud-init script
to configure it and run engine-setup there, which configures the engine
as needed.
Since engine-setup runs on the engine VM, triggered by cloud-init,
hosted-engine-setup has no way to really control its process status, so
we simply gather its output with a timeout of 10 minutes between each
single output line.
If nothing happens within 10 minutes (the value is easily
customizable), hosted-engine-setup assumes that engine-setup is stuck.
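The per-line watchdog described here can be sketched in a few lines of shell. Assumptions: bash, a 2-second demo timeout in place of the real 600 seconds, and `watch_output` as a hypothetical helper, not part of hosted-engine-setup:

```shell
# Sketch of the watchdog: consume a child's output line by line and
# treat a silence longer than the per-line timeout as a stall.
# hosted-engine-setup waits 600 s between lines; 2 s here for the demo.
per_line_timeout=2

watch_output() {
    while :; do
        if IFS= read -r -t "$per_line_timeout" line; then
            echo "got: $line"
        else
            rc=$?
            # bash's read returns >128 when -t expires, 1 on plain EOF
            if [ "$rc" -gt 128 ]; then
                echo "stalled: no output for ${per_line_timeout}s" >&2
                return 1
            fi
            return 0
        fi
    done
}

{ echo one; echo two; } | watch_output            # completes normally
{ echo one; sleep 3; } | watch_output || echo "stall detected"
```

The key point is that `read -t` distinguishes "producer is slow" (exit status above 128) from "producer finished" (plain EOF), which is exactly the decision hosted-engine-setup has to make.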

So the issue we have to understand is why this simple command took
more than 10 minutes in your env:
2016-04-30 17:56:57 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.config.aaajdbc
plugin.executeRaw:828 execute: ('/usr/bin/ovirt-aaa-jdbc-tool',
'--db-config=/etc/ovirt-engine/aaa/internal.properties', 'user',
'password-reset', 'admin', '--password=env:pass', '--force',
'--password-valid-to=2216-03-13 17:56:57Z'), executable='None',
cwd='None', env={'LANG': 'en_US.UTF-8', 'SHLVL': '1', 'PYTHONPATH':
'/usr/share/ovirt-engine/setup/bin/..::', 'pass': '**FILTERED**',
'OVIRT_ENGINE_JAVA_HOME_FORCE': '1', 'PWD': '/',
'OVIRT_ENGINE_JAVA_HOME': u'/usr/lib/jvm/jre', 'PATH':
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin', 'OTOPI_LOGFILE':
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20160430175551-dttt2p.log',
'OVIRT_JBOSS_HOME': '/usr/share/ovirt-engine-wildfly',
'OTOPI_EXECDIR': '/'}

Can you please check 

Re: [ovirt-users] hosted-engine setup Gluster fails to execute

2016-05-01 Thread Sahina Bose
You will need to provide the hosted-engine setup log to see which 
gluster command failed to execute.


On 04/30/2016 10:10 PM, Langley, Robert wrote:


I’m attempting to host the engine within a GlusterFS Replica 3 storage 
volume.


During setup, after entering the server and volume, I’m receiving the 
message that ‘/sbin/gluster’ failed to execute.


Reviewing the gluster cmd log, it looks as though /sbin/gluster does 
execute.


I can successfully mount the volume on the host outside of the 
hosted-engine setup.


Any assistance would be appreciated.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




Re: [ovirt-users] [hosted-engine] Setup broke host's network

2016-03-19 Thread Simone Tiraboschi
On Thu, Mar 17, 2016 at 11:34 AM, Wee Sritippho  wrote:

> Hi,
>
> I setup the host's network while installing CentOS 7 (GUI), so the network
> configuration is like this:
>
> eno1 --> bond0_slave1 --\
>  |--> bond0
> eno2 --> bond0_slave2 --/
>
> After I disabled NetworkManager and ran 'hosted-engine --deploy', the
> setup stuck at this line:
>
> [ INFO  ] Configuring the management bridge
>
> Then the ssh connection is lost. I accessed the console and found this
> line after the line above:
>
> [ ERROR ] Failed to execute stage 'Misc configuration': Connection to
> storage server failed
>

Hi Wee,
from the log it seems that the network configuration worked as expected.
Your issue was different: 'Connection to storage server failed' simply
means that your host lost its connection with the storage server in the
middle of the deployment.

From the logs I saw that you tried to deploy on GlusterFS using the same host
where you are deploying hosted-engine also as a gluster server: this setup is
called hyper-converged, but it's currently not supported; please wait for
the next major release to deploy in this scenario.

In the meantime you can deploy hosted-engine by pointing it at a gluster
volume on other external hosts, or with another storage type.



>
> And the network is kind of broken. I had to 1. delete the MASTER=bond0 and
> SLAVE=yes lines in the ifcfg-eno{1,2} config files, 2. re-configure ifcfg-bond0 to
> get a static IP, 3. turn off and delete the ovirtmgmt bridge, and 4. restart the
> network in order to make it live again.
>
> Did this network configuration really break the setup, or was it something
> else? If the network configuration is the cause, how can I proceed to
> install oVirt hosted-engine?
>
> I attached the answer file, installation log, vdsm.log and supervdsm.log
> with this email.
>
> Environment:
> - CentOS Linux release 7.2.1511 (Core)
> - ovirt-release36-003-1.noarch
> - ovirt-hosted-engine-setup-1.3.3.4-1.el7.centos.noarch
> - vdsm-4.17.23-1.el7.noarch
>
> Thank you,
> Wee
>
>
>
>
>


Re: [ovirt-users] Hosted Engine setup - got " Failed to start service 'ovirt-ha-agent' "

2015-12-18 Thread Will Dennis
I yum updated my hosts, and it did update ovirt-hosted-engine-ha on all of them to 
1.3.3.5 (two of my hosts, including the one I did the engine install on, were 
previously on 1.3.3.4, and the third was on 1.3.3.3 for some reason).
Shortly thereafter, I began getting ovirt-hosted-engine state machine emails, 
and when I checked the state of the ovirt-ha-[agent,broker] services, they were 
running. When I got the email saying “EngineStarting-EngineUp”, I checked the 
web UI, and it was available, and I could successfully log into the admin site 
:)

Thanks for your help, and onwards!
W.

On Dec 18, 2015, at 4:55 PM, Simone Tiraboschi 
> wrote:

Today we async released ovirt-hosted-engine-ha-1.3.3.5-1, which should fix it.
Can you please check whether you are already on that version?
If not, please update it and manually restart the ovirt-ha-broker and ovirt-ha-agent 
services; I'm quite confident that it should be enough.



Re: [ovirt-users] Hosted Engine setup - got " Failed to start service 'ovirt-ha-agent' "

2015-12-18 Thread Simone Tiraboschi
On Fri, Dec 18, 2015 at 7:08 PM, Willard Dennis 
wrote:

> Hi all,
>
> Did a hosted engine setup using a Gluster storage domain, it went well
> until the end, where I got this error:
>
> [ INFO  ] Saving hosted-engine configuration on the shared storage domain
> [ INFO  ] Shutting down the engine VM
> [ INFO  ] Enabling and starting HA services
> [ ERROR ] Failed to execute stage 'Closing up': Failed to start service
> 'ovirt-ha-agent’
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151218124259.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
>
> Full screen output from setup run:
> http://pastebin.com/yWkppmjG
>
> What’s my move now? Hopefully the install can be salvaged….
>

It's hard to say without a detailed log, but yesterday we found an issue
with the HA services' systemd unit files on CentOS 7.2.

Today we async released ovirt-hosted-engine-ha-1.3.3.5-1, which should fix it.
Can you please check whether you are already on that version?
If not, please update it and manually restart the ovirt-ha-broker and
ovirt-ha-agent services; I'm quite confident that it should be enough.


>
> FYI, I have three hosts I’m using for oVirt; they are named
> “ovirt-node-[01,02,03]”
>
> Thanks,
> Will
>
>
>


Re: [ovirt-users] Hosted engine setup using Gluster - what is proper param's for the storage domain names?

2015-12-17 Thread Sahina Bose



On 12/17/2015 10:32 PM, Willard Dennis wrote:

Hi all,

Doing the hosted engine setup on Gluster; I'm at the point of configuring the 
storage domain / datacenter names, and not sure what my best move is here… 
Here's what I'm seeing:

——
   --== STORAGE CONFIGURATION ==--

   During customization use CTRL-D to abort.
   Please specify the storage you would like to use (glusterfs, iscsi, 
fc, nfs3, nfs4)[nfs3]: glusterfs
[ INFO  ] Please note that Replica 3 support is required for the shared storage.
   Please specify the full shared storage connection path to use 
(example: host:/path): localhost:/engine
[WARNING] Due to several bugs in mount.glusterfs the validation of GlusterFS 
share cannot be reliable.
[ INFO  ] GlusterFS replica 3 Volume detected
[ INFO  ] Installing on first host
   Please provide storage domain name. [hosted_storage]:
   Local storage datacenter name is an internal name
   and currently will not be shown in engine's admin UI.
   Please enter local datacenter name [hosted_datacenter]:
——

Concerned about the "Local storage datacenter name is an internal name and 
currently will not be shown in engine's admin UI" message… I want to use a second 
distributed Gluster volume (name = "vmdata") for VM storage if I can, and don't want 
to mess up the install… What should I consider when setting the storage domain 
and local datacenter names?


You can safely go with the defaults here.

To set up a second storage domain (using a gluster volume): once the 
engine VM is up and running, you can use the user interface to create 
the domain (vmdata).

Note: a replica 3 gluster volume is recommended for use as a storage domain



Thanks,
Will




Re: [ovirt-users] Hosted-engine-setup in an ESXi VM fails

2015-11-19 Thread Yedidyah Bar David
On Thu, Nov 19, 2015 at 11:55 AM, Budur Nagaraju  wrote:
> HI
>
> Getting below error while doing a hosted engine setup, OS is running on
> ESXi6.0 version.
>
>
>
>
> [root@he ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as hypervisor and
> create a VM where you have to install oVirt Engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>   It has been detected that this program is executed through an SSH
> connection without using screen.
>   Continuing with the installation may lead to broken installation
> if the network connection fails.
>   It is highly recommended to abort the installation and run it
> inside a screen session using command "screen".
>   Do you want to continue anyway? (Yes, No)[No]: yes
> [ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
> support virtualization

Not sure what needs to be configured in ESXi to let you run kvm inside it.

Can you start a normal kvm vm?

Changing the subject accordingly.

Best,
-- 
Didi


Re: [ovirt-users] Hosted-engine-setup

2015-11-19 Thread Martin Sivak
Hello,

hosted-engine has to be executed on a physical host (or a nested host
with all the proper CPU flags) that supports KVM virtualization. That
means a Linux kernel and the vmx flag in /proc/cpuinfo, IIRC.
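The two preconditions Martin names can be checked in a couple of lines. A minimal preflight sketch (assuming a Linux host; svm is the AMD counterpart of vmx):

```shell
# The CPU must advertise hardware virtualization (vmx on Intel, svm on
# AMD) and /dev/kvm must exist for hosted-engine to run KVM guests.
if grep -Eq '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "CPU advertises hardware virtualization"
else
    echo "no vmx/svm flag: enable VT-x/AMD-V (or nested virt) on the outer host"
fi
if [ -c /dev/kvm ]; then
    echo "/dev/kvm present"
else
    echo "/dev/kvm missing: kvm module not loaded or virtualization disabled"
fi
```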

I am also adding Sandro who is the maintainer of the setup tool as he
might have some additional insights.


Best regards

--
Martin Sivak
SLA / oVirt

On Thu, Nov 19, 2015 at 10:55 AM, Budur Nagaraju  wrote:
> HI
>
> Getting below error while doing a hosted engine setup, OS is running on
> ESXi6.0 version.
>
>
>
>
> [root@he ~]# hosted-engine --deploy
> [ INFO  ] Stage: Initializing
> [ INFO  ] Generating a temporary VNC password.
> [ INFO  ] Stage: Environment setup
>   Continuing will configure this host for serving as hypervisor and
> create a VM where you have to install oVirt Engine afterwards.
>   Are you sure you want to continue? (Yes, No)[Yes]: yes
>   Configuration files: []
>   Log file:
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
>   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>   It has been detected that this program is executed through an SSH
> connection without using screen.
>   Continuing with the installation may lead to broken installation
> if the network connection fails.
>   It is highly recommended to abort the installation and run it
> inside a screen session using command "screen".
>   Do you want to continue anyway? (Yes, No)[No]: yes
> [ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
> support virtualization
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151119152311.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [root@he ~]#
>
>
>


Re: [ovirt-users] Hosted-engine-setup

2015-11-19 Thread Simone Tiraboschi
On Thu, Nov 19, 2015 at 11:16 AM, Martin Sivak  wrote:

> Hello,
>
> hosted-engine has to be executed on a physical host (or a nested host
> with all the proper CPU flags) that supports KVM virtualization. That
> means linux kernel and the vmx flag in /proc/cpuinfo iirc.
>
> I am also adding Sandro who is the maintainer of the setup tool as he
> might have some additional insights.
>

In hosted-engine the engine runs as a VM, so if your host is a virtual
machine too, you are going to create a nested deployment. To create a
nested env you need to:
- enable nested virtualization on the external hypervisor; follow this as a
reference if you are using oVirt:
http://www.ovirt.org/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM
- if you are using oVirt as your external hypervisor, disable the no-mac-spoof
filter on the physical hypervisor, otherwise your engine VM will have no
network connectivity at all. You can proceed by following these instructions:
https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/macspoof
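The oVirt wiki page linked above has the full procedure; as a quick sketch of the check it boils down to, and of the module option it configures (assuming an Intel outer hypervisor):

```shell
# Check whether nested virtualization is already enabled on the outer
# hypervisor (Intel module shown; substitute kvm_amd on AMD hosts).
nested_file=/sys/module/kvm_intel/parameters/nested
if [ -r "$nested_file" ]; then
    echo "nested: $(cat "$nested_file")"    # Y or 1 means enabled
else
    echo "kvm_intel is not loaded on this machine"
fi
# To enable it persistently (as root, with all guests shut down):
#   echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
#   modprobe -r kvm_intel && modprobe kvm_intel
```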



>
>
> Best regards
>
> --
> Martin Sivak
> SLA / oVirt
>
> On Thu, Nov 19, 2015 at 10:55 AM, Budur Nagaraju 
> wrote:
> > HI
> >
> > Getting below error while doing a hosted engine setup, OS is running on
> > ESXi6.0 version.
> >
> >
> >
> >
> > [root@he ~]# hosted-engine --deploy
> > [ INFO  ] Stage: Initializing
> > [ INFO  ] Generating a temporary VNC password.
> > [ INFO  ] Stage: Environment setup
> >   Continuing will configure this host for serving as hypervisor
> and
> > create a VM where you have to install oVirt Engine afterwards.
> >   Are you sure you want to continue? (Yes, No)[Yes]: yes
> >   Configuration files: []
> >   Log file:
> >
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
> >   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
> >   It has been detected that this program is executed through an
> SSH
> > connection without using screen.
> >   Continuing with the installation may lead to broken
> installation
> > if the network connection fails.
> >   It is highly recommended to abort the installation and run it
> > inside a screen session using command "screen".
> >   Do you want to continue anyway? (Yes, No)[No]: yes
> > [ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
> > support virtualization
> > [ INFO  ] Stage: Clean up
> > [ INFO  ] Generating answer file
> > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151119152311.conf'
> > [ INFO  ] Stage: Pre-termination
> > [ INFO  ] Stage: Termination
> > [root@he ~]#
> >
> >
> >
>


Re: [ovirt-users] Hosted-engine-setup

2015-11-19 Thread Sandro Bonazzola
On Thu, Nov 19, 2015 at 11:57 AM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Nov 19, 2015 at 11:16 AM, Martin Sivak  wrote:
>
>> Hello,
>>
>> hosted-engine has to be executed on a physical host (or a nested host
>> with all the proper CPU flags) that supports KVM virtualization. That
>> means linux kernel and the vmx flag in /proc/cpuinfo iirc.
>>
>> I am also adding Sandro who is the maintainer of the setup tool as he
>> might have some additional insights.
>>
>
>
Simone already replied


> In hosted-engine the engine will run as a VM so if your host is a virtual
> machine too you are going to create a nested deployment. To create a nested
> env you need to:
> - enable nested virtualization on the external hypervisor, follow here as
> a reference if you are using oVirt
> http://www.ovirt.org/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM
> - if you are using oVirt as your external hypervisor, disable no-mac-spoof
> filter on the physical hypervisor otherwise your engine VM will no have
> network connectivity at all. You can proceed following this instructions:
> https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/macspoof
>
>
>

+1


>
>>
>> Best regards
>>
>> --
>> Martin Sivak
>> SLA / oVirt
>>
>> On Thu, Nov 19, 2015 at 10:55 AM, Budur Nagaraju 
>> wrote:
>> > HI
>> >
>> > Getting below error while doing a hosted engine setup, OS is running on
>> > ESXi6.0 version.
>> >
>> >
>> >
>> >
>> > [root@he ~]# hosted-engine --deploy
>> > [ INFO  ] Stage: Initializing
>> > [ INFO  ] Generating a temporary VNC password.
>> > [ INFO  ] Stage: Environment setup
>> >   Continuing will configure this host for serving as hypervisor
>> and
>> > create a VM where you have to install oVirt Engine afterwards.
>> >   Are you sure you want to continue? (Yes, No)[Yes]: yes
>> >   Configuration files: []
>> >   Log file:
>> >
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151119152307-cx1kgx.log
>> >   Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>> >   It has been detected that this program is executed through an
>> SSH
>> > connection without using screen.
>> >   Continuing with the installation may lead to broken
>> installation
>> > if the network connection fails.
>> >   It is highly recommended to abort the installation and run it
>> > inside a screen session using command "screen".
>> >   Do you want to continue anyway? (Yes, No)[No]: yes
>> > [ ERROR ] Failed to execute stage 'Environment setup': Hardware does not
>> > support virtualization
>> > [ INFO  ] Stage: Clean up
>> > [ INFO  ] Generating answer file
>> > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151119152311.conf'
>> > [ INFO  ] Stage: Pre-termination
>> > [ INFO  ] Stage: Termination
>> > [root@he ~]#
>> >
>> >
>> >
>>
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Yedidyah Bar David
   
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 78, in connect
    raise BrokerConnectionError(error_msg)
  BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (5)
  Apr 27 02:49:33 ovirt-node2.mgmt.asl.local libvirtd[1678]: metadata not
  found: Requested metadata element is not present
  
  
  
   -----Original Message-----
   From: Yedidyah Bar David [mailto:d...@redhat.com]
   Sent: Monday, 27 April 2015 09:46
   To: Sven Achtelik; Martin Sivak
   Cc: Roy Golan; users@ovirt.org
   Subject: Re: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional
   host
  
  - Original Message -
   From: Sven Achtelik sven.achte...@mailpool.us
   To: Yedidyah Bar David d...@redhat.com
   Cc: Roy Golan rgo...@redhat.com, users@ovirt.org
   Sent: Monday, April 27, 2015 10:34:13 AM
   Subject: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional
   host
   
    Hi Didi,
   
   results are
   ---
    [root@ovirt-node2 ~]# ls -l /dev/stdout
    lrwxrwxrwx 1 root root 15 Apr 26 09:14 /dev/stdout -> /proc/self/fd/1
    [root@ovirt-node2 ~]# echo test > /dev/stdout
    test
   ---
   Looks like everything is working fine.
  
  And it still fails with the same message when you restart ha daemons?
  
  Adding Martin.
  
  Weird.
  
   
   Sven
   
   
    -----Original Message-----
    From: Yedidyah Bar David [mailto:d...@redhat.com]
    Sent: Monday, 27 April 2015 08:57
    To: Sven Achtelik
    Cc: Roy Golan; users@ovirt.org
    Subject: Re: AW: [ovirt-users] Hosted Engine-Setup issue additional
    host
   
   
   
   - Original Message -
From: Sven Achtelik sven.achte...@mailpool.us
To: Roy Golan rgo...@redhat.com, users@ovirt.org, Yedidyah Bar
David d...@redhat.com
Sent: Sunday, April 26, 2015 6:57:06 PM
Subject: AW: [ovirt-users] Hosted Engine-Setup issue additional host

On the node that fails to start the ha-broker and ha-agent I'm using:

ovirt-engine-sdk-python.noarch3.5.2.1-1.el7.centos
@ovirt-3.5-pre
ovirt-host-deploy.noarch1.3.1-1.el7
@ovirt-3.5
ovirt-hosted-engine-ha.noarch  1.2.5-1.el7.centos
@ovirt-3.5
ovirt-hosted-engine-setup.noarch1.2.3-1.el7.centos
@ovirt-3.5-pre
ovirt-release35.noarch003-1
@/ovirt-release35


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On
Behalf Of Roy Golan
Sent: Sunday, 26 April 2015 16:59
To: users@ovirt.org; Yedidyah Bar David
Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional host

On 04/26/2015 05:38 PM, Sven Achtelik wrote:
Hi All,

after a successful setup of hosted-engine on the first node I'm
having trouble completing it on an additional node. The Setup fails
with:
-
[ INFO  ] Waiting for the host to become operational in the engine.
This may take several minutes...
[ INFO  ] Still waiting for VDSM host to become operational...
[ INFO  ] The VDSM Host is now operational [ INFO  ] Enabling and
starting HA services [ ERROR ] Failed to execute stage 'Closing up':
Command '/bin/systemctl'
failed to execute
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
-
After that the node is added to the cluster and is operational from
the GUI, but the hosted  engine broker and agent fail to start with
error
messages:
--
[root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
ovirt-ha-agent.service - oVirt Hosted Engine High Availability
Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
   enabled)
   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28
   CDT;
   20min ago
  Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start
  (code=exited, status=1/FAILURE)

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Sven Achtelik
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 78, in connect
    raise BrokerConnectionError(error_msg)
 BrokerConnectionError: Failed to connect to broker, the number of errors has exceeded the limit (5)
 Apr 27 02:49:33 ovirt-node2.mgmt.asl.local libvirtd[1678]: metadata
 not
 found: Requested metadata element is not present
 


 -----Original Message-----
 From: Yedidyah Bar David [mailto:d...@redhat.com]
 Sent: Monday, 27 April 2015 09:46
 To: Sven Achtelik; Martin Sivak
 Cc: Roy Golan; users@ovirt.org
 Subject: Re: AW: AW: [ovirt-users] Hosted Engine-Setup issue
 additional host

 - Original Message -
  From: Sven Achtelik sven.achte...@mailpool.us
  To: Yedidyah Bar David d...@redhat.com
  Cc: Roy Golan rgo...@redhat.com, users@ovirt.org
  Sent: Monday, April 27, 2015 10:34:13 AM
  Subject: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional
  host
 
  Hi Didi,
 
  results are
  ---
  [root@ovirt-node2 ~]# ls -l /dev/stdout
  lrwxrwxrwx 1 root root 15 Apr 26 09:14 /dev/stdout -> /proc/self/fd/1
  [root@ovirt-node2 ~]# echo test > /dev/stdout
  test
  ---
  Looks like everything is working fine.

 And it still fails with the same message when you restart ha daemons?

 Adding Martin.

 Weird.

 
  Sven
 
 
  -----Original Message-----
  From: Yedidyah Bar David [mailto:d...@redhat.com]
  Sent: Monday, 27 April 2015 08:57
  To: Sven Achtelik
  Cc: Roy Golan; users@ovirt.org
  Subject: Re: AW: [ovirt-users] Hosted Engine-Setup issue additional
  host
 
 
 
  - Original Message -
   From: Sven Achtelik sven.achte...@mailpool.us
   To: Roy Golan rgo...@redhat.com, users@ovirt.org, Yedidyah
   Bar David d...@redhat.com
   Sent: Sunday, April 26, 2015 6:57:06 PM
   Subject: AW: [ovirt-users] Hosted Engine-Setup issue additional
   host
  
   On the node that fails to start the ha-broker and ha-agent I'm using:
  
   ovirt-engine-sdk-python.noarch3.5.2.1-1.el7.centos
   @ovirt-3.5-pre
   ovirt-host-deploy.noarch1.3.1-1.el7
   @ovirt-3.5
   ovirt-hosted-engine-ha.noarch  1.2.5-1.el7.centos
   @ovirt-3.5
   ovirt-hosted-engine-setup.noarch1.2.3-1.el7.centos
   @ovirt-3.5-pre
   ovirt-release35.noarch003-1
   @/ovirt-release35
  
  
   From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On
   Behalf Of Roy Golan
   Sent: Sunday, 26 April 2015 16:59
   To: users@ovirt.org; Yedidyah Bar David
   Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional
   host
  
   On 04/26/2015 05:38 PM, Sven Achtelik wrote:
   Hi All,
  
   after a successful setup of hosted-engine on the first node I'm
   having trouble completing it on an additional node. The Setup fails with:
   -
   [ INFO  ] Waiting for the host to become operational in the engine.
   This may take several minutes...
   [ INFO  ] Still waiting for VDSM host to become operational...
   [ INFO  ] The VDSM Host is now operational [ INFO  ] Enabling and
   starting HA services [ ERROR ] Failed to execute stage 'Closing up':
   Command '/bin/systemctl'
   failed to execute
   [ INFO  ] Stage: Clean up
   [ INFO  ] Generating answer file
   '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
   [ INFO  ] Stage: Pre-termination
   [ INFO  ] Stage: Termination
   -
   After that the node is added to the cluster and is operational
   from the GUI, but the hosted  engine broker and agent fail to
   start with error
   messages:
   --
   [root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
   ovirt-ha-agent.service - oVirt Hosted Engine High Availability
   Monitoring Agent
  Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
  enabled)
  Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
  20min ago
 Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start
 (code=exited, status=1/FAILURE)
  
   Apr 26 08:00:28 ovirt-node2.mgmt.asl.local
   systemd-ovirt-ha-agent[5373]: hdlr = FileHandler(filename, mode)
   Apr
   26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   File /usr/lib64/python2.7/logging/__init__.py, line 902, in
   __init__ Apr 26 08:00:28 ovirt-node2.mgmt.asl.local
   systemd-ovirt-ha-agent[5373]:
   StreamHandler.__init__(self, self._open()) Apr 26 08:00:28
   ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File
   /usr/lib64/python2.7

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Sven Achtelik
1.3.1-1.el7
  @ovirt-3.5
  ovirt-hosted-engine-ha.noarch  1.2.5-1.el7.centos
  @ovirt-3.5
  ovirt-hosted-engine-setup.noarch1.2.3-1.el7.centos
  @ovirt-3.5-pre
  ovirt-release35.noarch003-1
  @/ovirt-release35
  
  
  From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On 
  Behalf Of Roy Golan
  Sent: Sunday, 26 April 2015 16:59
  To: users@ovirt.org; Yedidyah Bar David
  Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional host
  
  On 04/26/2015 05:38 PM, Sven Achtelik wrote:
  Hi All,
  
  after a successful setup of hosted-engine on the first node I'm 
  having trouble completing it on an additional node. The Setup fails with:
  -
  [ INFO  ] Waiting for the host to become operational in the engine.
  This may take several minutes...
  [ INFO  ] Still waiting for VDSM host to become operational...
  [ INFO  ] The VDSM Host is now operational [ INFO  ] Enabling and 
  starting HA services [ ERROR ] Failed to execute stage 'Closing up':
  Command '/bin/systemctl'
  failed to execute
  [ INFO  ] Stage: Clean up
  [ INFO  ] Generating answer file
  '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
  [ INFO  ] Stage: Pre-termination
  [ INFO  ] Stage: Termination
  -
  After that the node is added to the cluster and is operational from 
  the GUI, but the hosted  engine broker and agent fail to start with 
  error
  messages:
  --
  [root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l 
  ovirt-ha-agent.service - oVirt Hosted Engine High Availability 
  Monitoring Agent
 Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled)
 Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
 20min ago
Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start
(code=exited, status=1/FAILURE)
  
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local
  systemd-ovirt-ha-agent[5373]: hdlr = FileHandler(filename, mode) Apr
  26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
  File /usr/lib64/python2.7/logging/__init__.py, line 902, in 
  __init__ Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
  systemd-ovirt-ha-agent[5373]:
  StreamHandler.__init__(self, self._open()) Apr 26 08:00:28 
  ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File 
  /usr/lib64/python2.7/logging/__init__.py, line 925, in _open Apr 
  26
  08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
  stream = open(self.baseFilename, self.mode) Apr 26 08:00:28 
  ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
  IOError: [Errno 6] No such device or address: '/dev/stdout'
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
  [FAILED]
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]:
  ovirt-ha-agent.service: control process exited, code=exited status=1 
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to 
  start oVirt Hosted Engine High Availability Monitoring Agent.
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
  ovirt-ha-agent.service entered failed state.
  -
  And
  -
  [root@ovirt-node2 ~]# systemctl status ovirt-ha-broker 
  ovirt-ha-broker.service - oVirt Hosted Engine High Availability 
  Communications Broker
 Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service;
 enabled)
 Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
 21min ago
Process: 5359 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker start
(code=exited, status=1/FAILURE)
  
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
  hdlr = FileHandler(filename, mode)
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
  File /usr/lib64/python2.7/logging/__init__.py, line ...it__ Apr 26
  08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
  StreamHandler.__init__(self, self._open()) Apr 26 08:00:28 
  ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
  File /usr/lib64/python2.7/logging/__init__.py, line ...open Apr 26
  08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
  stream = open(self.baseFilename, self.mode) Apr 26 08:00:28 
  ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
  IOError: [Errno 6] No such device or address: '/dev/stdout'
  
  Didi, any clue?
  The log says it runs as root, so I can rule that out.
  
 
 That's weird. Please check/post:
 
 ls -l /dev/stdout
 echo test > /dev/stdout
 
 It should be a symlink to /proc/self/fd/1.
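The two checks above can be run as-is; a sketch with the expected healthy output (nothing here is oVirt-specific):

```shell
# On a healthy EL7 host /dev/stdout is a symlink to /proc/self/fd/1,
# and writing through it simply echoes back to the terminal.
ls -l /dev/stdout
readlink /dev/stdout        # expected: /proc/self/fd/1
echo test > /dev/stdout     # prints: test
```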
 
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
  [FAILED]
  Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]:
  ovirt-ha-broker.service: control process exited, code=exited 
  status=1 Apr 26 08:00:28 ovirt-node2

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Yedidyah Bar David


- Original Message -
 From: Sven Achtelik sven.achte...@mailpool.us
 To: Roy Golan rgo...@redhat.com, users@ovirt.org, Yedidyah Bar David 
 d...@redhat.com
 Sent: Sunday, April 26, 2015 6:57:06 PM
 Subject: AW: [ovirt-users] Hosted Engine-Setup issue additional host
 
 On the node that fails to start the ha-broker and ha-agent I'm using:
 
 ovirt-engine-sdk-python.noarch3.5.2.1-1.el7.centos
 @ovirt-3.5-pre
 ovirt-host-deploy.noarch1.3.1-1.el7
 @ovirt-3.5
 ovirt-hosted-engine-ha.noarch  1.2.5-1.el7.centos
 @ovirt-3.5
 ovirt-hosted-engine-setup.noarch1.2.3-1.el7.centos
 @ovirt-3.5-pre
 ovirt-release35.noarch003-1
 @/ovirt-release35
 
 
 From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
 Roy Golan
 Sent: Sunday, 26 April 2015 16:59
 To: users@ovirt.org; Yedidyah Bar David
 Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional host
 
 On 04/26/2015 05:38 PM, Sven Achtelik wrote:
 Hi All,
 
 after a successful setup of hosted-engine on the first node I'm having
 trouble completing it on an additional node. The Setup fails with:
 -
 [ INFO  ] Waiting for the host to become operational in the engine. This may
 take several minutes...
 [ INFO  ] Still waiting for VDSM host to become operational...
 [ INFO  ] The VDSM Host is now operational
 [ INFO  ] Enabling and starting HA services
 [ ERROR ] Failed to execute stage 'Closing up': Command '/bin/systemctl'
 failed to execute
 [ INFO  ] Stage: Clean up
 [ INFO  ] Generating answer file
 '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
 [ INFO  ] Stage: Pre-termination
 [ INFO  ] Stage: Termination
 -
 After that the node is added to the cluster and is operational from the GUI,
 but the hosted  engine broker and agent fail to start with error messages:
 --
 [root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
 ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring
 Agent
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled)
Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
20min ago
   Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start
   (code=exited, status=1/FAILURE)
 
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: hdlr
 = FileHandler(filename, mode)
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File
 /usr/lib64/python2.7/logging/__init__.py, line 902, in __init__
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
 StreamHandler.__init__(self, self._open())
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File
 /usr/lib64/python2.7/logging/__init__.py, line 925, in _open
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
 stream = open(self.baseFilename, self.mode)
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
 IOError: [Errno 6] No such device or address: '/dev/stdout'
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
 [FAILED]
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]:
 ovirt-ha-agent.service: control process exited, code=exited status=1
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt
 Hosted Engine High Availability Monitoring Agent.
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit
 ovirt-ha-agent.service entered failed state.
 -
 And
 -
 [root@ovirt-node2 ~]# systemctl status ovirt-ha-broker
 ovirt-ha-broker.service - oVirt Hosted Engine High Availability
 Communications Broker
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled)
Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
21min ago
   Process: 5359 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker start
   (code=exited, status=1/FAILURE)
 
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 hdlr = FileHandler(filename, mode)
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 File /usr/lib64/python2.7/logging/__init__.py, line ...it__
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 StreamHandler.__init__(self, self._open())
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 File /usr/lib64/python2.7/logging/__init__.py, line ...open
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 stream = open(self.baseFilename, self.mode)
 Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]:
 IOError: [Errno 6] No such device or address: '/dev/stdout'
 
 Didi, any clue?
 The log says it runs as root, so I can rule that out.
 

That's weird. Please check

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-27 Thread Martin Sivak
 of
errors has exceeded
the limit (5)
 Apr 27 02:49:33 ovirt-node2.mgmt.asl.local libvirtd[1678]: metadata not
 found: Requested metadata element is not present
 
 
 
 -----Original Message-----
 From: Yedidyah Bar David [mailto:d...@redhat.com]
 Sent: Monday, 27 April 2015 09:46
 To: Sven Achtelik; Martin Sivak
 Cc: Roy Golan; users@ovirt.org
 Subject: Re: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional host
 
 - Original Message -
  From: Sven Achtelik sven.achte...@mailpool.us
  To: Yedidyah Bar David d...@redhat.com
  Cc: Roy Golan rgo...@redhat.com, users@ovirt.org
  Sent: Monday, April 27, 2015 10:34:13 AM
  Subject: AW: AW: [ovirt-users] Hosted Engine-Setup issue additional
  host
  
  Hi Didi,
  
  results are
  ---
  [root@ovirt-node2 ~]# ls -l /dev/stdout
  lrwxrwxrwx 1 root root 15 Apr 26 09:14 /dev/stdout -> /proc/self/fd/1
  [root@ovirt-node2 ~]# echo test > /dev/stdout
  test
  ---
  Looks like everything is working fine.
 
 And it still fails with the same message when you restart ha daemons?
 
 Adding Martin.
 
 Weird.
 
  
  Sven
  
  
   -----Original Message-----
   From: Yedidyah Bar David [mailto:d...@redhat.com]
   Sent: Monday, 27 April 2015 08:57
   To: Sven Achtelik
   Cc: Roy Golan; users@ovirt.org
   Subject: Re: AW: [ovirt-users] Hosted Engine-Setup issue additional
   host
  
  
  
  - Original Message -
   From: Sven Achtelik sven.achte...@mailpool.us
   To: Roy Golan rgo...@redhat.com, users@ovirt.org, Yedidyah Bar
   David d...@redhat.com
   Sent: Sunday, April 26, 2015 6:57:06 PM
   Subject: AW: [ovirt-users] Hosted Engine-Setup issue additional host
   
   On the node that fails to start the ha-broker and ha-agent I'm using:
   
   ovirt-engine-sdk-python.noarch3.5.2.1-1.el7.centos
   @ovirt-3.5-pre
   ovirt-host-deploy.noarch1.3.1-1.el7
   @ovirt-3.5
   ovirt-hosted-engine-ha.noarch  1.2.5-1.el7.centos
   @ovirt-3.5
   ovirt-hosted-engine-setup.noarch1.2.3-1.el7.centos
   @ovirt-3.5-pre
   ovirt-release35.noarch003-1
   @/ovirt-release35
   
   
    From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On
    Behalf Of Roy Golan
    Sent: Sunday, 26 April 2015 16:59
    To: users@ovirt.org; Yedidyah Bar David
    Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional host
   
   On 04/26/2015 05:38 PM, Sven Achtelik wrote:
   Hi All,
   
   after a successful setup of hosted-engine on the first node I'm
   having trouble completing it on an additional node. The Setup fails with:
   -
   [ INFO  ] Waiting for the host to become operational in the engine.
   This may take several minutes...
   [ INFO  ] Still waiting for VDSM host to become operational...
   [ INFO  ] The VDSM Host is now operational [ INFO  ] Enabling and
   starting HA services [ ERROR ] Failed to execute stage 'Closing up':
   Command '/bin/systemctl'
   failed to execute
   [ INFO  ] Stage: Clean up
   [ INFO  ] Generating answer file
   '/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
   [ INFO  ] Stage: Pre-termination
   [ INFO  ] Stage: Termination
   -
   After that the node is added to the cluster and is operational from
   the GUI, but the hosted  engine broker and agent fail to start with
   error
   messages:
   --
   [root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
   ovirt-ha-agent.service - oVirt Hosted Engine High Availability
   Monitoring Agent
  Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
  enabled)
  Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT;
  20min ago
 Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start
 (code=exited, status=1/FAILURE)
   
   Apr 26 08:00:28 ovirt-node2.mgmt.asl.local
   systemd-ovirt-ha-agent[5373]: hdlr = FileHandler(filename, mode) Apr
   26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   File /usr/lib64/python2.7/logging/__init__.py, line 902, in
   __init__ Apr 26 08:00:28 ovirt-node2.mgmt.asl.local
   systemd-ovirt-ha-agent[5373]:
   StreamHandler.__init__(self, self._open()) Apr 26 08:00:28
   ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File
   /usr/lib64/python2.7/logging/__init__.py, line 925, in _open Apr
   26
   08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   stream = open(self.baseFilename, self.mode) Apr 26 08:00:28
   ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   IOError: [Errno 6] No such device or address: '/dev/stdout'
   Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]:
   [FAILED]
   Apr 26 08:00:28 ovirt-node2.mgmt.asl.local

Re: [ovirt-users] Hosted-Engine Setup: Failed to setup networks

2015-04-26 Thread Yedidyah Bar David
- Original Message -
 From: Sven Achtelik sven.achte...@mailpool.us
 To: users@ovirt.org
 Sent: Thursday, April 23, 2015 10:58:15 AM
 Subject: Re: [ovirt-users] Hosted-Engine Setup: Failed to setup networks
 
 
 
 Hi All,
 
 
 
 fixed it, vdsm doesn’t like the PREFIX entry in the ifcfg file. After
 changing that to NETMASK it worked.
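For reference, the change described above amounts to this kind of edit in the interface config (file name and the /25 values are illustrative, not taken from Sven's host):

```
# /etc/sysconfig/network-scripts/ifcfg-em1  (illustrative)
# Before - vdsm at this version chokes on the CIDR-style entry:
#   PREFIX=25
# After - express the same /25 as a dotted netmask:
NETMASK=255.255.255.128
```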

Thanks for the report!

Dan - is that expected/fixed/tracked?
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-26 Thread Sven Achtelik
On the node that fails to start the ha-broker and ha-agent I'm using:

ovirt-engine-sdk-python.noarch    3.5.2.1-1.el7.centos   @ovirt-3.5-pre
ovirt-host-deploy.noarch          1.3.1-1.el7            @ovirt-3.5
ovirt-hosted-engine-ha.noarch     1.2.5-1.el7.centos     @ovirt-3.5
ovirt-hosted-engine-setup.noarch  1.2.3-1.el7.centos     @ovirt-3.5-pre
ovirt-release35.noarch            003-1                  @/ovirt-release35


From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Roy Golan
Sent: Sunday, 26 April 2015 16:59
To: users@ovirt.org; Yedidyah Bar David
Subject: Re: [ovirt-users] Hosted Engine-Setup issue additional host

On 04/26/2015 05:38 PM, Sven Achtelik wrote:
Hi All,

after a successful setup of hosted-engine on the first node I'm having trouble 
completing it on an additional node. The Setup fails with:
-
[ INFO  ] Waiting for the host to become operational in the engine. This may 
take several minutes...
[ INFO  ] Still waiting for VDSM host to become operational...
[ INFO  ] The VDSM Host is now operational
[ INFO  ] Enabling and starting HA services
[ ERROR ] Failed to execute stage 'Closing up': Command '/bin/systemctl' failed 
to execute
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
-
After that the node is added to the cluster and is operational from the GUI, 
but the hosted  engine broker and agent fail to start with error messages:
--
[root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l
ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled)
   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT; 20min 
ago
  Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start 
(code=exited, status=1/FAILURE)

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: hdlr = 
FileHandler(filename, mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File 
"/usr/lib64/python2.7/logging/__init__.py", line 902, in __init__
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
StreamHandler.__init__(self, self._open())
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: File 
"/usr/lib64/python2.7/logging/__init__.py", line 925, in _open
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: stream 
= open(self.baseFilename, self.mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
IOError: [Errno 6] No such device or address: '/dev/stdout'
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-agent[5373]: 
[FAILED]
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: ovirt-ha-agent.service: 
control process exited, code=exited status=1
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt 
Hosted Engine High Availability Monitoring Agent.
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-agent.service entered failed state.
-
And
-
[root@ovirt-node2 ~]# systemctl status ovirt-ha-broker
ovirt-ha-broker.service - oVirt Hosted Engine High Availability Communications 
Broker
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; enabled)
   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 CDT; 21min 
ago
  Process: 5359 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker start 
(code=exited, status=1/FAILURE)

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: hdlr 
= FileHandler(filename, mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...it__
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
StreamHandler.__init__(self, self._open())
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...open
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
stream = open(self.baseFilename, self.mode)
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
IOError: [Errno 6] No such device or address: '/dev/stdout'

Didi, any clue?
The log says it runs as root, so I can rule that out.

Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd-ovirt-ha-broker[5359]: 
[FAILED]
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: ovirt-ha-broker.service: 
control process exited, code=exited status=1
Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start oVirt 
Hosted Engine High Availability Communications
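The tracebacks in this thread all die inside logging's FileHandler while it re-opens '/dev/stdout'. A minimal sketch of the failure mode and a workaround (the handler wiring is an assumption for illustration; the actual ovirt-ha code may differ):

```python
import logging
import sys

# FileHandler re-opens the path: open('/dev/stdout', 'a'). When the daemon
# is started by systemd, fd 1 can be a socket, and re-opening the /dev node
# can then raise [Errno 6] ENXIO, as in the quoted traceback.
try:
    handler = logging.FileHandler('/dev/stdout')
except OSError as err:  # IOError on python2
    print('re-opening /dev/stdout failed:', err)
    # A StreamHandler reuses the already-open fd instead of re-opening it.
    handler = logging.StreamHandler(sys.stdout)

log = logging.getLogger('ha-demo')
log.addHandler(handler)
log.warning('broker starting')  # reaches stdout either way
```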

Re: [ovirt-users] Hosted Engine-Setup issue additional host

2015-04-26 Thread Roy Golan

On 04/26/2015 05:38 PM, Sven Achtelik wrote:


Hi All,

after a successful setup of hosted-engine on the first node I’m having 
trouble completing it on an additional node. The Setup fails with:


-

[ INFO  ] Waiting for the host to become operational in the engine. 
This may take several minutes...


[ INFO  ] Still waiting for VDSM host to become operational...

[ INFO  ] The VDSM Host is now operational

[ INFO  ] Enabling and starting HA services

[ ERROR ] Failed to execute stage 'Closing up': Command 
'/bin/systemctl' failed to execute


[ INFO  ] Stage: Clean up

[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20150426080028.conf'


[ INFO  ] Stage: Pre-termination

[ INFO  ] Stage: Termination

-

After that the node is added to the cluster and is operational from 
the GUI, but the hosted  engine broker and agent fail to start with 
error messages:


--

[root@ovirt-node2 ~]# systemctl status ovirt-ha-agent.service -l

ovirt-ha-agent.service - oVirt Hosted Engine High Availability 
Monitoring Agent


   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; 
enabled)


   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 
CDT; 20min ago


  Process: 5373 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent 
start (code=exited, status=1/FAILURE)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: hdlr = FileHandler(filename, mode)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: File 
/usr/lib64/python2.7/logging/__init__.py, line 902, in __init__


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: StreamHandler.__init__(self, self._open())


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: File 
/usr/lib64/python2.7/logging/__init__.py, line 925, in _open


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: stream = open(self.baseFilename, self.mode)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: IOError: [Errno 6] No such device or 
address: '/dev/stdout'


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-agent[5373]: [FAILED]


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: 
ovirt-ha-agent.service: control process exited, code=exited status=1


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start 
oVirt Hosted Engine High Availability Monitoring Agent.


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-agent.service entered failed state.


-

And

-

[root@ovirt-node2 ~]# systemctl status ovirt-ha-broker

ovirt-ha-broker.service - oVirt Hosted Engine High Availability 
Communications Broker


   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service; 
enabled)


   Active: failed (Result: exit-code) since Sun 2015-04-26 08:00:28 
CDT; 21min ago


  Process: 5359 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-broker 
start (code=exited, status=1/FAILURE)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: hdlr = FileHandler(filename, mode)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...it__


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: StreamHandler.__init__(self, self._open())


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: File 
/usr/lib64/python2.7/logging/__init__.py, line ...open


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: stream = open(self.baseFilename, self.mode)


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: IOError: [Errno 6] No such device or 
address: '/dev/stdout'




Didi, any clue?
The log says it runs as root, so I can rule that out.


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local 
systemd-ovirt-ha-broker[5359]: [FAILED]


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: 
ovirt-ha-broker.service: control process exited, code=exited status=1


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Failed to start 
oVirt Hosted Engine High Availability Communications Broker.


Apr 26 08:00:28 ovirt-node2.mgmt.asl.local systemd[1]: Unit 
ovirt-ha-broker.service entered failed state.




The system is a CentOS 7 Setup with SeLinux switched off, no firewall 
or iptables. How can I find out which version of ovirt I’m running 
exactly? I’ve had a look at the logs and read through old bug reports.




the rpm version of ovirt* will be enough I guess


Thank you,

Sven




Re: [ovirt-users] Hosted-Engine Setup: Failed to setup networks

2015-04-23 Thread Sven Achtelik
Hi All,

fixed it, vdsm doesn't like the PREFIX entry in the ifcfg file. After changing 
that to NETMASK it worked.

Sven



From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Sven Achtelik
Sent: Wednesday, 22 April 2015 23:51
To: users@ovirt.org
Subject: [ovirt-users] Hosted-Engine Setup: Failed to setup networks

Hi Everyone,

I tried to install oVirt 3.5 - hosted engine and it fails with some VDSM error 
while creating the ovirtmgmt bridge. The Host is running CentOS 7 and the 
interface I want to use is em1 and it's the parent interface from a vlan.

[ ERROR ] Failed to execute stage 'Misc configuration': Failed to setup 
networks {'ovirtmgmt': {'nic': 'em1', 'netmask': '255.255.255.128', 
'bootproto': 'none', 'ipaddr': '172.16.1.13', 'gateway': '172.16.1.1'}}. Error 
code: 16 message: Unexpected exception

2015-04-22 16:33:55 INFO otopi.plugins.ovirt_hosted_engine_setup.network.bridge 
bridge._misc:198 Configuring the management bridge
2015-04-22 16:33:55 DEBUG otopi.context context._executeMethod:152 method 
exception Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/otopi/context.py, line 142, in 
_executeMethod
method['method']()
  File 
/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py,
 line 207, in _misc
_setupNetworks(conn, networks, {}, {'connectivityCheck': False})
  File 
/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/network/bridge.py,
 line 225, in _setupNetworks
'message: %s' % (networks, code, message))
RuntimeError: Failed to setup networks {'ovirtmgmt': {'nic': 'em1', 'netmask': 
'255.255.255.128', 'bootproto': 'none', 'ipaddr': '172.16.1.13', 'gateway':
'172.16.1.1'}}. Error code: 16 message: Unexpected exception
2015-04-22 16:33:55 ERROR otopi.context context._executeMethod:161 Failed to 
execute stage 'Misc configuration': Failed to setup networks {'ovirtmgmt': {'
nic': 'em1', 'netmask': '255.255.255.128', 'bootproto': 'none', 'ipaddr': 
'172.16.1.13', 'gateway': '172.16.1.1'}}. Error code: 16 message: Unexpected
exception
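Reconstructing the failing call from the traceback, the payload handed to vdsm looks roughly like this (the method and parameter names come from the quoted bridge.py lines; treat this as a sketch, not vdsm's exact API):

```python
# Values copied from the quoted error; the dict shape is what bridge.py's
# _setupNetworks passes through to vdsm.
networks = {
    'ovirtmgmt': {
        'nic': 'em1',
        'ipaddr': '172.16.1.13',
        'netmask': '255.255.255.128',
        'bootproto': 'none',
        'gateway': '172.16.1.1',
    }
}
bondings = {}
options = {'connectivityCheck': False}
# conn.setupNetworks(networks, bondings, options)
# -> error code 16, "Unexpected exception", when the host's ifcfg file
#    uses a PREFIX= entry instead of NETMASK= (the fix quoted above)
```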


Is there anything I can do like creating the bridge manually or use older 
version of the packages that don't have that issue ?

Thank you,

Sven


Re: [ovirt-users] Hosted Engine Setup

2015-02-02 Thread Uwe Laverenz

Hello Michael,

Am 02.02.2015 um 00:55 schrieb Michael Schefczyk:


- In the web interface of the hosted engine, however (Hosted Engine
Network.pdf, page 3) the required network ovirtmgmt is initially
not connected to bond0 (while it is in reality connected, as ifconfig
shows). When dragging ovirtmtgt to the arrow pointing to bond0, it
does not work. The error message is Bad bond name, it must begin
with the prefix 'bond' followed by a number. This is easy to
understand, as bond0 is a combination of bond and the number zero.


bond0 is correct; the error message refers to your other 
bond: bondC is not a valid name, you should name it bond1 or bond2.
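Concretely, the fix is just a rename in the bond's config file (illustrative initscripts-style file; the bonding options shown are assumptions, not taken from Michael's setup):

```
# /etc/sysconfig/network-scripts/ifcfg-bond1  (was: ifcfg-bondC)
DEVICE=bond1            # must match the pattern bond<number>
TYPE=Bond
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes
```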


hth,
Uwe


Re: [ovirt-users] hosted-engine setup ovirtmgmt bridge

2015-01-26 Thread Uwe Laverenz

Hi,

Am 26.01.2015 um 23:49 schrieb Mikola Rose:


On a hosted-engine --deploy on a machine that has 2 network cards
em1 192.168.0.178  General Network
em2 192.168.1.151  Net that NFS server is on,  no dns no gateway

which one would I set as ovirtmgmt bridge

Please indicate a nic to set ovirtmgmt bridge on: (em1, em2) [em1]


The general network would be the correct one (em1).

cu,
Uwe


Re: [ovirt-users] hosted engine setup on second host fails

2014-09-24 Thread Jiri Moskovcak

Hi,
it's getting a little too long, so please forgive the top post. The 
engine emits the message "Host with the same address already exists." 
only if you are trying to add a host with the same hostname; it doesn't have 
any connection to the host's ID. Please check that your hosts have unique 
hostnames (e.g. I ran into this when I didn't get a hostname from DHCP and 
both of my hosts were localhost.localdomain).
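Jirka's uniqueness check is a one-liner; a sketch with placeholder names (in practice you would collect `hostname -f` from each host):

```shell
# Print any hostname that appears more than once in the list the engine
# would see. The names below are placeholders for this demo.
printf '%s\n' node1.example.com localhost.localdomain localhost.localdomain \
  | sort | uniq -d
# -> localhost.localdomain
```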


Regards,
Jirka

On 09/24/2014 07:59 AM, Yedidyah Bar David wrote:

- Original Message -

From: Yedidyah Bar David d...@redhat.com
To: Itamar Heim ih...@redhat.com
Cc: Stefan Wendler stefan.wend...@tngtech.com, users@ovirt.org
Sent: Wednesday, September 24, 2014 8:40:58 AM
Subject: Re: [ovirt-users] hosted engine setup on second host fails

- Original Message -

From: Itamar Heim ih...@redhat.com
To: Stefan Wendler stefan.wend...@tngtech.com
Cc: Yedidyah Bar David ybard...@redhat.com, users@ovirt.org
Sent: Tuesday, September 23, 2014 7:07:12 PM
Subject: Re: [ovirt-users] hosted engine setup on second host fails


On Sep 23, 2014 7:03 PM, Stefan Wendler stefan.wend...@tngtech.com wrote:


On 09/23/2014 17:01, Itamar Heim wrote:

On 09/23/2014 05:17 PM, Stefan Wendler wrote:

On 09/22/2014 10:52, Stefan Wendler wrote:

On 09/19/2014 15:58, Itamar Heim wrote:

On 09/19/2014 03:32 PM, Stefan Wendler wrote:

Hi there.

I'm trying to install a hosted-engine on our second node (fist
engine
runs on node1).

But I always get the message:

[ ERROR ] Cannot automatically add the host to the Default cluster:
Cannot add Host. Host with the same address already exists.

I'm not entirely sure what I have to do when this message comes, so
I
just press ENTER:

###
To continue make a selection from the options below:
  (1) Continue setup - engine installation is complete
  (2) Power off and restart the VM
  (3) Abort setup

  (1, 2, 3)[1]:


Is there any other interaction required prior to selecting 1?

In the Web Gui I get the following message:

X Adding new Host hosted_engine_2 to Cluster Default

Here is the console output:

# hosted-engine --deploy
[ INFO  ] Stage: Initializing
  Continuing will configure this host for serving as
hypervisor
and create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Configuration files: []
  Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log


  Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

  --== STORAGE CONFIGURATION ==--

  During customization use CTRL-D to abort.
  Please specify the storage you would like to use (nfs3,
nfs4)[nfs3]:
  Please specify the full shared storage connection path
to use
(example: host:/path): some address:/volume1
  The specified storage location already contains a data
domain.
Is this an additional host setup (Yes, No)[Yes]?
[ INFO  ] Installing on additional host
  Please specify the Host ID [Must be integer, default:
  2]:
  The Host ID is already known. Is this a re-deployment
on an
additional host that was previously set up (Yes, No)[Yes]?


I admit I never tried that. Not sure how exactly it's supposed to work.


A bit more details:

Normally, a host is registered only in the engine's database. A hosted-engine
host is additionally registered in a special hosted-engine metadata
file managed by the HA daemon [1]. The question above appears if the host ID
is found in this metadata file. It seems we never check whether it's already
in the engine database; the assumption is that if an existing host is
re-purposed as a hosted-engine host, it should first be uninstalled - at least
not be in use (no VMs) and be removed from its cluster/DC/the engine.

[1] http://www.ovirt.org/images/d/d5/Fosdem-hosted-engine.pdf pages 17-18
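As an illustration of what that metadata file holds: each host's entry is a block of key=value lines, the same fields that show up as 'extra' in the agent logs in this thread. A minimal parser sketch in Python - hypothetical function name, not the actual ovirt-hosted-engine-ha code:

```python
# Sketch only: parse one host's hosted-engine metadata block into a dict.
# Field names are taken from the agent log output in this thread; the real
# ovirt-hosted-engine-ha parser is more involved (versioning, validation).
def parse_host_metadata(extra):
    fields = {}
    for line in extra.strip().splitlines():
        key, sep, value = line.partition("=")
        if sep:  # skip malformed lines that contain no '='
            fields[key] = value
    return fields

extra = (
    "metadata_parse_version=1\n"
    "metadata_feature_version=1\n"
    "timestamp=1411564164\n"
    "host-id=1\n"
    "score=2400\n"
    "maintenance=False\n"
    "state=EngineUp\n"
)

meta = parse_host_metadata(extra)
# A deploy on an additional host can detect a previously used host ID simply
# by checking whether that ID already appears among the parsed entries.
print(meta["host-id"], meta["state"])  # prints: 1 EngineUp
```

This is why the setup can ask "The Host ID is already known. Is this a re-deployment ...?" - the ID was found in exactly this kind of record, independently of the engine database.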





  --== SYSTEM CONFIGURATION ==--

[WARNING] A configuration file must be supplied to deploy Hosted
Engine
on an additional host.
  The answer file may be fetched from the first host
using scp.
  If you do not want to download it automatically you can
abort
the setup answering no to the following question.
  Do you want to scp the answer file from the first host?
(Yes,
No)[Yes]:
  Please provide the FQDN or IP of the first host:
node1.domain
  Enter 'root' user password for host node1.domain:
[ INFO  ] Answer file successfully downloaded

  --== NETWORK CONFIGURATION ==--

  The following CPU types

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-24 Thread Stefan Wendler
Oh well, I think this is fixed. I upgraded to 3.4.4 and the message
seems to be gone. The agents are running :)

Thank you very much !!! :)


On 09/24/2014 15:23, Stefan Wendler wrote:
 Okay, I'm truncating the previous mails here
 
 David's hint was the solution. I had the oVirt hosts already added to the
 cluster and tried to do the hosted-engine HA setup on them.
 
 After removing the hosts from the cluster and putting the data domain into
 maintenance mode, I was able to deploy on all the other nodes. I now have an
 HA'd hosted engine, which can also be migrated \o/
 
 Maybe that is something that could be stated in the documentation more
 clearly?
 
 Unfortunately, I now have a new problem: the agents crash shortly after
 startup. The error is the following:
 (/var/log/ovirt-hosted-engine-ha/agent.log)
 
 AttributeError: 'NoneType' object has no attribute 'iteritems'
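That AttributeError means the agent iterated over a metadata dict that was None instead of empty. A minimal reproduction with hypothetical helper names (Python 3's .items() is shown; the 1.1.x agent used Python 2's .iteritems(), but the failure mode on None is identical):

```python
# Hypothetical helpers illustrating the crash - not actual agent code.
def summarize(stats):
    # Crashes when stats is None: None has no .items()/.iteritems().
    return {k: v for k, v in stats.items()}

def summarize_safe(stats):
    # Defensive variant: treat missing metadata as an empty dict.
    return {k: v for k, v in (stats or {}).items()}

try:
    summarize(None)
except AttributeError as exc:
    print("crash:", exc)  # prints: crash: 'NoneType' object has no attribute 'items'

print(summarize_safe(None))  # prints: {}
```

So on the agent side the fix (or a fixed release, such as the 3.4.4 upgrade mentioned in this thread) boils down to never handing None to code that expects a dict.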
 
 And here is the whole output. The agents had been started and I tried a
 migration of the hosted engine from oVirt host 1 to host 2, which succeeded,
 but the agents crashed afterwards:
 
 MainThread::INFO::2014-09-24
 15:09:24,839::agent::52::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
 ovirt-hosted-engine-ha agent 1.1.5 started
 MainThread::INFO::2014-09-24
 15:09:24,871::hosted_engine::223::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
 Found certificate common name: 10.8.2.101
 MainThread::INFO::2014-09-24
 15:09:25,081::hosted_engine::367::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Initializing ha-broker connection
 MainThread::INFO::2014-09-24
 15:09:25,082::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor ping, options {'addr': '10.8.2.1'}
 MainThread::INFO::2014-09-24
 15:09:25,083::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25293072
 MainThread::INFO::2014-09-24
 15:09:25,083::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name':
 'ovirtmgmt', 'address': '0'}
 MainThread::INFO::2014-09-24
 15:09:25,086::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25294160
 MainThread::INFO::2014-09-24
 15:09:25,086::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor mem-free, options {'use_ssl': 'true', 'address': '0'}
 MainThread::INFO::2014-09-24
 15:09:25,088::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25293968
 MainThread::INFO::2014-09-24
 15:09:25,088::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor cpu-load-no-engine, options {'use_ssl': 'true',
 'vm_uuid': 'e1ca293f-09e0-4d2e-8915-221839af1489', 'address': '0'}
 MainThread::INFO::2014-09-24
 15:09:25,089::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25360400
 MainThread::INFO::2014-09-24
 15:09:25,089::brokerlink::126::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid':
 'e1ca293f-09e0-4d2e-8915-221839af1489', 'address': '0'}
 MainThread::INFO::2014-09-24
 15:09:25,091::brokerlink::137::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
 Success, id 25509776
 MainThread::INFO::2014-09-24
 15:09:25,091::hosted_engine::391::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
 Broker initialized, all submonitors started
 MainThread::INFO::2014-09-24
 15:09:25,125::hosted_engine::476::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
 Ensuring lease for lockspace hosted-engine, host id 2 is acquired (file:
 /rhev/data-center/mnt/10.8.2.12:_volume1_engine-store/e313da39-594c-46b5-95c9-c445889c745c/ha_agent/hosted-engine.lockspace)
 MainThread::INFO::2014-09-24
 15:09:25,134::state_machine::153::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
 Global metadata: {'maintenance': False}
 MainThread::INFO::2014-09-24
 15:09:25,134::state_machine::158::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
 Host 10.8.2.100 (id 1): {'live-data': True, 'extra':
 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1411564164
 (Wed Sep 24 15:09:24
 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
 'hostname': '10.8.2.100', 'host-id': 1, 'engine-status': {'health':
 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400, 'maintenance':
 False, 'host-ts': 1411564164}
 MainThread::INFO::2014-09-24
 15:09:25,134::state_machine::158::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
 Host 10.8.2.102 (id 3): {'live-data': False, 'extra':
 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1411562496
 (Wed Sep 24 14:41:36
 

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-24 Thread Itamar Heim
It seems we should consider not adding the host if it is already there. Please
open a bug.
Though I really hope to see this done from the GUI in 3.6.


Re: [ovirt-users] hosted engine setup on second host fails

2014-09-23 Thread Stefan Wendler
On 09/22/2014 10:52, Stefan Wendler wrote:
 On 09/19/2014 15:58, Itamar Heim wrote:
 On 09/19/2014 03:32 PM, Stefan Wendler wrote:
 Hi there.

 I'm trying to install a hosted-engine on our second node (first engine
 runs on node1).

 But I always get the message:

 [ ERROR ] Cannot automatically add the host to the Default cluster:
 Cannot add Host. Host with the same address already exists.

 I'm not entirely sure what I have to do when this message comes, so I
 just press ENTER:

 ###
 To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup

(1, 2, 3)[1]:
 

 Is there any other interaction required prior to selecting 1?

 In the Web Gui I get the following message:

 X Adding new Host hosted_engine_2 to Cluster Default

 Here is the console output:

 # hosted-engine --deploy
 [ INFO  ] Stage: Initializing
Continuing will configure this host for serving as hypervisor
 and create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
 [ INFO  ] Generating a temporary VNC password.
 [ INFO  ] Stage: Environment setup
Configuration files: []
Log file:
 /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log

Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
 [ INFO  ] Hardware supports virtualization
 [ INFO  ] Bridge ovirtmgmt already created
 [ INFO  ] Stage: Environment packages setup
 [ INFO  ] Stage: Programs detection
 [ INFO  ] Stage: Environment setup
 [ INFO  ] Stage: Environment customization

--== STORAGE CONFIGURATION ==--

During customization use CTRL-D to abort.
Please specify the storage you would like to use (nfs3,
 nfs4)[nfs3]:
Please specify the full shared storage connection path to use
 (example: host:/path): some address:/volume1
The specified storage location already contains a data domain.
 Is this an additional host setup (Yes, No)[Yes]?
 [ INFO  ] Installing on additional host
Please specify the Host ID [Must be integer, default: 2]:
The Host ID is already known. Is this a re-deployment on an
 additional host that was previously set up (Yes, No)[Yes]?

--== SYSTEM CONFIGURATION ==--

 [WARNING] A configuration file must be supplied to deploy Hosted Engine
 on an additional host.
The answer file may be fetched from the first host using scp.
If you do not want to download it automatically you can abort
 the setup answering no to the following question.
Do you want to scp the answer file from the first host? (Yes,
 No)[Yes]:
Please provide the FQDN or IP of the first host:
 node1.domain
Enter 'root' user password for host node1.domain:
 [ INFO  ] Answer file successfully downloaded

--== NETWORK CONFIGURATION ==--

The following CPU types are supported by this host:
   - model_Westmere: Intel Westmere Family
   - model_Nehalem: Intel Nehalem Family
   - model_Penryn: Intel Penryn Family
   - model_Conroe: Intel Conroe Family

--== HOSTED ENGINE CONFIGURATION ==--

Enter the name which will be used to identify this host inside
 the Administrator Portal [hosted_engine_2]:
Enter 'admin@internal' user password that will be used for
 accessing the Administrator Portal:
Confirm 'admin@internal' user password:
   [ INFO  ] Stage: Setup validation

--== CONFIGURATION PREVIEW ==--

Engine FQDN: engine.domain
Bridge name: ovirtmgmt
SSH daemon port: 22
Gateway address: some address
Host name for web application  : hosted_engine_2
Host ID: 2
Image size GB  : 25
Storage connection : some address:/volume1
Console type   : vnc
Memory size MB : 8192
MAC address: 00:16:3e:3b:8d:66
Boot type  : disk
Number of CPUs : 2
CPU Type   : model_Westmere

Please confirm installation settings (Yes, No)[No]: yes
 [ ERROR ] Invalid value

Please confirm installation settings (Yes, No)[No]: Yes
 [ INFO  ] Stage: Transaction setup
 [ INFO  ] Stage: Misc configuration
 [ INFO  ] Stage: Package installation
 [ INFO  ] Stage: Misc configuration
 [ INFO  ] Configuring libvirt
 [ INFO  ] Configuring VDSM
 [ INFO  ] Starting vdsmd
 [ INFO  ] Waiting for VDSM hardware 

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-23 Thread Itamar Heim


Re: [ovirt-users] hosted engine setup on second host fails

2014-09-23 Thread Stefan Wendler

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-23 Thread Itamar Heim

On Sep 23, 2014 7:03 PM, Stefan Wendler stefan.wend...@tngtech.com wrote:

 On 09/23/2014 17:01, Itamar Heim wrote: 
  On 09/23/2014 05:17 PM, Stefan Wendler wrote: 
  On 09/22/2014 10:52, Stefan Wendler wrote: 
  On 09/19/2014 15:58, Itamar Heim wrote: 
  On 09/19/2014 03:32 PM, Stefan Wendler wrote: 
  Hi there. 
  
  I'm trying to install a hosted-engine on our second node (fist engine 
  runs on node1). 
  
  But I always get the message: 
  
  [ ERROR ] Cannot automatically add the host to the Default cluster: 
  Cannot add Host. Host with the same address already exists. 
  
  I'm not entirely sure what I have to do when this message comes, so I 
  just press ENTER: 
  
  ### 
  To continue make a selection from the options below: 
  (1) Continue setup - engine installation is complete 
  (2) Power off and restart the VM 
  (3) Abort setup 
  
  (1, 2, 3)[1]: 
   
  
  Is there any other interaction required prior to selecting 1? 
  
  In the Web Gui I get the following message: 
  
  X Adding new Host hosted_engine_2 to Cluster Default 
  
  Here is the console output: 
  
  # hosted-engine --deploy 
  [ INFO  ] Stage: Initializing 
  Continuing will configure this host for serving as 
  hypervisor 
  and create a VM where you have to install oVirt Engine afterwards. 
  Are you sure you want to continue? (Yes, No)[Yes]: 
  [ INFO  ] Generating a temporary VNC password. 
  [ INFO  ] Stage: Environment setup 
  Configuration files: [] 
  Log file: 
  /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log
   
  
  
  Version: otopi-1.2.3 (otopi-1.2.3-1.el6) 
  [ INFO  ] Hardware supports virtualization 
  [ INFO  ] Bridge ovirtmgmt already created 
  [ INFO  ] Stage: Environment packages setup 
  [ INFO  ] Stage: Programs detection 
  [ INFO  ] Stage: Environment setup 
  [ INFO  ] Stage: Environment customization 
  
  --== STORAGE CONFIGURATION ==-- 
  
  During customization use CTRL-D to abort. 
  Please specify the storage you would like to use (nfs3, 
  nfs4)[nfs3]: 
  Please specify the full shared storage connection path 
  to use 
  (example: host:/path): some address:/volume1 
  The specified storage location already contains a data 
  domain. 
  Is this an additional host setup (Yes, No)[Yes]? 
  [ INFO  ] Installing on additional host 
  Please specify the Host ID [Must be integer, default: 2]: 
  The Host ID is already known. Is this a re-deployment 
  on an 
  additional host that was previously set up (Yes, No)[Yes]? 
  
  --== SYSTEM CONFIGURATION ==-- 
  
  [WARNING] A configuration file must be supplied to deploy Hosted 
  Engine 
  on an additional host. 
  The answer file may be fetched from the first host 
  using scp. 
  If you do not want to download it automatically you can 
  abort 
  the setup answering no to the following question. 
  Do you want to scp the answer file from the first host? 
  (Yes, 
  No)[Yes]: 
  Please provide the FQDN or IP of the first host: 
  node1.domain 
  Enter 'root' user password for host node1.domain: 
  [ INFO  ] Answer file successfully downloaded 
  
  --== NETWORK CONFIGURATION ==-- 
  
  The following CPU types are supported by this host: 
     - model_Westmere: Intel Westmere Family 
     - model_Nehalem: Intel Nehalem Family 
     - model_Penryn: Intel Penryn Family 
     - model_Conroe: Intel Conroe Family 
  
  --== HOSTED ENGINE CONFIGURATION ==-- 
  
  Enter the name which will be used to identify this host 
  inside 
  the Administrator Portal [hosted_engine_2]: 
  Enter 'admin@internal' user password that will be used for 
  accessing the Administrator Portal: 
  Confirm 'admin@internal' user password: 
     [ INFO  ] Stage: Setup validation 
  
  --== CONFIGURATION PREVIEW ==-- 
  
  Engine FQDN    : engine.domain 
  Bridge name    : ovirtmgmt 
  SSH daemon port    : 22 
  Gateway address    : some address 
  Host name for web application  : hosted_engine_2 
  Host ID    : 2 
  Image size GB  : 25 
  Storage connection : some 
  address:/volume1 
  Console type   : vnc 
  Memory size MB : 8192 
  MAC address    : 00:16:3e:3b:8d:66 
  Boot type  : disk 
  Number of CPUs : 2 
  CPU Type 

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-22 Thread Stefan Wendler

Re: [ovirt-users] hosted engine setup on second host fails

2014-09-19 Thread Itamar Heim

On 09/19/2014 03:32 PM, Stefan Wendler wrote:

Hi there.

I'm trying to install a hosted-engine on our second node (the first engine
runs on node1).

But I always get the message:

[ ERROR ] Cannot automatically add the host to the Default cluster:
Cannot add Host. Host with the same address already exists.
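
Failures like this can be pulled out of the setup transcript mechanically when preparing a report for the list; a minimal sketch (the helper name and sample lines are illustrative, not from this thread):

```python
def first_error(log_lines):
    """Return the first '[ ERROR ]' line from hosted-engine setup output, or None."""
    for line in log_lines:
        if "[ ERROR ]" in line:
            return line.strip()
    return None

# Example: scan a captured transcript for the failing step.
transcript = [
    "[ INFO  ] Stage: Misc configuration",
    "[ ERROR ] Cannot automatically add the host to the Default cluster:",
]
print(first_error(transcript))
```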

I'm not entirely sure what to do when this message appears, so I
just press ENTER:

###
To continue make a selection from the options below:
   (1) Continue setup - engine installation is complete
   (2) Power off and restart the VM
   (3) Abort setup

   (1, 2, 3)[1]:


Is there any other interaction required prior to selecting 1?

In the Web GUI I get the following message:

X Adding new Host hosted_engine_2 to Cluster Default

Here is the console output:

# hosted-engine --deploy
[ INFO  ] Stage: Initializing
   Continuing will configure this host for serving as hypervisor
and create a VM where you have to install oVirt Engine afterwards.
   Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
   Configuration files: []
   Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140919141012-k2lag6.log
   Version: otopi-1.2.3 (otopi-1.2.3-1.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization

   --== STORAGE CONFIGURATION ==--

   During customization use CTRL-D to abort.
   Please specify the storage you would like to use (nfs3,
nfs4)[nfs3]:
   Please specify the full shared storage connection path to use
(example: host:/path): some address:/volume1
   The specified storage location already contains a data domain.
Is this an additional host setup (Yes, No)[Yes]?
[ INFO  ] Installing on additional host
   Please specify the Host ID [Must be integer, default: 2]:
   The Host ID is already known. Is this a re-deployment on an
additional host that was previously set up (Yes, No)[Yes]?

   --== SYSTEM CONFIGURATION ==--

[WARNING] A configuration file must be supplied to deploy Hosted Engine
on an additional host.
   The answer file may be fetched from the first host using scp.
   If you do not want to download it automatically you can abort
the setup answering no to the following question.
   Do you want to scp the answer file from the first host? (Yes,
No)[Yes]:
   Please provide the FQDN or IP of the first host: node1.domain
   Enter 'root' user password for host node1.domain:
[ INFO  ] Answer file successfully downloaded

   --== NETWORK CONFIGURATION ==--

   The following CPU types are supported by this host:
  - model_Westmere: Intel Westmere Family
  - model_Nehalem: Intel Nehalem Family
  - model_Penryn: Intel Penryn Family
  - model_Conroe: Intel Conroe Family

   --== HOSTED ENGINE CONFIGURATION ==--

   Enter the name which will be used to identify this host inside
the Administrator Portal [hosted_engine_2]:
   Enter 'admin@internal' user password that will be used for
accessing the Administrator Portal:
   Confirm 'admin@internal' user password:
  [ INFO  ] Stage: Setup validation

   --== CONFIGURATION PREVIEW ==--

   Engine FQDN: engine.domain
   Bridge name: ovirtmgmt
   SSH daemon port: 22
   Gateway address: some address
   Host name for web application  : hosted_engine_2
   Host ID: 2
   Image size GB  : 25
   Storage connection : some address:/volume1
   Console type   : vnc
   Memory size MB : 8192
   MAC address: 00:16:3e:3b:8d:66
   Boot type  : disk
   Number of CPUs : 2
   CPU Type   : model_Westmere

   Please confirm installation settings (Yes, No)[No]: yes
[ ERROR ] Invalid value

   Please confirm installation settings (Yes, No)[No]: Yes
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Waiting for VDSM hardware info
[ INFO  ] Connecting Storage Domain
[ INFO  ] Configuring VM
[ INFO  ] Updating hosted-engine configuration
[ INFO  ] Stage: Transaction
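
The answer file the setup fetches over scp above drives the whole additional-host deployment, so it is worth inspecting when a re-deploy misbehaves. A rough sketch of reading its otopi-style `key=type:value` entries (the exact format and key names here are an assumption; check the `answers.conf` on your first host):

```python
def parse_answers(text):
    """Parse otopi-style answer-file lines of the form SECTION/key=type:value."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and section headers like [environment:default].
        if not line or line.startswith(("#", "[")):
            continue
        key, _, rest = line.partition("=")
        typ, _, value = rest.partition(":")
        if typ == "bool":
            env[key] = (value == "True")
        elif typ == "int":
            env[key] = int(value)
        else:
            env[key] = value  # str and anything else kept verbatim
    return env

sample = """[environment:default]
OVEHOSTED_STORAGE/storageDomainConnection=str:some.address:/volume1
OVEHOSTED_VM/vmMemSizeMB=int:8192
"""
print(parse_answers(sample))
```

Note the second `partition(":")` only splits on the first colon, so storage paths containing colons (like the NFS connection above) survive intact.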