Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-21 Thread Wee Sritippho


On 20 April 2016 18:29:26 GMT+07:00, Yedidyah Bar David wrote:
>On Wed, Apr 20, 2016 at 1:42 PM, Wee Sritippho 
>wrote:
>> Hi Didi & Martin,
>>
>> I followed your instructions and was able to add the 2nd host. Thank you :)
>>
>> This is what I've done:
>>
>> [root@host01 ~]# hosted-engine --set-maintenance --mode=global
>>
>> [root@host01 ~]# systemctl stop ovirt-ha-agent
>>
>> [root@host01 ~]# systemctl stop ovirt-ha-broker
>>
>> [root@host01 ~]# find /rhev -name hosted-engine.metadata
>>
>/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>>
>/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>>
>> [root@host01 ~]# ls -al
>>
>/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>> lrwxrwxrwx. 1 vdsm kvm 132 Apr 20 02:56
>>
>/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>> ->
>>
>/var/run/vdsm/storage/47e3e4ac-534a-4e11-b14e-27ecb4585431/d92632bf-8c15-44ba-9aa8-4a39dcb81e8d/4761bb8d-779e-4378-8b13-7b12f96f5c56
>>
>> [root@host01 ~]# ls -al
>>
>/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>> lrwxrwxrwx. 1 vdsm kvm 132 Apr 21 03:40
>>
>/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>> ->
>>
>/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
>>
>> [root@host01 ~]# dd if=/dev/zero
>>
>of=/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
>> bs=1M
>> dd: error writing
>>
>‘/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3’:
>> No space left on device
>> 129+0 records in
>> 128+0 records out
>> 134217728 bytes (134 MB) copied, 0.246691 s, 544 MB/s
>>
>> [root@host01 ~]# systemctl start ovirt-ha-broker
>>
>> [root@host01 ~]# systemctl start ovirt-ha-agent
>>
>> [root@host01 ~]# hosted-engine --set-maintenance --mode=none
>>
>> (Found 2 metadata files, but the first one was shown in red when I used
>> 'ls -al', so I assumed it was a leftover from the previous failed
>> installation and didn't touch it)
>>
>> BTW, how to properly clean the FC storage before using it with oVirt?
>I used
>> "parted /dev/mapper/wwid mklabel msdos" to destroy the partition
>table.
>> Isn't that enough?
>
>Even this should not be needed in 3.6. Did you start with 3.6? Or
>upgraded
>from a previous version?

I started with 3.6.1. The first deployment failed due to a corrupted OS when I 
tried to restart the VM with option 3 (power off & restart VM) before installing 
ovirt-engine on it. I then chose the option to destroy the VM and quit the setup, 
destroyed the FC LUN's partition table, and ran hosted-engine --deploy on the 
1st host again, this time successfully.

>Also please verify that output of 'hosted-engine --vm-status' makes
>sense.
>
>Thanks,
>
>>
>>
>> On 20/4/2016 15:11, Martin Sivak wrote:

 Assuming you never deployed a host with ID 52, this is likely a
>result of
 a
 corruption or dirt or something like that.
 I see that you use FC storage. In previous versions, we did not
>clean
 such
 storage, so you might have dirt left.
>>>
>>> This is the exact reason for an error like yours. Using dirty block
>>> storage. Please stop all hosted engine tooling (both agent and
>broker)
>>> and fill the metadata drive with zeros.
>>>
>>> You will have to find the proper hosted-engine.metadata file (which
>>> will be a symlink) under /rhev:
>>>
>>> Example:
>>>
>>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>>
>>>
>>>
>./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> [root@dev-03 rhev]# ls -al
>>>
>>>
>./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>>>
>>>
>./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>> ->
>>>
>/rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>>
>>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>>> clean it - But be CAREFUL to not touch any other file or disk you
>>> might find.
>>>
>>> Then restart the hosted engine tools and all should be fine.
>>>
>>>
>>>
>>> Martin
>>>
>>>
>>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David
>
>>> wrote:

 On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho 
 wrote:
>
> Hi,
>
> I used CentOS-7-x86_64-Minimal-1511

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
Hi everybody,

I added the procedure to the wiki, if you would be so kind as to review it.

https://github.com/oVirt/ovirt-site/pull/188

Thanks

Martin


On Wed, Apr 20, 2016 at 1:29 PM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 1:42 PM, Wee Sritippho  wrote:
>> Hi Didi & Martin,
>>
>> I followed your instructions and was able to add the 2nd host. Thank you :)
>>
>> This is what I've done:
>>
>> [root@host01 ~]# hosted-engine --set-maintenance --mode=global
>>
>> [root@host01 ~]# systemctl stop ovirt-ha-agent
>>
>> [root@host01 ~]# systemctl stop ovirt-ha-broker
>>
>> [root@host01 ~]# find /rhev -name hosted-engine.metadata
>> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>>
>> [root@host01 ~]# ls -al
>> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>> lrwxrwxrwx. 1 vdsm kvm 132 Apr 20 02:56
>> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
>> ->
>> /var/run/vdsm/storage/47e3e4ac-534a-4e11-b14e-27ecb4585431/d92632bf-8c15-44ba-9aa8-4a39dcb81e8d/4761bb8d-779e-4378-8b13-7b12f96f5c56
>>
>> [root@host01 ~]# ls -al
>> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>> lrwxrwxrwx. 1 vdsm kvm 132 Apr 21 03:40
>> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>> ->
>> /var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
>>
>> [root@host01 ~]# dd if=/dev/zero
>> of=/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
>> bs=1M
>> dd: error writing
>> ‘/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3’:
>> No space left on device
>> 129+0 records in
>> 128+0 records out
>> 134217728 bytes (134 MB) copied, 0.246691 s, 544 MB/s
>>
>> [root@host01 ~]# systemctl start ovirt-ha-broker
>>
>> [root@host01 ~]# systemctl start ovirt-ha-agent
>>
>> [root@host01 ~]# hosted-engine --set-maintenance --mode=none
>>
>> (Found 2 metadata files, but the first one was shown in red when I used 'ls -al', so I
>> assumed it was a leftover from the previous failed installation and didn't
>> touch it)
>>
>> BTW, how to properly clean the FC storage before using it with oVirt? I used
>> "parted /dev/mapper/wwid mklabel msdos" to destroy the partition table.
>> Isn't that enough?
>
> Even this should not be needed in 3.6. Did you start with 3.6? Or upgraded
> from a previous version?
>
> Also please verify that output of 'hosted-engine --vm-status' makes sense.
>
> Thanks,
>
>>
>>
>> On 20/4/2016 15:11, Martin Sivak wrote:

 Assuming you never deployed a host with ID 52, this is likely a result of
 a
 corruption or dirt or something like that.
 I see that you use FC storage. In previous versions, we did not clean
 such
 storage, so you might have dirt left.
>>>
>>> This is the exact reason for an error like yours. Using dirty block
>>> storage. Please stop all hosted engine tooling (both agent and broker)
>>> and fill the metadata drive with zeros.
>>>
>>> You will have to find the proper hosted-engine.metadata file (which
>>> will be a symlink) under /rhev:
>>>
>>> Example:
>>>
>>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>>
>>>
>>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> [root@dev-03 rhev]# ls -al
>>>
>>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>>>
>>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>> ->
>>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>>
>>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>>> clean it - But be CAREFUL to not touch any other file or disk you
>>> might find.
>>>
>>> Then restart the hosted engine tools and all should be fine.
>>>
>>>
>>>
>>> Martin
>>>
>>>
>>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David 
>>> wrote:

 On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho 
 wrote:
>
> Hi,
>
> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the
> engine.
>
> The 1st host and the hosted-engine were installed successfully, but the
> 2nd
> host failed with this error message:
>
> "Failed to execute stage 'Setup validation': Metadata version 2 from

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 1:42 PM, Wee Sritippho  wrote:
> Hi Didi & Martin,
>
> I followed your instructions and was able to add the 2nd host. Thank you :)
>
> This is what I've done:
>
> [root@host01 ~]# hosted-engine --set-maintenance --mode=global
>
> [root@host01 ~]# systemctl stop ovirt-ha-agent
>
> [root@host01 ~]# systemctl stop ovirt-ha-broker
>
> [root@host01 ~]# find /rhev -name hosted-engine.metadata
> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
>
> [root@host01 ~]# ls -al
> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
> lrwxrwxrwx. 1 vdsm kvm 132 Apr 20 02:56
> /rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
> ->
> /var/run/vdsm/storage/47e3e4ac-534a-4e11-b14e-27ecb4585431/d92632bf-8c15-44ba-9aa8-4a39dcb81e8d/4761bb8d-779e-4378-8b13-7b12f96f5c56
>
> [root@host01 ~]# ls -al
> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
> lrwxrwxrwx. 1 vdsm kvm 132 Apr 21 03:40
> /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
> ->
> /var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
>
> [root@host01 ~]# dd if=/dev/zero
> of=/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3
> bs=1M
> dd: error writing
> ‘/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3’:
> No space left on device
> 129+0 records in
> 128+0 records out
> 134217728 bytes (134 MB) copied, 0.246691 s, 544 MB/s
>
> [root@host01 ~]# systemctl start ovirt-ha-broker
>
> [root@host01 ~]# systemctl start ovirt-ha-agent
>
> [root@host01 ~]# hosted-engine --set-maintenance --mode=none
>
> (Found 2 metadata files, but the first one was shown in red when I used 'ls -al', so I
> assumed it was a leftover from the previous failed installation and didn't
> touch it)
>
> BTW, how to properly clean the FC storage before using it with oVirt? I used
> "parted /dev/mapper/wwid mklabel msdos" to destroy the partition table.
> Isn't that enough?

Even this should not be needed in 3.6. Did you start with 3.6? Or upgraded
from a previous version?

Also please verify that output of 'hosted-engine --vm-status' makes sense.
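That is, roughly: running

hosted-engine --vm-status

on host 1 should list only the host IDs you actually deployed, with current
timestamps and sane scores, and no stale entry such as the bogus "host 52".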

Thanks,

>
>
> On 20/4/2016 15:11, Martin Sivak wrote:
>>>
>>> Assuming you never deployed a host with ID 52, this is likely a result of
>>> a
>>> corruption or dirt or something like that.
>>> I see that you use FC storage. In previous versions, we did not clean
>>> such
>>> storage, so you might have dirt left.
>>
>> This is the exact reason for an error like yours. Using dirty block
>> storage. Please stop all hosted engine tooling (both agent and broker)
>> and fill the metadata drive with zeros.
>>
>> You will have to find the proper hosted-engine.metadata file (which
>> will be a symlink) under /rhev:
>>
>> Example:
>>
>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>
>>
>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> [root@dev-03 rhev]# ls -al
>>
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>>
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>> ->
>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>
>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>> clean it - But be CAREFUL to not touch any other file or disk you
>> might find.
>>
>> Then restart the hosted engine tools and all should be fine.
>>
>>
>>
>> Martin
>>
>>
>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David 
>> wrote:
>>>
>>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho 
>>> wrote:

 Hi,

 I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the
 engine.

 The 1st host and the hosted-engine were installed successfully, but the
 2nd
 host failed with this error message:

 "Failed to execute stage 'Setup validation': Metadata version 2 from
 host 52
 too new for this agent (highest compatible version: 1)"
>>>
>>> Assuming you never deployed a host with ID 52, this is likely a result of
>>> a
>>> corruption or dirt or something like that.
>>>
>>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>>
>>> I see that you use FC storage. In previous versions,

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Wee Sritippho

Hi Didi & Martin,

I followed your instructions and was able to add the 2nd host. Thank you :)

This is what I've done:

[root@host01 ~]# hosted-engine --set-maintenance --mode=global

[root@host01 ~]# systemctl stop ovirt-ha-agent

[root@host01 ~]# systemctl stop ovirt-ha-broker

[root@host01 ~]# find /rhev -name hosted-engine.metadata
/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata

[root@host01 ~]# ls -al 
/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata
lrwxrwxrwx. 1 vdsm kvm 132 Apr 20 02:56 
/rhev/data-center/mnt/blockSD/47e3e4ac-534a-4e11-b14e-27ecb4585431/ha_agent/hosted-engine.metadata 
-> 
/var/run/vdsm/storage/47e3e4ac-534a-4e11-b14e-27ecb4585431/d92632bf-8c15-44ba-9aa8-4a39dcb81e8d/4761bb8d-779e-4378-8b13-7b12f96f5c56


[root@host01 ~]# ls -al 
/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata
lrwxrwxrwx. 1 vdsm kvm 132 Apr 21 03:40 
/rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata 
-> 
/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3


[root@host01 ~]# dd if=/dev/zero 
of=/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3 
bs=1M
dd: error writing 
‘/var/run/vdsm/storage/336dc4a3-f65c-4a67-bc42-1f73597564cf/49d6ee16-cfa0-47f2-b461-125bc6f614db/89ee314d-33ce-43fb-9a66-0852c5f675d3’: 
No space left on device

129+0 records in
128+0 records out
134217728 bytes (134 MB) copied, 0.246691 s, 544 MB/s

[root@host01 ~]# systemctl start ovirt-ha-broker

[root@host01 ~]# systemctl start ovirt-ha-agent

[root@host01 ~]# hosted-engine --set-maintenance --mode=none

(Found 2 metadata files, but the first one was shown in red when I used 
'ls -al', so I assumed it was a leftover from the previous failed installation 
and didn't touch it)
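(Before restarting the services, a quick extra check - not one of the steps 
above, just a suggestion - would be to confirm the volume now reads back as 
zeros:

hexdump -C /rhev/data-center/mnt/blockSD/336dc4a3-f65c-4a67-bc42-1f73597564cf/ha_agent/hosted-engine.metadata | head

which should show a single line of 00 bytes followed by a lone '*'.)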


BTW, how do I properly clean the FC storage before using it with oVirt? I 
used "parted /dev/mapper/wwid mklabel msdos" to destroy the partition 
table. Isn't that enough?
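(mklabel only rewrites the partition table at the very start of the device. If 
one really wanted to scrub leftover hosted-engine data by hand - an assumption 
on my part, not advice from this thread, and it destroys everything on the 
LUN - something along these lines would be more thorough:

wipefs -a /dev/mapper/<wwid>
dd if=/dev/zero of=/dev/mapper/<wwid> bs=1M count=512 oflag=direct

where <wwid> is a placeholder for the multipath device, double-checked before 
writing.)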


On 20/4/2016 15:11, Martin Sivak wrote:

Assuming you never deployed a host with ID 52, this is likely a result of a
corruption or dirt or something like that.
I see that you use FC storage. In previous versions, we did not clean such
storage, so you might have dirt left.

This is the exact reason for an error like yours. Using dirty block
storage. Please stop all hosted engine tooling (both agent and broker)
and fill the metadata drive with zeros.

You will have to find the proper hosted-engine.metadata file (which
will be a symlink) under /rhev:

Example:

[root@dev-03 rhev]# find . -name hosted-engine.metadata

./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

[root@dev-03 rhev]# ls -al
./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
-> 
/rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855

And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
clean it - But be CAREFUL to not touch any other file or disk you
might find.

Then restart the hosted engine tools and all should be fine.



Martin


On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:

On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:

Hi,

I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.

The 1st host and the hosted-engine were installed successfully, but the 2nd
host failed with this error message:

"Failed to execute stage 'Setup validation': Metadata version 2 from host 52
too new for this agent (highest compatible version: 1)"

Assuming you never deployed a host with ID 52, this is likely a result of a
corruption or dirt or something like that.

What do you get on host 1 running 'hosted-engine --vm-status'?

I see that you use FC storage. In previous versions, we did not clean such
storage, so you might have dirt left. See also [1]. You can try cleaning
using [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
[2] 
https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure


Here is the package versions:

[root@host02 ~]# rpm -qa | grep ovirt
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.5.1-1.el7.cent

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
> And we also do not clean on upgrades... Perhaps we can? Should? Optionally?
>

We can't. We do not execute any setup tool during upgrade and the
clean procedure
requires that all hosted engine tooling is shut down.

Martin

On Wed, Apr 20, 2016 at 11:40 AM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 11:40 AM, Martin Sivak  wrote:
>>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>>> I guess it's supposed to be able to handle this, but perhaps users want
>>> to clean the lockspace because dirt there causes also problems with
>>> sanlock, no?
>>
>> Sanlock can be up, but the lockspace has to be unused.
>>
>>> So the only tool we have to clean metadata is '--clean-metadata', which
>>> works one-by-one?
>>
>> Correct, it needs to acquire the lock first to make sure nobody is writing.
>>
>> The dirty disk issue should not be happening anymore, we added an
>> equivalent of the DD to hosted engine setup. But we might have a bug there
>> of course.
>
> And we also do not clean on upgrades... Perhaps we can? Should? Optionally?
>
>>
>> Martin
>>
>> On Wed, Apr 20, 2016 at 10:34 AM, Yedidyah Bar David  wrote:
>>> On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak  wrote:
> after moving to global maintenance.

 Good point.

> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?

 Reinitialize lockspace clears the sanlock lockspace, not the metadata
 file. Those are two different places.
>>>
>>> So the only tool we have to clean metadata is '--clean-metadata', which
>>> works one-by-one?
>>>
>>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>>> I guess it's supposed to be able to handle this, but perhaps users want
>>> to clean the lockspace because dirt there causes also problems with
>>> sanlock, no?
>>>

> Care to add this to the howto page?

 Yeah, I can do that.
>>>
>>> Thanks!
>>>

 Martin

 On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  
 wrote:
> On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
>>> Assuming you never deployed a host with ID 52, this is likely a result 
>>> of a
>>> corruption or dirt or something like that.
>>
>>> I see that you use FC storage. In previous versions, we did not clean 
>>> such
>>> storage, so you might have dirt left.
>>
>> This is the exact reason for an error like yours. Using dirty block
>> storage. Please stop all hosted engine tooling (both agent and broker)
>> and fill the metadata drive with zeros.
>
> after moving to global maintenance.
>
> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?
> Thanks!
>
>>
>> You will have to find the proper hosted-engine.metadata file (which
>> will be a symlink) under /rhev:
>>
>> Example:
>>
>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>
>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> [root@dev-03 rhev]# ls -al
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>> -> 
>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>
>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>> clean it - But be CAREFUL to not touch any other file or disk you
>> might find.
>>
>> Then restart the hosted engine tools and all should be fine.
>>
>>
>>
>> Martin
>>
>>
>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  
>> wrote:
>>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  
>>> wrote:
 Hi,

 I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
 engine.

 The 1st host and the hosted-engine were installed successfully, but 
 the 2nd
 host failed with this error message:

 "Failed to execute stage 'Setup validation': Metadata version 2 from 
 host 52
 too new for this agent (highest compatible version: 1)"
>>>
>>> Assuming you never deployed a host with ID 52, this is likely a result 
>>> of a
>>> corruption or dirt or something like that.
>>>
>>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>>
>>> I see that you use FC storage. In previous versions, we d

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 11:40 AM, Martin Sivak  wrote:
>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>> I guess it's supposed to be able to handle this, but perhaps users want
>> to clean the lockspace because dirt there causes also problems with
>> sanlock, no?
>
> Sanlock can be up, but the lockspace has to be unused.
>
>> So the only tool we have to clean metadata is '--clean-metadata', which
>> works one-by-one?
>
> Correct, it needs to acquire the lock first to make sure nobody is writing.
>
> The dirty disk issue should not be happening anymore, we added an
> equivalent of the DD to hosted engine setup. But we might have a bug there
> of course.

And we also do not clean on upgrades... Perhaps we can? Should? Optionally?

>
> Martin
>
> On Wed, Apr 20, 2016 at 10:34 AM, Yedidyah Bar David  wrote:
>> On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak  wrote:
 after moving to global maintenance.
>>>
>>> Good point.
>>>
 Martin - any advantage of this over '--reinitialize-lockspace'? Besides
 that it works also in older versions? Care to add this to the howto page?
>>>
>>> Reinitialize lockspace clears the sanlock lockspace, not the metadata
>>> file. Those are two different places.
>>
>> So the only tool we have to clean metadata is '--clean-metadata', which
>> works one-by-one?
>>
>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>> I guess it's supposed to be able to handle this, but perhaps users want
>> to clean the lockspace because dirt there causes also problems with
>> sanlock, no?
>>
>>>
 Care to add this to the howto page?
>>>
>>> Yeah, I can do that.
>>
>> Thanks!
>>
>>>
>>> Martin
>>>
>>> On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  
>>> wrote:
 On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
>> Assuming you never deployed a host with ID 52, this is likely a result 
>> of a
>> corruption or dirt or something like that.
>
>> I see that you use FC storage. In previous versions, we did not clean 
>> such
>> storage, so you might have dirt left.
>
> This is the exact reason for an error like yours. Using dirty block
> storage. Please stop all hosted engine tooling (both agent and broker)
> and fill the metadata drive with zeros.

 after moving to global maintenance.

 Martin - any advantage of this over '--reinitialize-lockspace'? Besides
 that it works also in older versions? Care to add this to the howto page?
 Thanks!

>
> You will have to find the proper hosted-engine.metadata file (which
> will be a symlink) under /rhev:
>
> Example:
>
> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>
> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>
> [root@dev-03 rhev]# ls -al
> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>
> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
> -> 
> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>
> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
> clean it - But be CAREFUL to not touch any other file or disk you
> might find.
>
> Then restart the hosted engine tools and all should be fine.
>
>
>
> Martin
>
>
> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  
> wrote:
>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  
>> wrote:
>>> Hi,
>>>
>>> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
>>> engine.
>>>
>>> The 1st host and the hosted-engine were installed successfully, but the 
>>> 2nd
>>> host failed with this error message:
>>>
>>> "Failed to execute stage 'Setup validation': Metadata version 2 from 
>>> host 52
>>> too new for this agent (highest compatible version: 1)"
>>
>> Assuming you never deployed a host with ID 52, this is likely a result 
>> of a
>> corruption or dirt or something like that.
>>
>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>
>> I see that you use FC storage. In previous versions, we did not clean 
>> such
>> storage, so you might have dirt left. See also [1]. You can try cleaning
>> using [2].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
>> [2] 
>> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>>
>>>
>>> Here is the package versions:
>>>
>>> [root@host02 ~]# rpm -qa | grep ovir

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
> I guess it's supposed to be able to handle this, but perhaps users want
> to clean the lockspace because dirt there causes also problems with
> sanlock, no?

Sanlock can be up, but the lockspace has to be unused.

> So the only tool we have to clean metadata is '--clean-metadata', which
> works one-by-one?

Correct, it needs to acquire the lock first to make sure nobody is writing.

The dirty disk issue should not be happening anymore, we added an
equivalent of the DD to hosted engine setup. But we might have a bug there
of course.
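
As a rough sketch of the '--clean-metadata' route (an assumption of the usual
invocation, not something tested in this thread - check 'hosted-engine --help'
on your version):

systemctl stop ovirt-ha-agent
hosted-engine --clean-metadata
systemctl start ovirt-ha-agent

run on each host whose slot needs clearing, one host at a time, since it has
to acquire that host's lock first.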

Martin

On Wed, Apr 20, 2016 at 10:34 AM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak  wrote:
>>> after moving to global maintenance.
>>
>> Good point.
>>
>>> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>>> that it works also in older versions? Care to add this to the howto page?
>>
>> Reinitialize lockspace clears the sanlock lockspace, not the metadata
>> file. Those are two different places.
>
> So the only tool we have to clean metadata is '--clean-metadata', which
> works one-by-one?
>
> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
> I guess it's supposed to be able to handle this, but perhaps users want
> to clean the lockspace because dirt there causes also problems with
> sanlock, no?
>
>>
>>> Care to add this to the howto page?
>>
>> Yeah, I can do that.
>
> Thanks!
>
>>
>> Martin
>>
>> On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  wrote:
>>> On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
> Assuming you never deployed a host with ID 52, this is likely a result of 
> a
> corruption or dirt or something like that.

> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left.

 This is the exact reason for an error like yours. Using dirty block
 storage. Please stop all hosted engine tooling (both agent and broker)
 and fill the metadata drive with zeros.
>>>
>>> after moving to global maintenance.
>>>
>>> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>>> that it works also in older versions? Care to add this to the howto page?
>>> Thanks!
>>>

 You will have to find the proper hosted-engine.metadata file (which
 will be a symlink) under /rhev:

 Example:

 [root@dev-03 rhev]# find . -name hosted-engine.metadata

 ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

 [root@dev-03 rhev]# ls -al
 ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

 lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
 ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
 -> 
 /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855

 And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
 clean it - But be CAREFUL to not touch any other file or disk you
 might find.

 Then restart the hosted engine tools and all should be fine.



 Martin


 On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  
 wrote:
> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
>> Hi,
>>
>> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
>> engine.
>>
>> The 1st host and the hosted-engine were installed successfully, but the 
>> 2nd
>> host failed with this error message:
>>
>> "Failed to execute stage 'Setup validation': Metadata version 2 from 
>> host 52
>> too new for this agent (highest compatible version: 1)"
>
> Assuming you never deployed a host with ID 52, this is likely a result of 
> a
> corruption or dirt or something like that.
>
> What do you get on host 1 running 'hosted-engine --vm-status'?
>
> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left. See also [1]. You can try cleaning
> using [2].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
> [2] 
> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>
>>
>> Here is the package versions:
>>
>> [root@host02 ~]# rpm -qa | grep ovirt
>> libgovirt-0.3.3-1.el7_2.1.x86_64
>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
>> ovirt-hosted-engine-setup-1.3.4.0-1.el

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak  wrote:
>> after moving to global maintenance.
>
> Good point.
>
>> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>> that it works also in older versions? Care to add this to the howto page?
>
> Reinitialize lockspace clears the sanlock lockspace, not the metadata
> file. Those are two different places.

So the only tool we have to clean metadata is '--clean-metadata', which
works one-by-one?

Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
I guess it's supposed to be able to handle this, but perhaps users want
to clean the lockspace because dirt there causes also problems with
sanlock, no?

>
>> Care to add this to the howto page?
>
> Yeah, I can do that.

Thanks!

>
> Martin
>
> On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  wrote:
>> On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
 Assuming you never deployed a host with ID 52, this is likely a result of a
 corruption or dirt or something like that.
>>>
 I see that you use FC storage. In previous versions, we did not clean such
 storage, so you might have dirt left.
>>>
>>> This is the exact reason for an error like yours. Using dirty block
>>> storage. Please stop all hosted engine tooling (both agent and broker)
>>> and fill the metadata drive with zeros.
>>
>> after moving to global maintenance.
>>
>> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>> that it works also in older versions? Care to add this to the howto page?
>> Thanks!
>>
>>>
>>> You will have to find the proper hosted-engine.metadata file (which
>>> will be a symlink) under /rhev:
>>>
>>> Example:
>>>
>>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>>
>>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> [root@dev-03 rhev]# ls -al
>>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>>
>>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>> -> 
>>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>>
>>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>>> clean it - But be CAREFUL to not touch any other file or disk you
>>> might find.
>>>
>>> Then restart the hosted engine tools and all should be fine.
>>>
>>>
>>>
>>> Martin
>>>
>>>
>>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:
 On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
> Hi,
>
> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
> engine.
>
> The 1st host and the hosted-engine were installed successfully, but the 
> 2nd
> host failed with this error message:
>
> "Failed to execute stage 'Setup validation': Metadata version 2 from host 
> 52
> too new for this agent (highest compatible version: 1)"

 Assuming you never deployed a host with ID 52, this is likely a result of a
 corruption or dirt or something like that.

 What do you get on host 1 running 'hosted-engine --vm-status'?

 I see that you use FC storage. In previous versions, we did not clean such
 storage, so you might have dirt left. See also [1]. You can try cleaning
 using [2].

 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
 [2] 
 https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure

>
> Here is the package versions:
>
> [root@host02 ~]# rpm -qa | grep ovirt
> libgovirt-0.3.3-1.el7_2.1.x86_64
> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
> ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>
> [root@engine ~]# rpm -qa | grep ovirt
> ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
> ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
> ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.n

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
> after moving to global maintenance.

Good point.

> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?

Reinitialize lockspace clears the sanlock lockspace, not the metadata
file. Those are two different places.
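
In other words, roughly (a sketch - exact paths and options depend on your
version):

hosted-engine --reinitialize-lockspace     # rewrites the sanlock lockspace (the howto's recovery procedure)
dd if=/dev/zero of=<metadata target> bs=1M # wipes the shared agent metadata, which is what was dirty here

<metadata target> is a placeholder for the hosted-engine.metadata symlink
target found under /rhev. Clearing one of the two does not touch the other.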

> Care to add this to the howto page?

Yeah, I can do that.

Martin

On Wed, Apr 20, 2016 at 10:17 AM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
>>> Assuming you never deployed a host with ID 52, this is likely a result of a
>>> corruption or dirt or something like that.
>>
>>> I see that you use FC storage. In previous versions, we did not clean such
>>> storage, so you might have dirt left.
>>
>> This is the exact reason for an error like yours. Using dirty block
>> storage. Please stop all hosted engine tooling (both agent and broker)
>> and fill the metadata drive with zeros.
>
> after moving to global maintenance.
>
> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?
> Thanks!
>
>>
>> You will have to find the proper hosted-engine.metadata file (which
>> will be a symlink) under /rhev:
>>
>> Example:
>>
>> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>>
>> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> [root@dev-03 rhev]# ls -al
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>>
>> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
>> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>> -> 
>> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>>
>> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
>> clean it - But be CAREFUL to not touch any other file or disk you
>> might find.
>>
>> Then restart the hosted engine tools and all should be fine.
>>
>>
>>
>> Martin
>>
>>
>> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:
>>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
 Hi,

 I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the 
 engine.

 The 1st host and the hosted-engine were installed successfully, but the 2nd
 host failed with this error message:

 "Failed to execute stage 'Setup validation': Metadata version 2 from host 
 52
 too new for this agent (highest compatible version: 1)"
>>>
>>> Assuming you never deployed a host with ID 52, this is likely a result of a
>>> corruption or dirt or something like that.
>>>
>>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>>
>>> I see that you use FC storage. In previous versions, we did not clean such
>>> storage, so you might have dirt left. See also [1]. You can try cleaning
>>> using [2].
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
>>> [2] 
>>> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>>>

 Here is the package versions:

 [root@host02 ~]# rpm -qa | grep ovirt
 libgovirt-0.3.3-1.el7_2.1.x86_64
 ovirt-vmconsole-1.0.0-1.el7.centos.noarch
 ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
 ovirt-host-deploy-1.4.1-1.el7.centos.noarch
 ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
 ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
 ovirt-release36-007-1.noarch
 ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
 ovirt-setup-lib-1.0.1-1.el7.centos.noarch

 [root@engine ~]# rpm -qa | grep ovirt
 ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
 ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
 ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
 ovirt-host-deploy-1.4.1-1.el7.centos.noarch
 ovirt-release36-007-1.noarch
 ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
 ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
 ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
 ovirt-setup-lib-1.0.1-1.el7.centos.noarch
 ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
 ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
 ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
 ovirt-vmconsole-1.0.0-1.el7.centos.noarch
 ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
 ovirt-engine-3.6.4.1-1.el7.centos.noarch
 ovir

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak  wrote:
>> Assuming you never deployed a host with ID 52, this is likely a result of a
>> corruption or dirt or something like that.
>
>> I see that you use FC storage. In previous versions, we did not clean such
>> storage, so you might have dirt left.
>
> This is the exact reason for an error like yours. Using dirty block
> storage. Please stop all hosted engine tooling (both agent and broker)
> and fill the metadata drive with zeros.

after moving to global maintenance.

Martin - any advantage of this over '--reinitialize-lockspace'? Besides
that it works also in older versions? Care to add this to the howto page?
Thanks!

>
> You will have to find the proper hosted-engine.metadata file (which
> will be a symlink) under /rhev:
>
> Example:
>
> [root@dev-03 rhev]# find . -name hosted-engine.metadata
>
> ./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>
> [root@dev-03 rhev]# ls -al
> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
>
> lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
> ./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
> -> 
> /rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855
>
> And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
> clean it - But be CAREFUL to not touch any other file or disk you
> might find.
>
> Then restart the hosted engine tools and all should be fine.
>
>
>
> Martin
>
>
> On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:
>> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
>>> Hi,
>>>
>>> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.
>>>
>>> The 1st host and the hosted-engine were installed successfully, but the 2nd
>>> host failed with this error message:
>>>
>>> "Failed to execute stage 'Setup validation': Metadata version 2 from host 52
>>> too new for this agent (highest compatible version: 1)"
>>
>> Assuming you never deployed a host with ID 52, this is likely a result of a
>> corruption or dirt or something like that.
>>
>> What do you get on host 1 running 'hosted-engine --vm-status'?
>>
>> I see that you use FC storage. In previous versions, we did not clean such
>> storage, so you might have dirt left. See also [1]. You can try cleaning
>> using [2].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
>> [2] 
>> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>>
>>>
>>> Here is the package versions:
>>>
>>> [root@host02 ~]# rpm -qa | grep ovirt
>>> libgovirt-0.3.3-1.el7_2.1.x86_64
>>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>>> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
>>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>>> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
>>> ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
>>> ovirt-release36-007-1.noarch
>>> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
>>> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>>>
>>> [root@engine ~]# rpm -qa | grep ovirt
>>> ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
>>> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
>>> ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
>>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>>> ovirt-release36-007-1.noarch
>>> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
>>> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
>>> ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
>>> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>>> ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
>>> ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
>>> ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
>>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>>> ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
>>> ovirt-guest-agent-common-1.0.11-1.el7.noarch
>>> ovirt-engine-wildfly-8.2.1-1.el7.x86_64
>>> ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
>>> ovirt-engine-websocket-proxy-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-restapi-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-userportal-3.6.4.1-1.el7.centos.noarch
>>> ovirt-engine-setup-plugin-ovirt-engine-3.6.4.1-1.el7.centos.noarch
>>> ovirt-image-uploader-3.6.0-1.el7.centos.noarch
>>> ovirt-engine-extension-aaa-jdbc-1.0.6-1.el7.noarch

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-20 Thread Martin Sivak
> Assuming you never deployed a host with ID 52, this is likely a result of a
> corruption or dirt or something like that.

> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left.

This is the exact reason for an error like yours. Using dirty block
storage. Please stop all hosted engine tooling (both agent and broker)
and fill the metadata drive with zeros.

You will have to find the proper hosted-engine.metadata file (which
will be a symlink) under /rhev:

Example:

[root@dev-03 rhev]# find . -name hosted-engine.metadata

./data-center/mnt/str-01.rhev.lab.eng.brq.redhat.com:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

[root@dev-03 rhev]# ls -al
./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata

lrwxrwxrwx. 1 vdsm kvm 201 Mar 15 15:00
./data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/ha_agent/hosted-engine.metadata
-> 
/rhev/data-center/mnt/str-01:_mnt_export_nfs_lv2_msivak/868a1a4e-9f94-42f5-af23-8f884b3c53d5/images/6ab3f215-f234-4cd4-b9d4-8680767c3d99/dcbfa48d-8543-42d1-93dc-aa40855c4855

And use (for example) dd if=/dev/zero of=/path/to/metadata bs=1M to
clean it - But be CAREFUL to not touch any other file or disk you
might find.

Then restart the hosted engine tools and all should be fine.
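
Put together with the global-maintenance step mentioned elsewhere in the
thread, the whole sequence looks roughly like this (a sketch; the dd target is
the symlink destination found by the 'find' below, so double-check it before
writing):

hosted-engine --set-maintenance --mode=global
systemctl stop ovirt-ha-agent
systemctl stop ovirt-ha-broker
find /rhev -name hosted-engine.metadata
ls -al <each file found>                    # note the symlink target
dd if=/dev/zero of=<symlink target> bs=1M   # zero the metadata volume
systemctl start ovirt-ha-broker
systemctl start ovirt-ha-agent
hosted-engine --set-maintenance --mode=none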



Martin


On Wed, Apr 20, 2016 at 8:20 AM, Yedidyah Bar David  wrote:
> On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
>> Hi,
>>
>> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.
>>
>> The 1st host and the hosted-engine were installed successfully, but the 2nd
>> host failed with this error message:
>>
>> "Failed to execute stage 'Setup validation': Metadata version 2 from host 52
>> too new for this agent (highest compatible version: 1)"
>
> Assuming you never deployed a host with ID 52, this is likely a result of a
> corruption or dirt or something like that.
>
> What do you get on host 1 running 'hosted-engine --vm-status'?
>
> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left. See also [1]. You can try cleaning
> using [2].
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
> [2] 
> https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
>
>>
>> Here is the package versions:
>>
>> [root@host02 ~]# rpm -qa | grep ovirt
>> libgovirt-0.3.3-1.el7_2.1.x86_64
>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
>> ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
>> ovirt-release36-007-1.noarch
>> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
>> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>>
>> [root@engine ~]# rpm -qa | grep ovirt
>> ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
>> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
>> ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
>> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
>> ovirt-release36-007-1.noarch
>> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
>> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
>> ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
>> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>> ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
>> ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
>> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
>> ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
>> ovirt-guest-agent-common-1.0.11-1.el7.noarch
>> ovirt-engine-wildfly-8.2.1-1.el7.x86_64
>> ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
>> ovirt-engine-websocket-proxy-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-restapi-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-userportal-3.6.4.1-1.el7.centos.noarch
>> ovirt-engine-setup-plugin-ovirt-engine-3.6.4.1-1.el7.centos.noarch
>> ovirt-image-uploader-3.6.0-1.el7.centos.noarch
>> ovirt-engine-extension-aaa-jdbc-1.0.6-1.el7.noarch
>> ovirt-engine-lib-3.6.4.1-1.el7.centos.noarch
>>
>>
>> Here are the log files:
>> https://gist.github.com/weeix/1743f88d3afe1f405889a67ed4011141
>>
>> --
>> Wee
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Didi
> ___

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-19 Thread วีร์ ศรีทิพโพธิ์
I'll try to clean the storage and then report back.

This is what I got from "hosted-engine --vm-status" in host01:

[root@host01 ~]# hosted-engine --vm-status
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 
117, in 
if not status_checker.print_status():
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 
60, in print_status
all_host_stats = ha_cli.get_all_host_stats()
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 160, in get_all_host_stats
return self.get_all_stats(self.StatModes.HOST)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 107, in get_all_stats
stats = self._parse_stats(stats, mode)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 146, in _parse_stats
md = metadata.parse_metadata_to_dict(host_id, data)
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/metadata.py", line 
156, in parse_metadata_to_dict
constants.METADATA_FEATURE_VERSION))
ovirt_hosted_engine_ha.lib.exceptions.FatalMetadataError: Metadata version 2 
from host 52 too new for this agent (highest compatible version: 1)

- Original Message -
From: "Yedidyah Bar David" 
To: "Wee Sritippho" 
Cc: "users" 
Sent: Wednesday, 20 April 2016, 1:20:44 PM
Subject: Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 
2nd host

On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
> Hi,
>
> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.
>
> The 1st host and the hosted-engine were installed successfully, but the 2nd
> host failed with this error message:
>
> "Failed to execute stage 'Setup validation': Metadata version 2 from host 52
> too new for this agent (highest compatible version: 1)"

Assuming you never deployed a host with ID 52, this is likely a result of a
corruption or dirt or something like that.

What do you get on host 1 running 'hosted-engine --vm-status'?

I see that you use FC storage. In previous versions, we did not clean such
storage, so you might have dirt left. See also [1]. You can try cleaning
using [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
[2] https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure

>
> Here are the package versions:
>
> [root@host02 ~]# rpm -qa | grep ovirt
> libgovirt-0.3.3-1.el7_2.1.x86_64
> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
> ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>
> [root@engine ~]# rpm -qa | grep ovirt
> ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
> ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
> ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
> ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
> ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
> ovirt-guest-agent-common-1.0.11-1.el7.noarch
> ovirt-engine-wildfly-8.2.1-1.el7.x86_64
> ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
> ovirt-engine-websocket-proxy-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-restapi-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-userportal-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.6.4.1-1.el7.centos.noarch
> ovirt-image-uploader-

Re: [ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-19 Thread Yedidyah Bar David
On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho  wrote:
> Hi,
>
> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.
>
> The 1st host and the hosted-engine were installed successfully, but the 2nd
> host failed with this error message:
>
> "Failed to execute stage 'Setup validation': Metadata version 2 from host 52
> too new for this agent (highest compatible version: 1)"

Assuming you never deployed a host with ID 52, this is likely a result of a
corruption or dirt or something like that.

What do you get on host 1 running 'hosted-engine --vm-status'?

I see that you use FC storage. In previous versions, we did not clean such
storage, so you might have dirt left. See also [1]. You can try cleaning
using [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1238823
[2] https://www.ovirt.org/documentation/how-to/hosted-engine/#lockspace-corrupted-recovery-procedure
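
Before wiping anything, a read-only check can confirm that the metadata volume really carries entries for host IDs that were never deployed. A rough sketch follows; the path is a placeholder to be replaced with the hosted-engine.metadata symlink found on the host, and it assumes that healthy agents write human-readable host-id= lines into their slots, as 'hosted-engine --vm-status' normally shows:

# inspect_metadata_sketch.py -- read-only; path and format assumptions as noted above.
import re

# Placeholder path: substitute the real storage-domain UUID from your host.
METADATA_PATH = ("/rhev/data-center/mnt/blockSD/<storage-domain-uuid>"
                 "/ha_agent/hosted-engine.metadata")

with open(METADATA_PATH, "rb") as f:
    raw = f.read()

# Collect every host-id value present in the readable part of the volume.
host_ids = sorted(set(int(m) for m in re.findall(br"host-id=(\d+)", raw)))
print("host-id values found on the metadata volume:", host_ids)
# A host ID you never deployed (such as 52) showing up here points to
# leftover data rather than a real agent.

If unexpected IDs do appear, reinitializing the metadata per the linked procedure is the next step.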

>
> Here are the package versions:
>
> [root@host02 ~]# rpm -qa | grep ovirt
> libgovirt-0.3.3-1.el7_2.1.x86_64
> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
> ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
>
> [root@engine ~]# rpm -qa | grep ovirt
> ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
> ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
> ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
> ovirt-host-deploy-1.4.1-1.el7.centos.noarch
> ovirt-release36-007-1.noarch
> ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
> ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
> ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
> ovirt-setup-lib-1.0.1-1.el7.centos.noarch
> ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
> ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
> ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
> ovirt-vmconsole-1.0.0-1.el7.centos.noarch
> ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
> ovirt-guest-agent-common-1.0.11-1.el7.noarch
> ovirt-engine-wildfly-8.2.1-1.el7.x86_64
> ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
> ovirt-engine-websocket-proxy-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-restapi-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-userportal-3.6.4.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.6.4.1-1.el7.centos.noarch
> ovirt-image-uploader-3.6.0-1.el7.centos.noarch
> ovirt-engine-extension-aaa-jdbc-1.0.6-1.el7.noarch
> ovirt-engine-lib-3.6.4.1-1.el7.centos.noarch
>
>
> Here are the log files:
> https://gist.github.com/weeix/1743f88d3afe1f405889a67ed4011141
>
> --
> Wee
>



-- 
Didi


[ovirt-users] [hosted-engine] Metadata too new error when adding 2nd host

2016-04-19 Thread Wee Sritippho

Hi,

I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.

The 1st host and the hosted-engine were installed successfully, but the 
2nd host failed with this error message:


"Failed to execute stage 'Setup validation': Metadata version 2 from 
host 52 too new for this agent (highest compatible version: 1)"


Here are the package versions:

[root@host02 ~]# rpm -qa | grep ovirt
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.5.1-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.4.0-1.el7.centos.noarch
ovirt-release36-007-1.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch

[root@engine ~]# rpm -qa | grep ovirt
ovirt-engine-setup-base-3.6.4.1-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.4.1-1.el7.centos.noarch
ovirt-vmconsole-proxy-1.0.0-1.el7.centos.noarch
ovirt-engine-tools-3.6.4.1-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-release36-007-1.noarch
ovirt-engine-sdk-python-3.6.3.0-1.el7.centos.noarch
ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-3.6.4.1-1.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-host-deploy-java-1.4.1-1.el7.centos.noarch
ovirt-engine-cli-3.6.2.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.4.1-1.el7.centos.noarch
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-engine-backend-3.6.4.1-1.el7.centos.noarch
ovirt-engine-dbscripts-3.6.4.1-1.el7.centos.noarch
ovirt-engine-webadmin-portal-3.6.4.1-1.el7.centos.noarch
ovirt-engine-setup-3.6.4.1-1.el7.centos.noarch
ovirt-engine-3.6.4.1-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.4.1-1.el7.centos.noarch
ovirt-guest-agent-common-1.0.11-1.el7.noarch
ovirt-engine-wildfly-8.2.1-1.el7.x86_64
ovirt-engine-wildfly-overlay-8.0.5-1.el7.noarch
ovirt-engine-websocket-proxy-3.6.4.1-1.el7.centos.noarch
ovirt-engine-restapi-3.6.4.1-1.el7.centos.noarch
ovirt-engine-userportal-3.6.4.1-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.4.1-1.el7.centos.noarch
ovirt-image-uploader-3.6.0-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.0.6-1.el7.noarch
ovirt-engine-lib-3.6.4.1-1.el7.centos.noarch


Here are the log files: 
https://gist.github.com/weeix/1743f88d3afe1f405889a67ed4011141


--
Wee
