Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-24 Thread Kasturi Narra
These errors are because glusternw is not assigned to the correct
interface. Once you attach it, these errors should go away. This has
nothing to do with the problem you are seeing.

Sahina, any idea about the engine not showing the correct volume info ?

On Mon, Jul 24, 2017 at 7:30 PM, yayo (j)  wrote:

> Hi,
>
> UI refreshed but the problem still remains ...
>
> No specific error; I have only these errors, but I've read that there is no
> problem if I have this kind of error:
>
>
> 2017-07-24 15:53:59,823+02 INFO  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2)
> [b7590c4] START, GlusterServersListVDSCommand(HostName =
> node01.localdomain.local, VdsIdVDSCommandParametersBase:{runAsync='true',
> hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417
> 2017-07-24 15:54:01,066+02 INFO  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2)
> [b7590c4] FINISH, GlusterServersListVDSCommand, return: 
> [10.10.20.80/24:CONNECTED,
> node02.localdomain.local:CONNECTED, gdnode04:CONNECTED], log id: 29a62417
> 2017-07-24 15:54:01,076+02 INFO  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2)
> [b7590c4] START, GlusterVolumesListVDSCommand(HostName =
> node01.localdomain.local, GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3
> 2017-07-24 15:54:02,209+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
> 2017-07-24 15:54:02,212+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode02:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
> 2017-07-24 15:54:02,215+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode04:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
> 2017-07-24 15:54:02,218+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/data/brick' of volume
> 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
> 2017-07-24 15:54:02,221+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode02:/gluster/data/brick' of volume
> 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
> 2017-07-24 15:54:02,224+02 WARN  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode04:/gluster/data/brick' of volume
> 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster
> network found in cluster '0002-0002-0002-0002-017a'
> 2017-07-24 15:54:02,224+02 INFO  [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2)
> [b7590c4] FINISH, GlusterVolumesListVDSCommand, return: {d19c19e3-910d-437
> b-8ba7-4f2a23d17515=org.ovirt.engine.core.common.businessentities.gluste
> r.GlusterVolumeEntity@fdc91062, c7a5dfc9-3e72-4ea1-843e-c8275d
> 4a7c2d=org.ovirt.engine.core.common.businessentities.gluste
> r.GlusterVolumeEntity@999a6f23}, log id: 7fce25d3
>
>
> Thank you
>
>
> 2017-07-24 8:12 GMT+02:00 Kasturi Narra :
>
>> Hi,
>>
>>Regarding the UI showing incorrect information about engine and data
>> volumes, can you please refresh the UI and see if the issue persists  plus
>> any errors in the engine.log files ?
>>
>> Thanks
>> kasturi
>>
>> On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N 
>> wrote:
>>
>>>
>>> On 07/21/2017 11:41 PM, yayo (j) wrote:
>>>
>>> Hi,
>>>
>>> Sorry to follow up again, but, checking the oVirt interface, I've found
>>> that oVirt reports the "engine" volume as an "arbiter" configuration and the
>>> "data" volume as a fully replicated volume. Check these screenshots:
>>>
>>>
>>> This is probably some refresh bug in the UI, Sahina might be able to
>>> tell you.
>>>
>>>
>>> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>>>
>>> But the "gluster volume info" command 

[ovirt-users] Problems with oVirt3.5 engine + CentOS6 Host

2017-07-24 Thread Antonio Sebastian Salles M.
Hello friends,

For a few days I've been trying to register a CentOS 6 host on an
oVirt 3.5 engine, but I have not been able to finish the process
successfully.
I copied the SSH keys and started vdsmd without problems, then tried
to add the host via the Admin Portal in Firefox, but without success.
Could you help me? This is the error ... Thank you very much!

[root@ovirt ovirt-engine]# tail -f -n0 /var/log/ovirt-engine/engine.log
2017-07-19 17:21:25,078 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,079 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,079 INFO
[org.ovirt.engine.core.bll.InstallVdsCommand] (ajp--127.0.0.1-8702-3)
[7a173239] Running command: InstallVdsCommand internal: false.
Entities affected :  ID: 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 Type:
VDSAction group EDIT_HOST_CONFIGURATION with role type ADMIN
2017-07-19 17:21:25,088 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,088 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(ajp--127.0.0.1-8702-3) Unable to get value of property: vdsName for
class org.ovirt.engine.core.common.businessentities.VdsStatic
2017-07-19 17:21:25,090 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(ajp--127.0.0.1-8702-3) [7a173239] Lock Acquired to object EngineLock
[exclusiveLocks= key: 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 value: VDS
, sharedLocks= ]
2017-07-19 17:21:25,093 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-3) [7a173239] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: Failed to verify Power Management
configuration for Host kvm2.segic.cl.
2017-07-19 17:21:25,095 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Running command:
InstallVdsInternalCommand internal: true. Entities affected :  ID:
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5 Type: VDS
2017-07-19 17:21:25,095 INFO
[org.ovirt.engine.core.bll.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Before Installation
host 9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, kvm2.segic.cl
2017-07-19 17:21:25,105 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-3) [7a173239] Correlation ID: 7a173239, Call
Stack: null, Custom Event ID: -1, Message: Host kvm2.segic.cl
configuration was updated by admin@internal.
2017-07-19 17:21:25,106 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] START,
SetVdsStatusVDSCommand(HostName = kvm2.segic.cl, HostId =
9b82c66f-46c8-49dc-8ab3-2b5e27c7bdd5, status=Installing,
nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 8c3992
2017-07-19 17:21:25,109 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-32) [7a173239] FINISH,
SetVdsStatusVDSCommand, log id: 8c3992
2017-07-19 17:21:25,133 INFO
[org.ovirt.engine.core.bll.InstallerMessages]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Installation
158.170.39.12: Connected to host 158.170.39.12 with SSH key
fingerprint: 16:2b:79:78:60:ea:d2:24:0a:8d:7c:2f:2e:8e:20:51
2017-07-19 17:21:25,138 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Correlation ID:
7a173239, Call Stack: null, Custom Event ID: -1, Message: Installing
Host kvm2.segic.cl. Connected to host 158.170.39.12 with SSH key
fingerprint: 16:2b:79:78:60:ea:d2:24:0a:8d:7c:2f:2e:8e:20:51.
2017-07-19 17:21:25,223 INFO  [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-8-thread-32) [7a173239] Installation of
158.170.39.12. Executing command via SSH umask 0077;
MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)";
trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr
\"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C
"${MYTMP}" -x && "${MYTMP}"/setup DIALOG/dialect=str:machine
DIALOG/customization=bool:True <
/var/cache/ovirt-engine/ovirt-host-deploy.tar
2017-07-19 17:21:25,223 INFO
[org.ovirt.engine.core.utils.archivers.tar.CachedTar]
(org.ovirt.thread.pool-8-thread-32) Tarball
'/var/cache/ovirt-engine/ovirt-host-deploy.tar' refresh
2017-07-19 17:21:25,254 INFO
[org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-8-thread-32) SSH execute root@158.170.39.12
'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t
ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null
2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar

Re: [ovirt-users] Problems with ovirtmgmt network used to connect VMs

2017-07-24 Thread FERNANDO FREDIANI
Hello Edward, this happened again today and I was able to check more 
details.


So:

- The VM stopped passing any network traffic.
- Checking 'brctl showmacs ovirtmgmt' showed the VM's MAC address was 
missing.
- I then went to oVirt Engine, under the VM's 'Network Interfaces' tab, 
clicked Edit, changed the Link State to Down then to Up, and the VM 
recovered its connectivity.
- Another 'brctl showmacs ovirtmgmt' showed the VM's MAC address learned 
again by the bridge.


This node has the particularity of sharing ovirtmgmt with VMs. Could 
that possibly be the cause of the issue in any way ?
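The manual check described above can be scripted so the bridge forwarding table is polled instead of inspected by hand. A minimal sketch (the helper name is made up, and the MAC address in the example is hypothetical):

```shell
# mac_in_fdb MAC: read `brctl showmacs <bridge>` output on stdin and
# exit 0 only if MAC appears in the forwarding table.
mac_in_fdb() {
  grep -qi "$1"
}

# Example (hypothetical VM MAC; ovirtmgmt as in this thread):
# brctl showmacs ovirtmgmt | mac_in_fdb 00:1a:4a:16:01:51 \
#   || echo "VM MAC missing from ovirtmgmt FDB"
```

Run periodically, this would timestamp exactly when the bridge forgets the MAC, which helps correlate the outage with the ageing timer.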


Thanks
Fernando


On 24/07/2017 09:47, FERNANDO FREDIANI wrote:


Not tried this yet Edwardh, but will do the next time it happens. The 
source MAC address should be the same as the VM's. I don't see any reason 
for it to change from within the VM or outside.


What type of things would make the bridge stop learning a given VM MAC 
address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
> wrote:


Has anyone had problems when using the ovirtmgmt bridge to connect
VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further I
see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I ran 'brctl showmacs ovirtmgmt' and it
showed me the VM's MAC address with ageing timer 200.19. After the
VM reboot I saw the same MAC with ageing timer 0.00.
I don't see this in another environment where ovirtmgmt is not
used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users









Re: [ovirt-users] oVirt and Foreman

2017-07-24 Thread Oved Ourfali
CC-ing Ohad and Ivan from the Foreman team to take a look.

Also, by default, RHV 4.1 will use v4 of the api, so you have to use a URL
in Foreman that uses v3 (as Foreman doesn't support v4 yet).

I assume that's not your issue, otherwise you would have encountered more
basic issues.

Also, can you please share your logs from both environments?

Ohad/Ivan, any clue?

Thanks,
Oved

On Jul 24, 2017 18:08, "Davide Ferrari"  wrote:

Hello list


is anybody successfully using oVirt + Foreman for VM creation +
provisioning?

I'm using Foreman (latest version, 1.15.2) with the latest oVirt version
(4.1.3), but I'm encountering several problems, especially related to disks.
For example:

- cannot create a VM with multiple disks through the Foreman CLI (hammer)

- if I create a multi-disk VM from Foreman, the second disk always gets the
"bootable" flag instead of the primary image, making the VMs not bootable at
all.


Any other Foreman user sharing the pain here? Foreman's list is not so
useful, so I'm trying to ask here. How do you programmatically create
virtual machines with oVirt and Foreman? Should I switch to using the
oVirt API directly?

Thanks in advance

Davide



[ovirt-users] oVirt and Foreman

2017-07-24 Thread Davide Ferrari

Hello list


is anybody successfully using oVirt + Foreman for VM creation + 
provisioning?


I'm using Foreman (latest version, 1.15.2) with the latest oVirt version 
(4.1.3), but I'm encountering several problems, especially related to 
disks. For example:


- cannot create a VM with multiple disks through the Foreman CLI (hammer)

- if I create a multi-disk VM from Foreman, the second disk always gets 
the "bootable" flag instead of the primary image, making the VMs not 
bootable at all.



Any other Foreman user sharing the pain here? Foreman's list is not so 
useful, so I'm trying to ask here. How do you programmatically create 
virtual machines with oVirt and Foreman? Should I switch to using the 
oVirt API directly?


Thanks in advance

Davide



[ovirt-users] [Italy] oVirt Live Installation and Maintenance event

2017-07-24 Thread Sandro Bonazzola
Hi,
just a highlight on an event happening in Italy on August 2nd:
Sogno (oVirtuale) di una notte di mezza estate
("A Midsummer Night's (oVirtual) Dream")

It will be in Bergamo and will last 2.5 hours.
During the event you'll be able to watch a live install of a
hyperconverged deployment, plus configuration and maintenance of clusters.
If you're going to attend, be sure to book a seat on Eventbrite!

-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-24 Thread yayo (j)
>
> All these IPs are pingable and the hosts are resolvable across all 3 nodes, but
>> only the 10.10.10.0 network is the dedicated network for gluster (resolved
>> using gdnode* host names) ... Do you think that removing the other entries can
>> fix the problem? If so, sorry, but how can I remove the other entries?
>>
> I don't think having extra entries could be a problem. Did you check the
> fuse mount logs for disconnect messages that I referred to in the other
> email?
>



* tail -f
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-dvirtgluster\:engine.log*

*NODE01:*


[2017-07-24 07:34:00.799347] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 07:44:46.687334] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers
[2017-07-24 09:04:25.951350] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 09:15:11.839357] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers
[2017-07-24 10:34:51.231353] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 10:45:36.991321] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers
[2017-07-24 12:05:16.383323] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 12:16:02.271320] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers
[2017-07-24 13:35:41.535308] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 13:46:27.423304] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers



Why gdnode03 again? It was removed from gluster! It was the arbiter node...
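As a quick sanity check, the "failed to connect" errors in the mount log can be tallied per remote host to confirm it really is only gdnode03. A sketch (the helper name is made up; it reads the same log file as the tail command above):

```shell
# count_disconnect_hosts: read a glusterfs fuse mount log on stdin and
# count "failed to connect with remote-host" errors per host.
count_disconnect_hosts() {
  grep -oE 'remote-host: [A-Za-z0-9._-]+' | sort | uniq -c
}

# Example:
# count_disconnect_hosts < /var/log/glusterfs/rhev-data-center-mnt-glusterSD-dvirtgluster:engine.log
```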


*NODE02:*


[2017-07-24 14:08:18.709209] I [MSGID: 108026] [
afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on db56ac00-fd5b-4326-a879-326ff56181de. sources=0 [
1]  sinks=2
[2017-07-24 14:08:38.746688] I [MSGID: 108026] [
afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
0-engine-replicate-0: performing metadata selfheal on
f05b9742-2771-484a-85fc-5b6974bcef81
[2017-07-24 14:08:38.749379] I [MSGID: 108026] [
afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
sources=0 [1]  sinks=2
[2017-07-24 14:08:46.068001] I [MSGID: 108026] [
afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on db56ac00-fd5b-4326-a879-326ff56181de. sources=0 [
1]  sinks=2
The message "I [MSGID: 108026] [
afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
0-engine-replicate-0: performing metadata selfheal on
f05b9742-2771-484a-85fc-5b6974bcef81" repeated 3 times between [2017-07-24
14:08:38.746688] and [2017-07-24 14:10:09.088625]
The message "I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal]
0-engine-replicate-0: Completed metadata selfheal on
f05b9742-2771-484a-85fc-5b6974bcef81. sources=0 [1]  sinks=2 " repeated 3
times between [2017-07-24 14:08:38.749379] and [2017-07-24 14:10:09.091377]
[2017-07-24 14:10:19.384379] I [MSGID: 108026]
[afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on db56ac00-fd5b-4326-a879-326ff56181de. sources=0
[1]  sinks=2
[2017-07-24 14:10:39.433155] I [MSGID: 108026] [afr-self-heal-metadata.c:51:
__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata
selfheal on f05b9742-2771-484a-85fc-5b6974bcef81
[2017-07-24 14:10:39.435847] I [MSGID: 108026]
[afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
sources=0 [1]  sinks=2



*NODE04:*


[2017-07-24 14:08:56.789598] I [MSGID: 108026] [afr-self-heal-common.c:1254
:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on
e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1  sinks=2
[2017-07-24 14:09:17.231987] I [MSGID: 108026] [afr-self-heal-common.c:1254
:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on db56ac00
-fd5b-4326-a879-326ff56181de. sources=[0] 1  sinks=2
[2017-07-24 14:09:38.039541] I [MSGID: 108026] [afr-self-heal-common.c:1254
:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on
e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1  sinks=2
[2017-07-24 14:09:48.875602] I [MSGID: 108026] [afr-self-heal-common.c:1254
:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on db56ac00
-fd5b-4326-a879-326ff56181de. sources=[0] 1  sinks=2
[2017-07-24 14:10:39.832068] I [MSGID: 108026] 
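The self-heal messages above can be condensed to see which files keep getting healed and how often. A minimal sketch (the helper name is made up):

```shell
# selfheal_summary: read a gluster self-heal log on stdin and count
# completed data/metadata self-heals per file gfid.
selfheal_summary() {
  grep -oE 'Completed (data|metadata) selfheal on [0-9a-f-]+' \
    | awk '{print $2, $NF}' | sort | uniq -c
}

# Example:
# selfheal_summary < /var/log/glusterfs/rhev-data-center-mnt-glusterSD-dvirtgluster:engine.log
```

A gfid that shows up repeatedly is a good candidate for the "unsynced entries" the UI keeps complaining about.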

Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-24 Thread yayo (j)
Hi,

UI refreshed but the problem still remains ...

No specific error; I have only these errors, but I've read that there is no
problem if I have this kind of error:


2017-07-24 15:53:59,823+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] START,
GlusterServersListVDSCommand(HostName
= node01.localdomain.local, VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417
2017-07-24 15:54:01,066+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] FINISH, GlusterServersListVDSCommand,
return: [10.10.20.80/24:CONNECTED, node02.localdomain.local:CONNECTED,
gdnode04:CONNECTED], log id: 29a62417
2017-07-24 15:54:01,076+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] START,
GlusterVolumesListVDSCommand(HostName
= node01.localdomain.local, GlusterVolumesListVDSParameters:{runAsync='true',
hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3
2017-07-24 15:54:02,209+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-
4f2a23d17515' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,212+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode02:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-
4f2a23d17515' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,215+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode04:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-
4f2a23d17515' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,218+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-
c8275d4a7c2d' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,221+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode02:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-
c8275d4a7c2d' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,224+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode04:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-
c8275d4a7c2d' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,224+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] FINISH, GlusterVolumesListVDSCommand,
return: {d19c19e3-910d-437b-8ba7-4f2a23d17515=org.ovirt.engine.core.
common.businessentities.gluster.GlusterVolumeEntity@fdc91062, c7a5dfc9-3e72
-4ea1-843e-c8275d4a7c2d=org.ovirt.engine.core.common.businessentities.
gluster.GlusterVolumeEntity@999a6f23}, log id: 7fce25d3
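For reference, the affected volume/brick pairs can be pulled out of engine.log with a short pipeline (the helper name is made up; the path in the example is the default engine log location):

```shell
# summarize_bricks: read engine.log text on stdin and print unique
# "volume-id brick" pairs from "Could not associate brick" warnings.
summarize_bricks() {
  grep 'Could not associate brick' \
    | sed -E "s/.*brick '([^']+)' of volume '([^']+)'.*/\2 \1/" \
    | sort -u
}

# Example:
# summarize_bricks < /var/log/ovirt-engine/engine.log
```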


Thank you


2017-07-24 8:12 GMT+02:00 Kasturi Narra :

> Hi,
>
>Regarding the UI showing incorrect information about engine and data
> volumes, can you please refresh the UI and see if the issue persists  plus
> any errors in the engine.log files ?
>
> Thanks
> kasturi
>
> On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N 
> wrote:
>
>>
>> On 07/21/2017 11:41 PM, yayo (j) wrote:
>>
>> Hi,
>>
>> Sorry to follow up again, but, checking the oVirt interface, I've found
>> that oVirt reports the "engine" volume as an "arbiter" configuration and the
>> "data" volume as a fully replicated volume. Check these screenshots:
>>
>>
>> This is probably some refresh bug in the UI, Sahina might be able to tell
>> you.
>>
>>
>> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>>
>> But the "gluster volume info" command reports that both volumes are fully
>> replicated:
>>
>>
>> *Volume Name: data*
>> *Type: Replicate*
>> *Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d*
>> *Status: Started*
>> *Snapshot Count: 0*
>> *Number of Bricks: 1 x 3 = 3*
>> *Transport-type: tcp*
>> *Bricks:*
>> *Brick1: gdnode01:/gluster/data/brick*
>> *Brick2: gdnode02:/gluster/data/brick*
>> *Brick3: gdnode04:/gluster/data/brick*
>> *Options Reconfigured:*
>> *nfs.disable: on*
>> *performance.readdir-ahead: on*
>> 
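As a CLI cross-check independent of the UI: if I recall correctly, recent gluster releases print an arbiter volume's brick count as "Number of Bricks: 1 x (2 + 1) = 3", while a plain replica 3 volume prints "1 x 3 = 3", as in the output above. Assuming that format, the layout is easy to test in a script (the helper name is made up):

```shell
# volume_layout: read `gluster volume info <vol>` output on stdin and
# report whether the volume uses an arbiter brick or is a plain replica.
volume_layout() {
  if grep -qE 'Number of Bricks: *1 x \(2 \+ 1\) = 3'; then
    echo arbiter
  else
    echo replica
  fi
}

# Example:
# gluster volume info data | volume_layout
```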

Re: [ovirt-users] Problems with ovirtmgmt network used to connect VMs

2017-07-24 Thread FERNANDO FREDIANI
Not tried this yet Edwardh, but will do the next time it happens. The 
source MAC address should be the same as the VM's. I don't see any reason 
for it to change from within the VM or outside.


What type of things would make the bridge stop learning a given VM MAC 
address ?


Fernando


On 23/07/2017 07:51, Edward Haas wrote:
Have you tried to use tcpdump at the VM vNIC to examine if there is 
traffic trying to get out from there? And with what source mac address?


Thanks,
Edy,

On Fri, Jul 21, 2017 at 5:36 PM, FERNANDO FREDIANI 
> wrote:


Has anyone had problems when using the ovirtmgmt bridge to connect
VMs ?

I am still facing a bizarre problem where some VMs connected to
this bridge stop passing traffic. Checking the problem further I
see its mac address stops being learned by the bridge and the
problem is resolved only with a VM reboot.

When I last saw the problem I ran 'brctl showmacs ovirtmgmt' and it
showed me the VM's MAC address with ageing timer 200.19. After the
VM reboot I saw the same MAC with ageing timer 0.00.
I don't see this in another environment where ovirtmgmt is not
used for VMs.

Does anyone have any clue about this type of behavior ?

Fernando







Re: [ovirt-users] Ovirt 4.1 additional hosted-engine deploy setup on another host not working

2017-07-24 Thread Yaniv Kaul
On Mon, Jul 24, 2017 at 1:49 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Thanks Kasturi,
>
> Would you please update note or prerequisite in below link ?
> http://www.ovirt.org/documentation/self-hosted/chap-Installing_Additional_
> Hosts_to_a_Self-Hosted_Environment/
>

We'd be happy if you could send a patch to fix it.
Y.


>
>
>
> On Mon, Jul 24, 2017 at 4:09 PM, Kasturi Narra  wrote:
>
>> Hi,
>>
>> This option appears in the host tab only when HostedEngine vm and
>> hosted_storage is present in the UI. Before adding another host make sure
>> that you add your first data domain to the UI which will automatically
>> import HostedEngine vm and hosted_storage. Once these two are imported you
>> will be able to see 'hosted-engine' sub tab in the 'Add host' / edit host
>> dialog box.
>>
>> Thanks
>> kasturi
>>
>> On Mon, Jul 24, 2017 at 4:05 PM, TranceWorldLogic . <
>> tranceworldlo...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I want to add another host to hosted-engine.
>>> Hence I tried to follow steps as shown in below link:
>>>
>>> http://www.ovirt.org/documentation/self-hosted/chap-Installi
>>> ng_Additional_Hosts_to_a_Self-Hosted_Environment/
>>> Topic :
>>>
>>> *Adding an Additional Self-Hosted Engine Host*
>>> But I did not find any additional sub-tab called hosted-engine.
>>> Even after adding the host I tried to edit it, but still do not see the tab.
>>>
>>> Do I need to run the 'hosted-engine --deploy' command to add another host?
>>> Or is it handled by the GUI automatically?
>>>
>>> Thanks,
>>> ~Rohit
>>>
>>>
>>>
>>
>
>
>


Re: [ovirt-users] NullPointerException when changing compatibility version to 4.0

2017-07-24 Thread Michal Skrivanek

> On 24 Jul 2017, at 13:39, Marcel Hanke  wrote:
> 
> Hi,
> due to the NullPointerException I can't figure out which of the over 1000 VMs
> is failing; do you have an idea how to find the broken one?
> The one before the exception is fine, and the next one in the DB also.

It’s not easy to find out. That’s why we improved the logging in [1]. Can you 
perhaps upgrade to 4.1 first? It does support the same cluster versions and 
all...

Thanks,
michal

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1418641
> 
> On Monday, July 24, 2017 01:12:01 PM Michal Skrivanek wrote:
>>> On 24 Jul 2017, at 09:37, Marcel Hanke  wrote:
>>> 
>>> Hi,
>>> no there are no pending changes. The current cluster is 3.6 right.
>>> The strange thing is, that the change to 4.0 worked on 6 other clusters
>>> before without a problem.
>> 
>> it’s often a single VM or a few VMs which are typically quite old, not really
>> used, or have an obscure config which breaks some assumptions and rolls back
>> the whole upgrade. Can you check whether the VM it failed on is important or
>> not? It also often helps to just edit that VM and save it again
>> 
>> Thanks,
>> michal
>> 
>>> On Monday, July 24, 2017 08:53:40 AM Michal Skrivanek wrote:
> On 24 Jul 2017, at 08:23, Marcel Hanke  wrote:
> 
> Hi,
> i'm currently running on 4.0.6.3-1.el7.centos
 
 it’s quite an old thing it’s crashing on. I wonder what is the original
 version where those VMs were created? The current cluster is 3.6, right?
 Does any of those VMs have pending changes to be applied?
 
 Thanks,
 michal
 
> On Saturday, July 22, 2017 12:21:17 PM Michal Skrivanek wrote:
>>> On 20 Jul 2017, at 15:56, Marcel Hanke  wrote:
>>> 
>>> Hi,
>>> the Log is >400MB heres a part with the Errors.
>> 
>> ok
>> and which exact version do you have?
>> 
>>> thanks Marcel
>>> 
>>> On Thursday, July 20, 2017 02:43:57 PM Eli Mesika wrote:
 Hi
 
 Please attach full engine.log
 
 On Wed, Jul 19, 2017 at 12:33 PM, Marcel Hanke
 
 
 wrote:
> Hi,
> i currently have a problem with changing one of our clusters to
> compatibility
> version 4.0.
> The Log shows a NullPointerException after several successful vms:
> 2017-07-19 11:19:45,886 ERROR
> [org.ovirt.engine.core.bll.UpdateVmCommand]
> (default task-31) [1acd2990] Error during ValidateFailure.:
> java.lang.NullPointerException
> 
> at
> 
> org.ovirt.engine.core.bll.UpdateVmCommand.validate(UpdateVmCommand.java:632)
> [bll.jar:]
> 
> at
> 
> org.ovirt.engine.core.bll.CommandBase.internalValidate(CommandBase.java:886)
> [bll.jar:]
> 
> at
> 
> org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:391)
> [bll.jar:]
> 
> at
> org.ovirt.engine.core.bll.Backend.runAction(Backend.java:493)
> 
> [bll.jar:]
> .
> 
> On other clusters with the exact same configuration the change to 4.0
> was successful without a problem.
> Turning off the cluster for the change is also not possible because of
> >1200 VMs running on it.
> 
> Does anyone have an idea what to do, or that to look for?
> 
> Thanks
> Marcel
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.1 additional hosted-engine deploy setup on another host not working

2017-07-24 Thread TranceWorldLogic .
Thanks Kasturi,

Would you please update the note or prerequisites in the link below?
http://www.ovirt.org/documentation/self-hosted/chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment/


On Mon, Jul 24, 2017 at 4:09 PM, Kasturi Narra  wrote:

> Hi,
>
> This option appears in the host tab only when the HostedEngine VM and
> the hosted_storage domain are present in the UI. Before adding another
> host, make sure that you add your first data domain to the UI, which will
> automatically import the HostedEngine VM and hosted_storage. Once these
> two are imported you will be able to see the 'hosted-engine' sub-tab in
> the 'Add host' / 'Edit host' dialog box.
>
> Thanks
> kasturi
>
> On Mon, Jul 24, 2017 at 4:05 PM, TranceWorldLogic . <
> tranceworldlo...@gmail.com> wrote:
>
>> Hi,
>>
>> I want to add another host to hosted-engine.
>> Hence I tried to follow steps as shown in below link:
>>
>> http://www.ovirt.org/documentation/self-hosted/chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment/
>> Topic :
>>
>> *Adding an Additional Self-Hosted Engine Host*
>> But I did not find any additional sub-tab called 'hosted-engine'.
>> Even after adding the host I tried to edit it, but I still do not see the tab.
>>
>> Do I need to run hosted-engine --deploy to add another host?
>> Or is it handled by the GUI automatically?
>>
>> Thanks,
>> ~Rohit
>>


Re: [ovirt-users] NullPointerException when changing compatibility version to 4.0

2017-07-24 Thread Marcel Hanke
Hi,
due to the NullPointerException I can't figure out which of the over 1000 VMs
is failing. Do you have an idea how to get the broken one?
The one before the exception is fine, and so is the next one in the DB.
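Since the engine log doesn't name the VM, one brute-force way to isolate it is to re-save every VM and see which one trips validation. A minimal sketch, not oVirt-specific: `update_vm` here is a hypothetical callable you would back with whatever per-VM edit-and-save mechanism you use (e.g. a REST API update of the VM with unchanged fields):

```python
def find_failing_vms(vm_ids, update_vm):
    """Probe each VM with a no-op edit-and-save and collect the ids
    for which the update raises (i.e. fails validation)."""
    failing = []
    for vm_id in vm_ids:
        try:
            update_vm(vm_id)  # placeholder: re-save the VM unchanged
        except Exception:
            failing.append(vm_id)
    return failing
```

With ~1200 VMs this is still only a few minutes of API calls, and it narrows the failure down to the concrete VM(s) instead of guessing from log order.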

On Monday, July 24, 2017 01:12:01 PM Michal Skrivanek wrote:
> > On 24 Jul 2017, at 09:37, Marcel Hanke  wrote:
> > 
> > Hi,
> > no, there are no pending changes. The current cluster is 3.6, right.
> > The strange thing is that the change to 4.0 worked on 6 other clusters
> > before without a problem.
> 
> it’s often a single VM or a few VMs which are typically quite old, not
> really used, or have obscure config which breaks some assumptions and rolls
> back the whole upgrade. Can you check whether the VM it failed on is
> important or not? It also often helps to just edit that VM and save it again
> 
> Thanks,
> michal
> 
> > On Monday, July 24, 2017 08:53:40 AM Michal Skrivanek wrote:
> >>> On 24 Jul 2017, at 08:23, Marcel Hanke  wrote:
> >>> 
> >>> Hi,
> >>> i'm currently running on 4.0.6.3-1.el7.centos
> >> 
> >> it’s quite an old thing it’s crashing on. I wonder what is the original
> >> version where those VMs were created? The current cluster is 3.6, right?
> >> Does any of those VMs have pending changes to be applied?
> >> 
> >> Thanks,
> >> michal
> >> 
> >>> On Saturday, July 22, 2017 12:21:17 PM Michal Skrivanek wrote:
> > On 20 Jul 2017, at 15:56, Marcel Hanke  wrote:
> > 
> > Hi,
> > the log is >400 MB; here's a part with the errors.
>  
>  ok
>  and which exact version do you have?
>  
> > thanks Marcel
> > 
> > On Thursday, July 20, 2017 02:43:57 PM Eli Mesika wrote:
> >> Hi
> >> 
> >> Please attach full engine.log
> >> 
> >> On Wed, Jul 19, 2017 at 12:33 PM, Marcel Hanke
> >> 
> >> 
> >> wrote:
> >>> Hi,
> >>> i currently have a problem with changing one of our clusters to
> >>> compatibility
> >>> version 4.0.
> >>> The Log shows a NullPointerException after several successful vms:
> >>> 2017-07-19 11:19:45,886 ERROR
> >>> [org.ovirt.engine.core.bll.UpdateVmCommand]
> >>> (default task-31) [1acd2990] Error during ValidateFailure.:
> >>> java.lang.NullPointerException
> >>> 
> >>>  at
> >>> 
> >>> org.ovirt.engine.core.bll.UpdateVmCommand.validate(
> >>> UpdateVmCommand.java:632)
> >>> [bll.jar:]
> >>> 
> >>>  at
> >>> 
> >>> org.ovirt.engine.core.bll.CommandBase.internalValidate(
> >>> CommandBase.java:886)
> >>> [bll.jar:]
> >>> 
> >>>  at
> >>> 
> >>> org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:391)
> >>> [bll.jar:]
> >>> 
> >>>  at
> >>>  org.ovirt.engine.core.bll.Backend.runAction(Backend.java:493)
> >>> 
> >>> [bll.jar:]
> >>> .
> >>> 
> >>> On other clusters with the exact same configuration the change to 4.0
> >>> was successful without a problem.
> >>> Turning off the cluster for the change is also not possible because
> >>> of the >1200 VMs running on it.
> >>> 
> >>> Does anyone have an idea what to do, or what to look for?
> >>> 
> >>> Thanks
> >>> Marcel


Re: [ovirt-users] Ovirt 4.1 additional hosted-engine deploy setup on another host not working

2017-07-24 Thread Kasturi Narra
Hi,

This option appears in the host tab only when the HostedEngine VM and
the hosted_storage domain are present in the UI. Before adding another
host, make sure that you add your first data domain to the UI, which will
automatically import the HostedEngine VM and hosted_storage. Once these
two are imported you will be able to see the 'hosted-engine' sub-tab in
the 'Add host' / 'Edit host' dialog box.

Thanks
kasturi

On Mon, Jul 24, 2017 at 4:05 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi,
>
> I want to add another host to hosted-engine.
> Hence I tried to follow steps as shown in below link:
>
> http://www.ovirt.org/documentation/self-hosted/chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment/
> Topic :
>
> *Adding an Additional Self-Hosted Engine Host*
> But I did not find any additional sub-tab called 'hosted-engine'.
> Even after adding the host I tried to edit it, but I still do not see the tab.
>
> Do I need to run hosted-engine --deploy to add another host?
> Or is it handled by the GUI automatically?
>
> Thanks,
> ~Rohit
>


[ovirt-users] Ovirt 4.1 additional hosted-engine deploy setup on another host not working

2017-07-24 Thread TranceWorldLogic .
Hi,

I want to add another host to hosted-engine.
Hence I tried to follow steps as shown in below link:

http://www.ovirt.org/documentation/self-hosted/chap-Installing_Additional_Hosts_to_a_Self-Hosted_Environment/
Topic :

*Adding an Additional Self-Hosted Engine Host*
But I did not find any additional sub-tab called 'hosted-engine'.
Even after adding the host I tried to edit it, but I still do not see the tab.

Do I need to run hosted-engine --deploy to add another host?
Or is it handled by the GUI automatically?

Thanks,
~Rohit


Re: [ovirt-users] NullPointerException when changing compatibility version to 4.0

2017-07-24 Thread Michal Skrivanek

> On 24 Jul 2017, at 09:37, Marcel Hanke  wrote:
> 
> Hi,
> no, there are no pending changes. The current cluster is 3.6, right.
> The strange thing is that the change to 4.0 worked on 6 other clusters
> before without a problem.

it’s often a single VM or a few VMs which are typically quite old, not really
used, or have obscure config which breaks some assumptions and rolls back the
whole upgrade. Can you check whether the VM it failed on is important or not?
It also often helps to just edit that VM and save it again.

Thanks,
michal

> 
> On Monday, July 24, 2017 08:53:40 AM Michal Skrivanek wrote:
>>> On 24 Jul 2017, at 08:23, Marcel Hanke  wrote:
>>> 
>>> Hi,
>>> i'm currently running on 4.0.6.3-1.el7.centos
>> 
>> it’s quite an old thing it’s crashing on. I wonder what is the original
>> version where those VMs were created? The current cluster is 3.6, right?
>> Does any of those VMs have pending changes to be applied?
>> 
>> Thanks,
>> michal
>> 
>>> On Saturday, July 22, 2017 12:21:17 PM Michal Skrivanek wrote:
> On 20 Jul 2017, at 15:56, Marcel Hanke  wrote:
> 
> Hi,
> the log is >400 MB; here's a part with the errors.
 
 ok
 and which exact version do you have?
 
> thanks Marcel
> 
> On Thursday, July 20, 2017 02:43:57 PM Eli Mesika wrote:
>> Hi
>> 
>> Please attach full engine.log
>> 
>> On Wed, Jul 19, 2017 at 12:33 PM, Marcel Hanke 
>> 
>> wrote:
>>> Hi,
>>> i currently have a problem with changing one of our clusters to
>>> compatibility
>>> version 4.0.
>>> The Log shows a NullPointerException after several successful vms:
>>> 2017-07-19 11:19:45,886 ERROR
>>> [org.ovirt.engine.core.bll.UpdateVmCommand]
>>> (default task-31) [1acd2990] Error during ValidateFailure.:
>>> java.lang.NullPointerException
>>> 
>>>  at
>>> 
>>> org.ovirt.engine.core.bll.UpdateVmCommand.validate(
>>> UpdateVmCommand.java:632)
>>> [bll.jar:]
>>> 
>>>  at
>>> 
>>> org.ovirt.engine.core.bll.CommandBase.internalValidate(
>>> CommandBase.java:886)
>>> [bll.jar:]
>>> 
>>>  at
>>> 
> >>> org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:391)
>>> [bll.jar:]
>>> 
>>>  at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:493)
>>> 
>>> [bll.jar:]
>>> .
>>> 
> >>> On other clusters with the exact same configuration the change to 4.0
> >>> was successful without a problem.
> >>> Turning off the cluster for the change is also not possible because of
> >>> the >1200 VMs running on it.
>>> 
> >>> Does anyone have an idea what to do, or what to look for?
>>> 
>>> Thanks
>>> Marcel


Re: [ovirt-users] NullPointerException when changing compatibility version to 4.0

2017-07-24 Thread Staniforth, Paul
Hello,
   I had trouble changing the compatibility level when VMs had HA
enabled. I can't remember what the error message was, but it was quite obscure.
I had to disable HA for those VMs while I upgraded the compatibility level.

Regards,
  Paul S.

From: users-boun...@ovirt.org  on behalf of Marcel 
Hanke 
Sent: 24 July 2017 07:37
To: Michal Skrivanek
Cc: users
Subject: Re: [ovirt-users] NullPointerException when changing compatibility 
version to 4.0

Hi,
no, there are no pending changes. The current cluster is 3.6, right.
The strange thing is that the change to 4.0 worked on 6 other clusters before
without a problem.

On Monday, July 24, 2017 08:53:40 AM Michal Skrivanek wrote:
> > On 24 Jul 2017, at 08:23, Marcel Hanke  wrote:
> >
> > Hi,
> > i'm currently running on 4.0.6.3-1.el7.centos
>
> it’s quite an old thing it’s crashing on. I wonder what is the original
> version where those VMs were created? The current cluster is 3.6, right?
> Does any of those VMs have pending changes to be applied?
>
> Thanks,
> michal
>
> > On Saturday, July 22, 2017 12:21:17 PM Michal Skrivanek wrote:
> >>> On 20 Jul 2017, at 15:56, Marcel Hanke  wrote:
> >>>
> >>> Hi,
> >>> the log is >400 MB; here's a part with the errors.
> >>
> >> ok
> >> and which exact version do you have?
> >>
> >>> thanks Marcel
> >>>
> >>> On Thursday, July 20, 2017 02:43:57 PM Eli Mesika wrote:
>  Hi
> 
>  Please attach full engine.log
> 
>  On Wed, Jul 19, 2017 at 12:33 PM, Marcel Hanke 
> 
>  wrote:
> > Hi,
> > i currently have a problem with changing one of our clusters to
> > compatibility
> > version 4.0.
> > The Log shows a NullPointerException after several successful vms:
> > 2017-07-19 11:19:45,886 ERROR
> > [org.ovirt.engine.core.bll.UpdateVmCommand]
> > (default task-31) [1acd2990] Error during ValidateFailure.:
> > java.lang.NullPointerException
> >
> >   at
> >
> > org.ovirt.engine.core.bll.UpdateVmCommand.validate(
> > UpdateVmCommand.java:632)
> > [bll.jar:]
> >
> >   at
> >
> > org.ovirt.engine.core.bll.CommandBase.internalValidate(
> > CommandBase.java:886)
> > [bll.jar:]
> >
> >   at
> >
> > org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:391)
> > [bll.jar:]
> >
> >   at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:493)
> >
> > [bll.jar:]
> > .
> >
> > On other clusters with the exact same configuration the change to 4.0
> > was successful without a problem.
> > Turning off the cluster for the change is also not possible because of
> > the >1200 VMs running on it.
> >
> > Does anyone have an idea what to do, or what to look for?
> >
> > Thanks
> > Marcel


Re: [ovirt-users] NullPointerException when changing compatibility version to 4.0

2017-07-24 Thread Marcel Hanke
Hi,
no, there are no pending changes. The current cluster is 3.6, right.
The strange thing is that the change to 4.0 worked on 6 other clusters before
without a problem.

On Monday, July 24, 2017 08:53:40 AM Michal Skrivanek wrote:
> > On 24 Jul 2017, at 08:23, Marcel Hanke  wrote:
> > 
> > Hi,
> > i'm currently running on 4.0.6.3-1.el7.centos
> 
> it’s quite an old thing it’s crashing on. I wonder what is the original
> version where those VMs were created? The current cluster is 3.6, right?
> Does any of those VMs have pending changes to be applied?
> 
> Thanks,
> michal
> 
> > On Saturday, July 22, 2017 12:21:17 PM Michal Skrivanek wrote:
> >>> On 20 Jul 2017, at 15:56, Marcel Hanke  wrote:
> >>> 
> >>> Hi,
> >>> the log is >400 MB; here's a part with the errors.
> >> 
> >> ok
> >> and which exact version do you have?
> >> 
> >>> thanks Marcel
> >>> 
> >>> On Thursday, July 20, 2017 02:43:57 PM Eli Mesika wrote:
>  Hi
>  
>  Please attach full engine.log
>  
>  On Wed, Jul 19, 2017 at 12:33 PM, Marcel Hanke 
>  
>  wrote:
> > Hi,
> > i currently have a problem with changing one of our clusters to
> > compatibility
> > version 4.0.
> > The Log shows a NullPointerException after several successful vms:
> > 2017-07-19 11:19:45,886 ERROR
> > [org.ovirt.engine.core.bll.UpdateVmCommand]
> > (default task-31) [1acd2990] Error during ValidateFailure.:
> > java.lang.NullPointerException
> > 
> >   at
> > 
> > org.ovirt.engine.core.bll.UpdateVmCommand.validate(
> > UpdateVmCommand.java:632)
> > [bll.jar:]
> > 
> >   at
> > 
> > org.ovirt.engine.core.bll.CommandBase.internalValidate(
> > CommandBase.java:886)
> > [bll.jar:]
> > 
> >   at
> > 
> > org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:391)
> > [bll.jar:]
> > 
> >   at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:493)
> > 
> > [bll.jar:]
> > .
> > 
> > On other clusters with the exact same configuration the change to 4.0
> > was successful without a problem.
> > Turning off the cluster for the change is also not possible because of
> > the >1200 VMs running on it.
> > 
> > Does anyone have an idea what to do, or what to look for?
> > 
> > Thanks
> > Marcel


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-24 Thread Kasturi Narra
Hi,

   Regarding the UI showing incorrect information about the engine and data
volumes, can you please refresh the UI and see if the issue persists, and
check for any errors in the engine.log file?

Thanks
kasturi
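On the varying heal counts: one quick sanity check is to count how often each gfid shows up as a completed self-heal in the self-heal daemon log; a gfid that gets healed over and over points at the intermittent disconnect discussed below. A rough sketch only; the log path and exact message format are assumptions based on the glustershd lines quoted further down in this thread:

```python
import re
from collections import Counter

# Matches glustershd.log lines like:
#   ... afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal
#   on e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1  sinks=2
HEAL_RE = re.compile(r"Completed \w+ selfheal on ([0-9a-f-]{36})")

def heal_counts(log_text):
    """Count completed self-heals per gfid across the whole log."""
    return Counter(HEAL_RE.findall(log_text))

# hypothetical usage on a node (path is an assumption):
# text = open("/var/log/glusterfs/glustershd.log").read()
# for gfid, n in heal_counts(text).most_common(10):
#     print(n, gfid)
```

If the same few gfids dominate the output, the files are being re-dirtied between heals rather than never healing at all.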

On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N 
wrote:

>
> On 07/21/2017 11:41 PM, yayo (j) wrote:
>
> Hi,
>
> Sorry for following up again, but, checking the oVirt interface, I've found
> that oVirt reports the "engine" volume as an "arbiter" configuration and the
> "data" volume as a fully replicated volume. Check these screenshots:
>
>
> This is probably some refresh bug in the UI, Sahina might be able to tell
> you.
>
>
> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>
> But the "gluster volume info" command reports that both volumes are fully
> replicated:
>
>
> *Volume Name: data*
> *Type: Replicate*
> *Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d*
> *Status: Started*
> *Snapshot Count: 0*
> *Number of Bricks: 1 x 3 = 3*
> *Transport-type: tcp*
> *Bricks:*
> *Brick1: gdnode01:/gluster/data/brick*
> *Brick2: gdnode02:/gluster/data/brick*
> *Brick3: gdnode04:/gluster/data/brick*
> *Options Reconfigured:*
> *nfs.disable: on*
> *performance.readdir-ahead: on*
> *transport.address-family: inet*
> *storage.owner-uid: 36*
> *performance.quick-read: off*
> *performance.read-ahead: off*
> *performance.io-cache: off*
> *performance.stat-prefetch: off*
> *performance.low-prio-threads: 32*
> *network.remote-dio: enable*
> *cluster.eager-lock: enable*
> *cluster.quorum-type: auto*
> *cluster.server-quorum-type: server*
> *cluster.data-self-heal-algorithm: full*
> *cluster.locking-scheme: granular*
> *cluster.shd-max-threads: 8*
> *cluster.shd-wait-qlength: 1*
> *features.shard: on*
> *user.cifs: off*
> *storage.owner-gid: 36*
> *features.shard-block-size: 512MB*
> *network.ping-timeout: 30*
> *performance.strict-o-direct: on*
> *cluster.granular-entry-heal: on*
> *auth.allow: **
> *server.allow-insecure: on*
>
>
>
>
>
> *Volume Name: engine*
> *Type: Replicate*
> *Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515*
> *Status: Started*
> *Snapshot Count: 0*
> *Number of Bricks: 1 x 3 = 3*
> *Transport-type: tcp*
> *Bricks:*
> *Brick1: gdnode01:/gluster/engine/brick*
> *Brick2: gdnode02:/gluster/engine/brick*
> *Brick3: gdnode04:/gluster/engine/brick*
> *Options Reconfigured:*
> *nfs.disable: on*
> *performance.readdir-ahead: on*
> *transport.address-family: inet*
> *storage.owner-uid: 36*
> *performance.quick-read: off*
> *performance.read-ahead: off*
> *performance.io-cache: off*
> *performance.stat-prefetch: off*
> *performance.low-prio-threads: 32*
> *network.remote-dio: off*
> *cluster.eager-lock: enable*
> *cluster.quorum-type: auto*
> *cluster.server-quorum-type: server*
> *cluster.data-self-heal-algorithm: full*
> *cluster.locking-scheme: granular*
> *cluster.shd-max-threads: 8*
> *cluster.shd-wait-qlength: 1*
> *features.shard: on*
> *user.cifs: off*
> *storage.owner-gid: 36*
> *features.shard-block-size: 512MB*
> *network.ping-timeout: 30*
> *performance.strict-o-direct: on*
> *cluster.granular-entry-heal: on*
> *auth.allow: **
>
>   server.allow-insecure: on
>
>
> 2017-07-21 19:13 GMT+02:00 yayo (j) :
>
>> 2017-07-20 14:48 GMT+02:00 Ravishankar N :
>>
>>>
>>> But it does say something. All these gfids of completed heals in the
>>> log below are for the ones that you have given the getfattr output of.
>>> So what is likely happening is there is an intermittent connection problem
>>> between your mount and the brick process, leading to pending heals again
>>> after the heal gets completed, which is why the numbers are varying each
>>> time. You would need to check why that is the case.
>>> Hope this helps,
>>> Ravi
>>>
>>>
>>>
>>> *[2017-07-20 09:58:46.573079] I [MSGID: 108026]
>>> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
>>> Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327.
>>> sources=[0] 1  sinks=2*
>>> *[2017-07-20 09:59:22.995003] I [MSGID: 108026]
>>> [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
>>> 0-engine-replicate-0: performing metadata selfheal on
>>> f05b9742-2771-484a-85fc-5b6974bcef81*
>>> *[2017-07-20 09:59:22.999372] I [MSGID: 108026]
>>> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
>>> Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
>>> sources=[0] 1  sinks=2*
>>>
>>>
>>
>> Hi,
>>
>> following your suggestion, I've checked the "peer" status and I found
>> that there are too many names for the hosts; I don't know if this can be
>> the problem or part of it:
>>
>> *gluster peer status on NODE01:*
>> *Number of Peers: 2*
>>
>> *Hostname: dnode02.localdomain.local*
>> *Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd*
>> *State: Peer in Cluster (Connected)*
>> *Other names:*
>> *192.168.10.52*
>> *dnode02.localdomain.local*
>> *10.10.20.90*
>> *10.10.10.20*
>>
>>
>>
>>
>> *gluster peer