Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Yaniv Kaul
On Thu, Apr 20, 2017 at 1:32 PM, Nir Soffer  wrote:

>
>
> On Thu, Apr 20, 2017, 13:05, Piotr Kliczewski <piotr.kliczew...@gmail.com> wrote:
>
>> On Thu, Apr 20, 2017 at 11:49 AM, Yaniv Kaul  wrote:
>> >
>> >
>> > On Thu, Apr 20, 2017 at 11:55 AM, Piotr Kliczewski
>> >  wrote:
>> >>
>> >> On Thu, Apr 20, 2017 at 10:32 AM, Yaniv Kaul  wrote:
>> >> > No, that's not the issue.
>> >> > I've seen it happening few times.
>> >> >
>> >> > 1. It always with the ISO domain (which we don't use anyway in o-s-t)
>> >> > 2. Apparently, only one host is asking for a mount:
>> >> >  authenticated mount request from 192.168.201.4:713 for
>> /exports/nfs/iso
>> >> > (/exports/nfs/iso)
>> >> >
>> >> > (/var/log/messages of the NFS server)
>> >> >
>> >> > And indeed, you can see in[1] that host1 made the request and all is
>> >> > well on
>> >> > it.
>> >> >
>> >> > However, there are connection issues with host0 which cause a
>> timeout to
>> >> > connectStorageServer():
>> >> > From[2]:
>> >> >
>> >> > 2017-04-19 18:58:58,465-04 DEBUG
>> >> > [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
>> (ResponseWorker)
>> >> > []
>> >> > Message received:
>> >> >
>> >> > {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-
>> master-host0:192912448","message":"Vds
>> >> > timeout occured"},"id":null}
>> >> >
>> >> > 2017-04-19 18:58:58,475-04 ERROR
>> >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.
>> AuditLogDirector]
>> >> > (org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID:
>> >> > VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call
>> Stack:
>> >> > null,
>> >> > Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0
>> command
>> >> > ConnectStorageServerVDS failed: Message timeout which can be caused
>> by
>> >> > communication issues
>> >> > 2017-04-19 18:58:58,475-04 INFO
>> >> >
>> >> > [org.ovirt.engine.core.vdsbroker.vdsbroker.
>> ConnectStorageServerVDSCommand]
>> >> > (org.ovirt.thread.pool-7-thread-37) [755b908a] Command
>> >> >
>> >> > 'org.ovirt.engine.core.vdsbroker.vdsbroker.
>> ConnectStorageServerVDSCommand'
>> >> > return value '
>> >> > ServerConnectionStatusReturn:{status='Status [code=5022,
>> message=Message
>> >> > timeout which can be caused by communication issues]'}
>> >> >
>> >> >
>> >> > I wonder why, but on /var/log/messages[3], I'm seeing:
>> >> > Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm Executor
>> >> > WARN
>> >> > Worker blocked: > >> > {'params':
>> >> > {u'connectionParams': [{u'id': u'4ca8fc84-d872-4a7f-907f-
>> 9445bda7b6d1',
>> >> > u'connection': u'192.168.201.3:/exports/nfs/share1', u'iqn': u'',
>> >> > u'user':
>> >> > u'', u'tpgt': u'1', u'protocol_version': u'4.2', u'password':
>> >> > '',
>> >> > u'port': u''}], u'storagepoolID':
>> >> > u'----',
>> >> > u'domainType': 1}, 'jsonrpc': '2.0', 'method':
>> >> > u'StoragePool.connectStorageServer', 'id':
>> >> > u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
>> >> > duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
>> >> > ...
>> >> >
>> >>
>> >> I see following sequence:
>> >>
>> >> The message is sent:
>> >>
>> >> 2017-04-19 18:55:58,020-04 DEBUG
>> >> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
>> >> (org.ovirt.thread.pool-7-thread-37) [755b908a] Message sent: SEND
>> >> destination:jms.topic.vdsm_requests
>> >> content-length:381
>> >> ovirtCorrelationId:755b908a
>> >> reply-to:jms.topic.vdsm_responses
>> >>
>> >> > >> StoragePool.connectStorageServer, params:
>> >> {storagepoolID=----, domainType=1,
>> >> connectionParams=[{password=, protocol_version=4.2, port=, iqn=,
>> >> connection=192.168.201.3:/exports/nfs/share1,
>> >> id=4ca8fc84-d872-4a7f-907f-9445bda7b6d1, user=, tpgt=1}]}>
>> >>
>> >> There is no response for specified amount of time and we timeout:
>> >>
>> >> 2017-04-19 18:58:58,465-04 DEBUG
>> >> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
>> >> (ResponseWorker) [] Message received:
>> >>
>> >> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-
>> master-host0:192912448","message":"Vds
>> >> timeout occured"},"id":null}
>> >>
>> >> As Yaniv pointed here is why we never get the response:
>> >>
>> >> Apr 19 18:58:58 lago-basic-suite-master-host0 journal: vdsm Executor
>> >> WARN Worker blocked: > >> {'params': {u'connectionParams': [{u'id':
>> >> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1', u'connection':
>> >> u'192.168.201.3:/exports/nfs/share1', u'iqn': u'', u'user': u'',
>> >> u'tpgt': u'1', u'protocol_version': u'4.2', u'password': '',
>> >> u'port': u''}], u'storagepoolID':
>> >> u'----', u'domainType': 1}, 'jsonrpc':
>> >> '2.0', 'method': u'StoragePool.connectStorageServer', 'id':
>> >> u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
>> >> duration=180 at 0x2f44310> task#=9 at 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Nir Soffer
On Thu, Apr 20, 2017, 13:51, Yaniv Kaul wrote:

> On Thu, Apr 20, 2017 at 1:32 PM, Nir Soffer  wrote:
>
>>
>>
>> On Thu, Apr 20, 2017, 13:05, Piotr Kliczewski <piotr.kliczew...@gmail.com> wrote:
>>
>>> On Thu, Apr 20, 2017 at 11:49 AM, Yaniv Kaul  wrote:
>>> >
>>> >
>>> > On Thu, Apr 20, 2017 at 11:55 AM, Piotr Kliczewski
>>> >  wrote:
>>> >>
>>> >> On Thu, Apr 20, 2017 at 10:32 AM, Yaniv Kaul 
>>> wrote:
>>> >> > No, that's not the issue.
>>> >> > I've seen it happening few times.
>>> >> >
>>> >> > 1. It always with the ISO domain (which we don't use anyway in
>>> o-s-t)
>>> >> > 2. Apparently, only one host is asking for a mount:
>>> >> >  authenticated mount request from 192.168.201.4:713 for
>>> /exports/nfs/iso
>>> >> > (/exports/nfs/iso)
>>> >> >
>>> >> > (/var/log/messages of the NFS server)
>>> >> >
>>> >> > And indeed, you can see in[1] that host1 made the request and all is
>>> >> > well on
>>> >> > it.
>>> >> >
>>> >> > However, there are connection issues with host0 which cause a
>>> timeout to
>>> >> > connectStorageServer():
>>> >> > From[2]:
>>> >> >
>>> >> > 2017-04-19 18:58:58,465-04 DEBUG
>>> >> > [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
>>> (ResponseWorker)
>>> >> > []
>>> >> > Message received:
>>> >> >
>>> >> >
>>> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
>>> >> > timeout occured"},"id":null}
>>> >> >
>>> >> > 2017-04-19 18:58:58,475-04 ERROR
>>> >> >
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> >> > (org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID:
>>> >> > VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call
>>> Stack:
>>> >> > null,
>>> >> > Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0
>>> command
>>> >> > ConnectStorageServerVDS failed: Message timeout which can be caused
>>> by
>>> >> > communication issues
>>> >> > 2017-04-19 18:58:58,475-04 INFO
>>> >> >
>>> >> >
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>> >> > (org.ovirt.thread.pool-7-thread-37) [755b908a] Command
>>> >> >
>>> >> >
>>> 'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand'
>>> >> > return value '
>>> >> > ServerConnectionStatusReturn:{status='Status [code=5022,
>>> message=Message
>>> >> > timeout which can be caused by communication issues]'}
>>> >> >
>>> >> >
>>> >> > I wonder why, but on /var/log/messages[3], I'm seeing:
>>> >> > Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm Executor
>>> >> > WARN
>>> >> > Worker blocked: >> >> > {'params':
>>> >> > {u'connectionParams': [{u'id':
>>> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1',
>>> >> > u'connection': u'192.168.201.3:/exports/nfs/share1', u'iqn': u'',
>>> >> > u'user':
>>> >> > u'', u'tpgt': u'1', u'protocol_version': u'4.2', u'password':
>>> >> > '',
>>> >> > u'port': u''}], u'storagepoolID':
>>> >> > u'----',
>>> >> > u'domainType': 1}, 'jsonrpc': '2.0', 'method':
>>> >> > u'StoragePool.connectStorageServer', 'id':
>>> >> > u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
>>> >> > duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
>>> >> > ...
>>> >> >
>>> >>
>>> >> I see following sequence:
>>> >>
>>> >> The message is sent:
>>> >>
>>> >> 2017-04-19 18:55:58,020-04 DEBUG
>>> >> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
>>> >> (org.ovirt.thread.pool-7-thread-37) [755b908a] Message sent: SEND
>>> >> destination:jms.topic.vdsm_requests
>>> >> content-length:381
>>> >> ovirtCorrelationId:755b908a
>>> >> reply-to:jms.topic.vdsm_responses
>>> >>
>>> >> >> >> StoragePool.connectStorageServer, params:
>>> >> {storagepoolID=----, domainType=1,
>>> >> connectionParams=[{password=, protocol_version=4.2, port=, iqn=,
>>> >> connection=192.168.201.3:/exports/nfs/share1,
>>> >> id=4ca8fc84-d872-4a7f-907f-9445bda7b6d1, user=, tpgt=1}]}>
>>> >>
>>> >> There is no response for specified amount of time and we timeout:
>>> >>
>>> >> 2017-04-19 18:58:58,465-04 DEBUG
>>> >> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
>>> >> (ResponseWorker) [] Message received:
>>> >>
>>> >>
>>> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
>>> >> timeout occured"},"id":null}
>>> >>
>>> >> As Yaniv pointed here is why we never get the response:
>>> >>
>>> >> Apr 19 18:58:58 lago-basic-suite-master-host0 journal: vdsm Executor
>>> >> WARN Worker blocked: >> >> {'params': {u'connectionParams': [{u'id':
>>> >> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1', u'connection':
>>> >> u'192.168.201.3:/exports/nfs/share1', u'iqn': u'', u'user': u'',
>>> >> u'tpgt': u'1', u'protocol_version': u'4.2', u'password': '',
>>> >> u'port': u''}], u'storagepoolID':
>>> >> u'----', u'domainType': 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Piotr Kliczewski
On Thu, Apr 20, 2017 at 11:49 AM, Yaniv Kaul  wrote:
>
>
> On Thu, Apr 20, 2017 at 11:55 AM, Piotr Kliczewski
>  wrote:
>>
>> On Thu, Apr 20, 2017 at 10:32 AM, Yaniv Kaul  wrote:
>> > No, that's not the issue.
>> > I've seen it happening few times.
>> >
>> > 1. It always with the ISO domain (which we don't use anyway in o-s-t)
>> > 2. Apparently, only one host is asking for a mount:
>> >  authenticated mount request from 192.168.201.4:713 for /exports/nfs/iso
>> > (/exports/nfs/iso)
>> >
>> > (/var/log/messages of the NFS server)
>> >
>> > And indeed, you can see in[1] that host1 made the request and all is
>> > well on
>> > it.
>> >
>> > However, there are connection issues with host0 which cause a timeout to
>> > connectStorageServer():
>> > From[2]:
>> >
>> > 2017-04-19 18:58:58,465-04 DEBUG
>> > [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker)
>> > []
>> > Message received:
>> >
>> > {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
>> > timeout occured"},"id":null}
>> >
>> > 2017-04-19 18:58:58,475-04 ERROR
>> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> > (org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID:
>> > VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack:
>> > null,
>> > Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0 command
>> > ConnectStorageServerVDS failed: Message timeout which can be caused by
>> > communication issues
>> > 2017-04-19 18:58:58,475-04 INFO
>> >
>> > [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> > (org.ovirt.thread.pool-7-thread-37) [755b908a] Command
>> >
>> > 'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand'
>> > return value '
>> > ServerConnectionStatusReturn:{status='Status [code=5022, message=Message
>> > timeout which can be caused by communication issues]'}
>> >
>> >
>> > I wonder why, but on /var/log/messages[3], I'm seeing:
>> > Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm Executor
>> > WARN
>> > Worker blocked: > > {'params':
>> > {u'connectionParams': [{u'id': u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1',
>> > u'connection': u'192.168.201.3:/exports/nfs/share1', u'iqn': u'',
>> > u'user':
>> > u'', u'tpgt': u'1', u'protocol_version': u'4.2', u'password':
>> > '',
>> > u'port': u''}], u'storagepoolID':
>> > u'----',
>> > u'domainType': 1}, 'jsonrpc': '2.0', 'method':
>> > u'StoragePool.connectStorageServer', 'id':
>> > u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
>> > duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
>> > ...
>> >
>>
>> I see following sequence:
>>
>> The message is sent:
>>
>> 2017-04-19 18:55:58,020-04 DEBUG
>> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
>> (org.ovirt.thread.pool-7-thread-37) [755b908a] Message sent: SEND
>> destination:jms.topic.vdsm_requests
>> content-length:381
>> ovirtCorrelationId:755b908a
>> reply-to:jms.topic.vdsm_responses
>>
>> > StoragePool.connectStorageServer, params:
>> {storagepoolID=----, domainType=1,
>> connectionParams=[{password=, protocol_version=4.2, port=, iqn=,
>> connection=192.168.201.3:/exports/nfs/share1,
>> id=4ca8fc84-d872-4a7f-907f-9445bda7b6d1, user=, tpgt=1}]}>
>>
>> There is no response for specified amount of time and we timeout:
>>
>> 2017-04-19 18:58:58,465-04 DEBUG
>> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
>> (ResponseWorker) [] Message received:
>>
>> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
>> timeout occured"},"id":null}
>>
>> As Yaniv pointed here is why we never get the response:
>>
>> Apr 19 18:58:58 lago-basic-suite-master-host0 journal: vdsm Executor
>> WARN Worker blocked: > {'params': {u'connectionParams': [{u'id':
>> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1', u'connection':
>> u'192.168.201.3:/exports/nfs/share1', u'iqn': u'', u'user': u'',
>> u'tpgt': u'1', u'protocol_version': u'4.2', u'password': '',
>> u'port': u''}], u'storagepoolID':
>> u'----', u'domainType': 1}, 'jsonrpc':
>> '2.0', 'method': u'StoragePool.connectStorageServer', 'id':
>> u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
>> duration=180 at 0x2f44310> task#=9 at 0x2ac11d0>
>>
>> >
>> > 3. Also, there is still the infamous unable to update response issues.
>> >
>>
>> When we see timeout on a call our default behavior is to reconnect
>> when we clean pending messages.
>> As a result when we reconnect and receive a response from the message
>> sent before disconnect
>> we say it is unknown to us.
>
>
> But the example I've given was earlier than the storage issue?

The specific message that you refer to was a ping command, but it
timed out (3 secs)
before the response arrived and it was removed from tracking. When it finally

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Yaniv Kaul
On Thu, Apr 20, 2017 at 2:42 PM, Nir Soffer  wrote:

>
>
> On Thu, Apr 20, 2017, 13:51, Yaniv Kaul wrote:
>
>> On Thu, Apr 20, 2017 at 1:32 PM, Nir Soffer  wrote:
>>
>>>
>>>
>>> On Thu, Apr 20, 2017, 13:05, Piotr Kliczewski <piotr.kliczew...@gmail.com> wrote:
>>>
 On Thu, Apr 20, 2017 at 11:49 AM, Yaniv Kaul  wrote:
 >
 >
 > On Thu, Apr 20, 2017 at 11:55 AM, Piotr Kliczewski
 >  wrote:
 >>
 >> On Thu, Apr 20, 2017 at 10:32 AM, Yaniv Kaul 
 wrote:
 >> > No, that's not the issue.
 >> > I've seen it happening few times.
 >> >
 >> > 1. It always with the ISO domain (which we don't use anyway in
 o-s-t)
 >> > 2. Apparently, only one host is asking for a mount:
 >> >  authenticated mount request from 192.168.201.4:713 for
 /exports/nfs/iso
 >> > (/exports/nfs/iso)
 >> >
 >> > (/var/log/messages of the NFS server)
 >> >
 >> > And indeed, you can see in[1] that host1 made the request and all
 is
 >> > well on
 >> > it.
 >> >
 >> > However, there are connection issues with host0 which cause a
 timeout to
 >> > connectStorageServer():
 >> > From[2]:
 >> >
 >> > 2017-04-19 18:58:58,465-04 DEBUG
 >> > [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
 (ResponseWorker)
 >> > []
 >> > Message received:
 >> >
 >> > {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-
 master-host0:192912448","message":"Vds
 >> > timeout occured"},"id":null}
 >> >
 >> > 2017-04-19 18:58:58,475-04 ERROR
 >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.
 AuditLogDirector]
 >> > (org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID:
 >> > VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call
 Stack:
 >> > null,
 >> > Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0
 command
 >> > ConnectStorageServerVDS failed: Message timeout which can be
 caused by
 >> > communication issues
 >> > 2017-04-19 18:58:58,475-04 INFO
 >> >
 >> > [org.ovirt.engine.core.vdsbroker.vdsbroker.
 ConnectStorageServerVDSCommand]
 >> > (org.ovirt.thread.pool-7-thread-37) [755b908a] Command
 >> >
 >> > 'org.ovirt.engine.core.vdsbroker.vdsbroker.
 ConnectStorageServerVDSCommand'
 >> > return value '
 >> > ServerConnectionStatusReturn:{status='Status [code=5022,
 message=Message
 >> > timeout which can be caused by communication issues]'}
 >> >
 >> >
 >> > I wonder why, but on /var/log/messages[3], I'm seeing:
 >> > Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm
 Executor
 >> > WARN
 >> > Worker blocked: >>> >> > {'params':
 >> > {u'connectionParams': [{u'id': u'4ca8fc84-d872-4a7f-907f-
 9445bda7b6d1',
 >> > u'connection': u'192.168.201.3:/exports/nfs/share1', u'iqn': u'',
 >> > u'user':
 >> > u'', u'tpgt': u'1', u'protocol_version': u'4.2', u'password':
 >> > '',
 >> > u'port': u''}], u'storagepoolID':
 >> > u'----',
 >> > u'domainType': 1}, 'jsonrpc': '2.0', 'method':
 >> > u'StoragePool.connectStorageServer', 'id':
 >> > u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
 >> > duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
 >> > ...
 >> >
 >>
 >> I see following sequence:
 >>
 >> The message is sent:
 >>
 >> 2017-04-19 18:55:58,020-04 DEBUG
 >> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
 >> (org.ovirt.thread.pool-7-thread-37) [755b908a] Message sent: SEND
 >> destination:jms.topic.vdsm_requests
 >> content-length:381
 >> ovirtCorrelationId:755b908a
 >> reply-to:jms.topic.vdsm_responses
 >>
 >> >>> >> StoragePool.connectStorageServer, params:
 >> {storagepoolID=----, domainType=1,
 >> connectionParams=[{password=, protocol_version=4.2, port=, iqn=,
 >> connection=192.168.201.3:/exports/nfs/share1,
 >> id=4ca8fc84-d872-4a7f-907f-9445bda7b6d1, user=, tpgt=1}]}>
 >>
 >> There is no response for specified amount of time and we timeout:
 >>
 >> 2017-04-19 18:58:58,465-04 DEBUG
 >> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
 >> (ResponseWorker) [] Message received:
 >>
 >> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-
 master-host0:192912448","message":"Vds
 >> timeout occured"},"id":null}
 >>
 >> As Yaniv pointed here is why we never get the response:
 >>
 >> Apr 19 18:58:58 lago-basic-suite-master-host0 journal: vdsm Executor
 >> WARN Worker blocked: >>> >>> >> {'params': {u'connectionParams': [{u'id':
 >> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1', u'connection':
 >> 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Gil Shinar
Done 

On Thu, Apr 20, 2017 at 11:32 AM, Yaniv Kaul  wrote:

> No, that's not the issue.
> I've seen it happening few times.
>
> 1. It always with the ISO domain (which we don't use anyway in o-s-t)
> 2. Apparently, only one host is asking for a mount:
>  authenticated mount request from 192.168.201.4:713 for /exports/nfs/iso
> (/exports/nfs/iso)
>
> (/var/log/messages of the NFS server)
>
> And indeed, you can see in[1] that host1 made the request and all is well
> on it.
>
> However, there are connection issues with host0 which cause a timeout to
> connectStorageServer():
> From[2]:
>
> 2017-04-19 18:58:58,465-04 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
> (ResponseWorker) [] Message received: {"jsonrpc":"2.0","error":{"
> code":"lago-basic-suite-master-host0:192912448","message":"Vds timeout
> occured"},"id":null}
>
> 2017-04-19 18:58:58,475-04 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID: 
> VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, 
> Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0 command 
> ConnectStorageServerVDS failed: Message timeout which can be caused by 
> communication issues
> 2017-04-19 18:58:58,475-04 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
> (org.ovirt.thread.pool-7-thread-37) [755b908a] Command 
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand' 
> return value '
> ServerConnectionStatusReturn:{status='Status [code=5022, message=Message 
> timeout which can be caused by communication issues]'}
>
>
> I wonder why, but on /var/log/messages[3], I'm seeing:
> Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm Executor WARN
> Worker blocked:  {'params': {u'connectionParams': [{u'id': 
> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1',
> u'connection': u'192.168.201.3:/exports/nfs/share1', u'iqn': u'',
> u'user': u'', u'tpgt': u'1', u'protocol_version': u'4.2', u'password':
> '', u'port': u''}], u'storagepoolID': 
> u'----',
> u'domainType': 1}, 'jsonrpc': '2.0', 'method': 
> u'StoragePool.connectStorageServer',
> 'id': u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
> duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
> ...
>
>
> 3. Also, there is still the infamous unable to update response issues.
>
> {"jsonrpc":"2.0","method":"Host.ping","params":{},"id":"7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0"}�
> 2017-04-19 18:54:27,843-04 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient] 
> (org.ovirt.thread.pool-7-thread-15) [62d198cc] Message sent: SEND
> destination:jms.topic.vdsm_requests
> content-length:94
> ovirtCorrelationId:62d198cc
> reply-to:jms.topic.vdsm_responses
>
>  Host.ping, params: {}>
> 2017-04-19 18:54:27,885-04 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] 
> (org.ovirt.thread.pool-7-thread-16) [1f9aac13] SEND
> ovirtCorrelationId:1f9aac13
> destination:jms.topic.vdsm_requests
> reply-to:jms.topic.vdsm_responses
> content-length:94
>
> ...
>
> {"jsonrpc": "2.0", "id": "7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0", "result": 
> true}�
> 2017-04-19 18:54:32,132-04 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] 
> Message received: {"jsonrpc": "2.0", "id": 
> "7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0", "result": true}
> 2017-04-19 18:54:32,133-04 ERROR 
> [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able to 
> update response for "7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0"
>
>
> Would be nice to understand why.
>
>
>
> 4. Lastly, MOM is not running. Why?
>
>
> Please open a bug with the details from item #2 above.
> Y.
>
>
> [1] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6403/
> artifact/exported-artifacts/basic-suit-master-el7/test_
> logs/basic-suite-master/post-002_bootstrap.py/lago-basic-
> suite-master-host1/_var_log/vdsm/supervdsm.log
> [2] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6403/
> artifact/exported-artifacts/basic-suit-master-el7/test_
> logs/basic-suite-master/post-002_bootstrap.py/lago-basic-
> suite-master-engine/_var_log/ovirt-engine/engine.log
> [3] http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6403/
> artifact/exported-artifacts/basic-suit-master-el7/test_
> logs/basic-suite-master/post-002_bootstrap.py/lago-basic-
> suite-master-host0/_var_log/messages
>
>
>
>
>
> On Thu, Apr 20, 2017 at 9:27 AM, Gil Shinar  wrote:
>
>> Test failed: add_secondary_storage_domains
>> Link to suspected patches:
>> Link to Job: http://jenkins.ovirt.org/job/test-repo_ovirt_experiment
>> al_master/6403
>> Link to all logs: http://jenkins.ovirt.org/job/test-repo_ovirt_experimen
>> tal_master/6403/artifact/exported-artifacts/basic-suit-
>> 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Piotr Kliczewski
On Thu, Apr 20, 2017 at 10:32 AM, Yaniv Kaul  wrote:
> No, that's not the issue.
> I've seen it happening few times.
>
> 1. It always with the ISO domain (which we don't use anyway in o-s-t)
> 2. Apparently, only one host is asking for a mount:
>  authenticated mount request from 192.168.201.4:713 for /exports/nfs/iso
> (/exports/nfs/iso)
>
> (/var/log/messages of the NFS server)
>
> And indeed, you can see in[1] that host1 made the request and all is well on
> it.
>
> However, there are connection issues with host0 which cause a timeout to
> connectStorageServer():
> From[2]:
>
> 2017-04-19 18:58:58,465-04 DEBUG
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
> Message received:
> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
> timeout occured"},"id":null}
>
> 2017-04-19 18:58:58,475-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null,
> Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0 command
> ConnectStorageServerVDS failed: Message timeout which can be caused by
> communication issues
> 2017-04-19 18:58:58,475-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (org.ovirt.thread.pool-7-thread-37) [755b908a] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand'
> return value '
> ServerConnectionStatusReturn:{status='Status [code=5022, message=Message
> timeout which can be caused by communication issues]'}
>
>
> I wonder why, but on /var/log/messages[3], I'm seeing:
> Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm Executor WARN
> Worker blocked:  {u'connectionParams': [{u'id': u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1',
> u'connection': u'192.168.201.3:/exports/nfs/share1', u'iqn': u'', u'user':
> u'', u'tpgt': u'1', u'protocol_version': u'4.2', u'password': '',
> u'port': u''}], u'storagepoolID': u'----',
> u'domainType': 1}, 'jsonrpc': '2.0', 'method':
> u'StoragePool.connectStorageServer', 'id':
> u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
> duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
> ...
>

I see the following sequence:

The message is sent:

2017-04-19 18:55:58,020-04 DEBUG
[org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
(org.ovirt.thread.pool-7-thread-37) [755b908a] Message sent: SEND
destination:jms.topic.vdsm_requests
content-length:381
ovirtCorrelationId:755b908a
reply-to:jms.topic.vdsm_responses



There is no response within the specified amount of time and we time out:

2017-04-19 18:58:58,465-04 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
(ResponseWorker) [] Message received:
{"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
timeout occured"},"id":null}

As Yaniv pointed out, here is why we never get the response:

Apr 19 18:58:58 lago-basic-suite-master-host0 journal: vdsm Executor
WARN Worker blocked:  timeout=60,
duration=180 at 0x2f44310> task#=9 at 0x2ac11d0>
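
For illustration, here is a minimal, hypothetical sketch (not the actual vdsm
executor code; names and numbers are invented) of the pattern behind this
"Worker blocked" warning: a monitor thread notices that a worker's current
task has been running longer than its timeout and keeps warning as the
duration grows (60, 120, 180 ...).

# Minimal, hypothetical sketch of a blocked-worker watchdog (illustration only)
import logging
import threading
import time

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.DEBUG)
log = logging.getLogger("Executor")


class Worker(threading.Thread):
    def __init__(self, name, task, timeout):
        threading.Thread.__init__(self, name=name)
        self.daemon = True
        self.task = task
        self.timeout = timeout
        self.started = None

    def run(self):
        self.started = time.time()
        # If the task blocks (e.g. connectStorageServer stuck on an
        # unresponsive NFS server), run() never returns.
        self.task()


def monitor(workers, interval=60):
    while True:
        time.sleep(interval)
        for w in workers:
            if w.is_alive() and w.started is not None:
                duration = int(time.time() - w.started)
                if duration >= w.timeout:
                    log.warning("Worker blocked: %s timeout=%d duration=%d",
                                w.name, w.timeout, duration)


if __name__ == "__main__":
    stuck = Worker("jsonrpc/4", lambda: time.sleep(300), timeout=60)
    stuck.start()
    mon = threading.Thread(target=monitor, args=([stuck],))
    mon.daemon = True
    mon.start()
    time.sleep(200)  # expect warnings at roughly 60s, 120s and 180s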

>
> 3. Also, there is still the infamous unable to update response issues.
>

When we see a timeout on a call, our default behavior is to reconnect
and clean the pending messages.
As a result, when we reconnect and then receive a response to a message
sent before the disconnect,
we say it is unknown to us (see the sketch below).
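
The real logic lives in the vdsm-jsonrpc-java client; the following is only a
simplified Python illustration with invented names of why a reply arriving
after a reconnect ends up as "Not able to update response for <id>": pending
request ids are cleared on reconnect, so the late reply cannot be matched.

# Simplified illustration, not the vdsm-jsonrpc-java implementation
class JsonRpcTracker(object):
    def __init__(self):
        self.pending = {}  # request id -> method name

    def send(self, request_id, method):
        self.pending[request_id] = method

    def on_timeout_reconnect(self):
        # Default behavior described above: on a call timeout we reconnect
        # and drop everything that was still waiting for a reply.
        self.pending.clear()

    def on_response(self, request_id):
        if request_id in self.pending:
            print("matched response for %s (%s)"
                  % (request_id, self.pending.pop(request_id)))
        else:
            # Corresponds to the engine.log error:
            # "Not able to update response for <id>"
            print("Not able to update response for %r" % request_id)


tracker = JsonRpcTracker()
tracker.send("7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0", "Host.ping")
tracker.on_timeout_reconnect()   # the call timed out, the client reconnected
tracker.on_response("7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0")  # late reply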

> {"jsonrpc":"2.0","method":"Host.ping","params":{},"id":"7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0"}�
> 2017-04-19 18:54:27,843-04 DEBUG
> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
> (org.ovirt.thread.pool-7-thread-15) [62d198cc] Message sent: SEND
> destination:jms.topic.vdsm_requests
> content-length:94
> ovirtCorrelationId:62d198cc
> reply-to:jms.topic.vdsm_responses
>
>  Host.ping, params: {}>
> 2017-04-19 18:54:27,885-04 DEBUG
> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message]
> (org.ovirt.thread.pool-7-thread-16) [1f9aac13] SEND
> ovirtCorrelationId:1f9aac13
> destination:jms.topic.vdsm_requests
> reply-to:jms.topic.vdsm_responses
> content-length:94
>
> ...
>
> {"jsonrpc": "2.0", "id": "7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0", "result":
> true}�
> 2017-04-19 18:54:32,132-04 DEBUG
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
> Message received: {"jsonrpc": "2.0", "id":
> "7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0", "result": true}
> 2017-04-19 18:54:32,133-04 ERROR
> [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able
> to update response for "7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0"
>
>
> Would be nice to understand why.
>
>
>
> 4. Lastly, MOM is not running. Why?
>
>
> Please open a bug with the details from item #2 above.
> Y.
>
>
> [1]
> 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Yaniv Kaul
No, that's not the issue.
I've seen it happening a few times.

1. It's always with the ISO domain (which we don't use anyway in o-s-t)
2. Apparently, only one host is asking for a mount:
 authenticated mount request from 192.168.201.4:713 for /exports/nfs/iso
(/exports/nfs/iso)

(/var/log/messages of the NFS server)

And indeed, you can see in [1] that host1 made the request and all is well
on it.

However, there are connection issues with host0 which cause a timeout in
connectStorageServer():
From [2]:

2017-04-19 18:58:58,465-04 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
Message received:
{"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
timeout occured"},"id":null}

2017-04-19 18:58:58,475-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0
command ConnectStorageServerVDS failed: Message timeout which can be
caused by communication issues
2017-04-19 18:58:58,475-04 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-7-thread-37) [755b908a] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand'
return value '
ServerConnectionStatusReturn:{status='Status [code=5022,
message=Message timeout which can be caused by communication issues]'}


I wonder why, but on /var/log/messages[3], I'm seeing:
Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm Executor WARN
Worker blocked:  timeout=60,
duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
...


3. Also, there is still the infamous unable to update response issues.

{"jsonrpc":"2.0","method":"Host.ping","params":{},"id":"7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0"}�
2017-04-19 18:54:27,843-04 DEBUG
[org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
(org.ovirt.thread.pool-7-thread-15) [62d198cc] Message sent: SEND
destination:jms.topic.vdsm_requests
content-length:94
ovirtCorrelationId:62d198cc
reply-to:jms.topic.vdsm_responses


2017-04-19 18:54:27,885-04 DEBUG
[org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message]
(org.ovirt.thread.pool-7-thread-16) [1f9aac13] SEND
ovirtCorrelationId:1f9aac13
destination:jms.topic.vdsm_requests
reply-to:jms.topic.vdsm_responses
content-length:94

...

{"jsonrpc": "2.0", "id": "7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0",
"result": true}�
2017-04-19 18:54:32,132-04 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
(ResponseWorker) [] Message received: {"jsonrpc": "2.0", "id":
"7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0", "result": true}
2017-04-19 18:54:32,133-04 ERROR
[org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not
able to update response for "7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0"


Would be nice to understand why.



4. Lastly, MOM is not running. Why?


Please open a bug with the details from item #2 above.
Y.


[1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6403/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host1/_var_log/vdsm/supervdsm.log
[2]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6403/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
[3]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6403/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host0/_var_log/messages





On Thu, Apr 20, 2017 at 9:27 AM, Gil Shinar  wrote:

> Test failed: add_secondary_storage_domains
> Link to suspected patches:
> Link to Job: http://jenkins.ovirt.org/job/test-repo_ovirt_
> experimental_master/6403
> Link to all logs: http://jenkins.ovirt.org/job/test-repo_ovirt_
> experimental_master/6403/artifact/exported-artifacts/
> basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py
>
>
> Error seems to be:
>
> 2017-04-19 18:58:58,774-0400 ERROR (jsonrpc/2) [storage.TaskManager.Task]
> (Task='8f9699ed-cc2f-434b-aa1e-b3c8ff30324a') Unexpected error (task:871)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 878, in _run
>     return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 2709, in getStorageDomainInfo
>     dom = self.validateSdUUID(sdUUID)
>   File "/usr/share/vdsm/storage/hsm.py", line 298, in validateSdUUID
>     sdDom = sdCache.produce(sdUUID=sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 112, in produce
>     domain.getRealDomain()
>   File

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Yaniv Kaul
On Thu, Apr 20, 2017 at 11:55 AM, Piotr Kliczewski <
piotr.kliczew...@gmail.com> wrote:

> On Thu, Apr 20, 2017 at 10:32 AM, Yaniv Kaul  wrote:
> > No, that's not the issue.
> > I've seen it happening few times.
> >
> > 1. It always with the ISO domain (which we don't use anyway in o-s-t)
> > 2. Apparently, only one host is asking for a mount:
> >  authenticated mount request from 192.168.201.4:713 for /exports/nfs/iso
> > (/exports/nfs/iso)
> >
> > (/var/log/messages of the NFS server)
> >
> > And indeed, you can see in[1] that host1 made the request and all is
> well on
> > it.
> >
> > However, there are connection issues with host0 which cause a timeout to
> > connectStorageServer():
> > From[2]:
> >
> > 2017-04-19 18:58:58,465-04 DEBUG
> > [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
> (ResponseWorker) []
> > Message received:
> > {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-
> master-host0:192912448","message":"Vds
> > timeout occured"},"id":null}
> >
> > 2017-04-19 18:58:58,475-04 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID:
> > VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack:
> null,
> > Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0 command
> > ConnectStorageServerVDS failed: Message timeout which can be caused by
> > communication issues
> > 2017-04-19 18:58:58,475-04 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.
> ConnectStorageServerVDSCommand]
> > (org.ovirt.thread.pool-7-thread-37) [755b908a] Command
> > 'org.ovirt.engine.core.vdsbroker.vdsbroker.
> ConnectStorageServerVDSCommand'
> > return value '
> > ServerConnectionStatusReturn:{status='Status [code=5022, message=Message
> > timeout which can be caused by communication issues]'}
> >
> >
> > I wonder why, but on /var/log/messages[3], I'm seeing:
> > Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm Executor WARN
> > Worker blocked:  {'params':
> > {u'connectionParams': [{u'id': u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1',
> > u'connection': u'192.168.201.3:/exports/nfs/share1', u'iqn': u'',
> u'user':
> > u'', u'tpgt': u'1', u'protocol_version': u'4.2', u'password': '',
> > u'port': u''}], u'storagepoolID': u'----
> ',
> > u'domainType': 1}, 'jsonrpc': '2.0', 'method':
> > u'StoragePool.connectStorageServer', 'id':
> > u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
> > duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
> > ...
> >
>
> I see following sequence:
>
> The message is sent:
>
> 2017-04-19 18:55:58,020-04 DEBUG
> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
> (org.ovirt.thread.pool-7-thread-37) [755b908a] Message sent: SEND
> destination:jms.topic.vdsm_requests
> content-length:381
> ovirtCorrelationId:755b908a
> reply-to:jms.topic.vdsm_responses
>
>  StoragePool.connectStorageServer, params:
> {storagepoolID=----, domainType=1,
> connectionParams=[{password=, protocol_version=4.2, port=, iqn=,
> connection=192.168.201.3:/exports/nfs/share1,
> id=4ca8fc84-d872-4a7f-907f-9445bda7b6d1, user=, tpgt=1}]}>
>
> There is no response for specified amount of time and we timeout:
>
> 2017-04-19 18:58:58,465-04 DEBUG
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
> (ResponseWorker) [] Message received:
> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-
> master-host0:192912448","message":"Vds
> timeout occured"},"id":null}
>
> As Yaniv pointed here is why we never get the response:
>
> Apr 19 18:58:58 lago-basic-suite-master-host0 journal: vdsm Executor
> WARN Worker blocked:  {'params': {u'connectionParams': [{u'id':
> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1', u'connection':
> u'192.168.201.3:/exports/nfs/share1', u'iqn': u'', u'user': u'',
> u'tpgt': u'1', u'protocol_version': u'4.2', u'password': '',
> u'port': u''}], u'storagepoolID':
> u'----', u'domainType': 1}, 'jsonrpc':
> '2.0', 'method': u'StoragePool.connectStorageServer', 'id':
> u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
> duration=180 at 0x2f44310> task#=9 at 0x2ac11d0>
>
> >
> > 3. Also, there is still the infamous unable to update response issues.
> >
>
> When we see timeout on a call our default behavior is to reconnect
> when we clean pending messages.
> As a result when we reconnect and receive a response from the message
> sent before disconnect
> we say it is unknown to us.
>

But the example I've given was earlier than the storage issue?
Y.


>
> > {"jsonrpc":"2.0","method":"Host.ping","params":{},"id":"
> 7cb6052f-c732-4f7c-bd2d-e48c2ae1f5e0"}�
> > 2017-04-19 18:54:27,843-04 DEBUG
> > [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
> > (org.ovirt.thread.pool-7-thread-15) [62d198cc] Message sent: SEND
> > destination:jms.topic.vdsm_requests
> > content-length:94
> > 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Nir Soffer
On Thu, Apr 20, 2017, 13:05, Piotr Kliczewski <piotr.kliczew...@gmail.com> wrote:

> On Thu, Apr 20, 2017 at 11:49 AM, Yaniv Kaul  wrote:
> >
> >
> > On Thu, Apr 20, 2017 at 11:55 AM, Piotr Kliczewski
> >  wrote:
> >>
> >> On Thu, Apr 20, 2017 at 10:32 AM, Yaniv Kaul  wrote:
> >> > No, that's not the issue.
> >> > I've seen it happening few times.
> >> >
> >> > 1. It always with the ISO domain (which we don't use anyway in o-s-t)
> >> > 2. Apparently, only one host is asking for a mount:
> >> >  authenticated mount request from 192.168.201.4:713 for
> /exports/nfs/iso
> >> > (/exports/nfs/iso)
> >> >
> >> > (/var/log/messages of the NFS server)
> >> >
> >> > And indeed, you can see in[1] that host1 made the request and all is
> >> > well on
> >> > it.
> >> >
> >> > However, there are connection issues with host0 which cause a timeout
> to
> >> > connectStorageServer():
> >> > From[2]:
> >> >
> >> > 2017-04-19 18:58:58,465-04 DEBUG
> >> > [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
> (ResponseWorker)
> >> > []
> >> > Message received:
> >> >
> >> >
> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
> >> > timeout occured"},"id":null}
> >> >
> >> > 2017-04-19 18:58:58,475-04 ERROR
> >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> > (org.ovirt.thread.pool-7-thread-37) [755b908a] EVENT_ID:
> >> > VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack:
> >> > null,
> >> > Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0
> command
> >> > ConnectStorageServerVDS failed: Message timeout which can be caused by
> >> > communication issues
> >> > 2017-04-19 18:58:58,475-04 INFO
> >> >
> >> >
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> >> > (org.ovirt.thread.pool-7-thread-37) [755b908a] Command
> >> >
> >> >
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand'
> >> > return value '
> >> > ServerConnectionStatusReturn:{status='Status [code=5022,
> message=Message
> >> > timeout which can be caused by communication issues]'}
> >> >
> >> >
> >> > I wonder why, but on /var/log/messages[3], I'm seeing:
> >> > Apr 19 18:56:58 lago-basic-suite-master-host0 journal: vdsm Executor
> >> > WARN
> >> > Worker blocked:  >> > {'params':
> >> > {u'connectionParams': [{u'id':
> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1',
> >> > u'connection': u'192.168.201.3:/exports/nfs/share1', u'iqn': u'',
> >> > u'user':
> >> > u'', u'tpgt': u'1', u'protocol_version': u'4.2', u'password':
> >> > '',
> >> > u'port': u''}], u'storagepoolID':
> >> > u'----',
> >> > u'domainType': 1}, 'jsonrpc': '2.0', 'method':
> >> > u'StoragePool.connectStorageServer', 'id':
> >> > u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
> >> > duration=60 at 0x2f44310> task#=9 at 0x2ac11d0>
> >> > ...
> >> >
> >>
> >> I see following sequence:
> >>
> >> The message is sent:
> >>
> >> 2017-04-19 18:55:58,020-04 DEBUG
> >> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.StompCommonClient]
> >> (org.ovirt.thread.pool-7-thread-37) [755b908a] Message sent: SEND
> >> destination:jms.topic.vdsm_requests
> >> content-length:381
> >> ovirtCorrelationId:755b908a
> >> reply-to:jms.topic.vdsm_responses
> >>
> >>  >> StoragePool.connectStorageServer, params:
> >> {storagepoolID=----, domainType=1,
> >> connectionParams=[{password=, protocol_version=4.2, port=, iqn=,
> >> connection=192.168.201.3:/exports/nfs/share1,
> >> id=4ca8fc84-d872-4a7f-907f-9445bda7b6d1, user=, tpgt=1}]}>
> >>
> >> There is no response for specified amount of time and we timeout:
> >>
> >> 2017-04-19 18:58:58,465-04 DEBUG
> >> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
> >> (ResponseWorker) [] Message received:
> >>
> >>
> {"jsonrpc":"2.0","error":{"code":"lago-basic-suite-master-host0:192912448","message":"Vds
> >> timeout occured"},"id":null}
> >>
> >> As Yaniv pointed here is why we never get the response:
> >>
> >> Apr 19 18:58:58 lago-basic-suite-master-host0 journal: vdsm Executor
> >> WARN Worker blocked:  >> {'params': {u'connectionParams': [{u'id':
> >> u'4ca8fc84-d872-4a7f-907f-9445bda7b6d1', u'connection':
> >> u'192.168.201.3:/exports/nfs/share1', u'iqn': u'', u'user': u'',
> >> u'tpgt': u'1', u'protocol_version': u'4.2', u'password': '',
> >> u'port': u''}], u'storagepoolID':
> >> u'----', u'domainType': 1}, 'jsonrpc':
> >> '2.0', 'method': u'StoragePool.connectStorageServer', 'id':
> >> u'057da9c2-1e67-4c2f-9511-7d9de250386b'} at 0x2f44110> timeout=60,
> >> duration=180 at 0x2f44310> task#=9 at 0x2ac11d0>
>

This means the connection attempt was stuck for 180 seconds. We need to check
whether the mount was stuck, or maybe there is some issue in supervdsm running
this.

This is a new warning introduced lately, before a stuck 

[ovirt-devel] Fwd: Announcing Bugzilla 5 Public Beta!

2017-04-20 Thread Sandro Bonazzola
-- Forwarded message --
From: Christine Freitas 
Date: Thu, Apr 20, 2017 at 12:45 AM
Subject: Announcing Bugzilla 5 Public Beta!


Hello All,


We are pleased to announce Red Hat's Bugzilla 5 beta [1]! We’re inviting
all of you to participate.

We encourage you to test your current scripts against this new version and
take part in the beta discussions on the Fedora development list [2].
Partners and customers may also use their existing communications channels
to share feedback or questions. We ask that you provide feedback or
questions by Wednesday, May 17th.

Here is a short list of some of the changes in Bugzilla 5:


   - Major improvements in the WebServices interface, including a new
     REST-like endpoint, allowing clients to access data using standard HTTP
     calls for easy development.
   - The UI has been significantly overhauled for a modern browsing
     experience.
   - Performance improvements, including caching improvements to allow faster
     access to certain types of data.
   - Red Hat Associates, Customers and Fedora Account System users can now
     log in using SAML.
   - The addition of some of the Bayoteers extensions allowing features such
     as inline editing of bugs in search results, team management and scrum
     tools, etc.
   - Ye Olde diff viewer has been replaced with the modern diff2html diff
     viewer.
   - Improved, updated documentation including a rewrite using the
     reStructuredText format, which allows documentation to be more easily
     converted into different formats such as HTML and PDF, etc.


The official release date for Bugzilla 5 will be determined based on the
beta feedback. We will communicate to you as the beta progresses.

For more information refer to:

https://beta-bugzilla.redhat.com/page.cgi?id=whats-new.html

https://beta-bugzilla.redhat.com/page.cgi?id=release-notes.html

https://beta-bugzilla.redhat.com/page.cgi?id=faq.html

https://beta-bugzilla.redhat.com/docs/en/html/using/index.html

https://beta-bugzilla.redhat.com/docs/en/html/api/index.html

Cheers, the Red Hat Bugzilla team.

1: https://beta-bugzilla.redhat.com/

2: https://lists.fedoraproject.org/archives/list/devel%40lists.
fedoraproject.org/




-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Tests failure when building the latest master

2017-04-20 Thread Shmuel Melamud
Hi!

I just tried to build the latest Engine from master and got a lot of
test failures, all related to ThreadPoolUtil. For example:

onSuccessReturnValueOk(org.ovirt.engine.core.bll.pm.StartVdsCommandTest)
 Time elapsed: 0 sec  <<< ERROR!
java.lang.ExceptionInInitializerError
at 
org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(VdsCommand.java:103)
at 
org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(VdsCommand.java:99)
at 
org.ovirt.engine.core.bll.pm.StartVdsCommand.teardown(StartVdsCommand.java:131)
at 
org.ovirt.engine.core.bll.pm.FenceVdsBaseCommand.executeCommand(FenceVdsBaseCommand.java:118)
.

Caused by: java.lang.NullPointerException
at org.ovirt.engine.core.common.config.Config.getValue(Config.java:22)
at org.ovirt.engine.core.common.config.Config.getValue(Config.java:18)
at 
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalThreadExecutor.(ThreadPoolUtil.java:34)
at 
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil.(ThreadPoolUtil.java:110)
at 
org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(VdsCommand.java:103)
at 
org.ovirt.engine.core.bll.pm.StartVdsCommand$MockitoMock$1274036240.runSleepOnReboot$accessor$RBFb71Df(Unknown
Source)


Do you know what may cause this?

Shmuel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.1 ] [ 23-03-2017 ] [ 002_bootstrap.add_secondary_storage_domains ]

2017-04-20 Thread Yaniv Kaul
That got stuck in my drafts. Seems like a familiar issue.

I think it's another instance of mount not returning:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
4.1/1054/artifact/exported-artifacts/basic-suit-4.1-el7/
test_logs/basic-suite-4.1/post-002_bootstrap.py/lago-
basic-suite-4-1-host0/_var_log/vdsm/supervdsm.log
shows:
MainProcess|jsonrpc/0::DEBUG::2017-03-23 07:20:38,143::supervdsmServer:
:93::SuperVdsm.ServerCallback::(wrapper) call mount with (u'192.168.201.2:
/exports/nfs/iso', u'/rhev/data-center/mnt/192.168.201.2:_exports_nfs_iso')
{'vfstype': 'nfs', 'mntOpts':
'soft,nosharecache,timeo=600,retrans=6,nfsvers=3',
'timeout': None, 'cgroup': None}
MainProcess|jsonrpc/0::DEBUG::2017-03-23
07:20:38,143::commands::69::root::(execCmd)
/usr/bin/taskset --cpu-list 0-1 /usr/bin/mount -t nfs -o
soft,nosharecache,timeo=600,retrans=6,nfsvers=3 192.168.201.2:/exports/nfs/iso
/rhev/data-center/mnt/192.168.201.2:_exports_nfs_iso (cwd None)
MainProcess|jsonrpc/2::DEBUG::2017-03-23 07:23:38,704::supervdsmServer:
:93::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with () {}

And there's no return of that mount command to supervdsm.

The storage does get those requests:
Mar 23 07:19:56 lago-basic-suite-4-1-engine rpc.mountd[5171]: authenticated
mount request from 192.168.201.3:911 for /exports/nfs/iso (/exports/nfs/iso)
Mar 23 07:20:38 lago-basic-suite-4-1-engine rpc.mountd[5171]: authenticated
mount request from 192.168.201.4:905 for /exports/nfs/iso (/exports/nfs/iso)
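
For illustration only (this is not what supervdsm does today; the command line
is copied from the log above and the 180 second timeout is an arbitrary
example): one way to surface a hanging NFS mount instead of blocking the
worker indefinitely is to run the same mount command under a hard timeout,
e.g. with Python 3's subprocess:

# Sketch only: requires Python 3.5+ for subprocess.run(..., timeout=...)
import subprocess

MOUNT_CMD = [
    "/usr/bin/mount", "-t", "nfs",
    "-o", "soft,nosharecache,timeo=600,retrans=6,nfsvers=3",
    "192.168.201.2:/exports/nfs/iso",
    "/rhev/data-center/mnt/192.168.201.2:_exports_nfs_iso",
]


def mount_with_timeout(cmd, timeout=180):
    try:
        subprocess.run(cmd, check=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        print("mount did not return within %s seconds - probably hung" % timeout)
    except subprocess.CalledProcessError as e:
        print("mount failed with rc=%s" % e.returncode)


if __name__ == "__main__":
    mount_with_timeout(MOUNT_CMD)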



On Thu, Mar 23, 2017 at 2:50 PM, Shlomo Ben David 
wrote:

> Hi,
>
>
> Test failed: [ 002_bootstrap.add_secondary_storage_domains ]
> Link to suspected patches: N/A
> Link to Job: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_
> 4.1/1054
> Link to all logs: http://jenkins.ovirt.org/job/
> test-repo_ovirt_experimental_4.1/1054/artifact/exported-
> artifacts/basic-suit-4.1-el7/test_logs/basic-suite-4.1/
> post-002_bootstrap.py/
>
> Error snippet from the log:
>
> 
>
> 2017-03-23 07:23:38,867-0400 ERROR (jsonrpc/2) [storage.TaskManager.Task]
> (Task='8347d74c-92fe-4371-bc84-1314a43a2971') Unexpected error (task:870)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 877, in _run
> return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in
> wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 1159, in attachStorageDomain
> pool.attachSD(sdUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
> 79, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
> dom = sdCache.produce(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 112, in produce
> domain.getRealDomain()
>   File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
> return self._cache._realProduce(self._sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 136, in _realProduce
> domain = self._findDomain(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 153, in _findDomain
> return findMethod(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 178, in _findUnfetchedDomain
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'127fbe66-204c-4c6d-b521-d0f431af2b6c',)
>
> 
>
>
> Best Regards,
>
> Shlomi Ben-David | Software Engineer | Red Hat ISRAEL
> RHCSA | RHCVA | RHCE
> IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
>
> OPEN SOURCE - 1 4 011 && 011 4 1
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Tests failure when building the latest master

2017-04-20 Thread Ondra Machacek
I am very sorry, patch [1] broke it. I am sending a fix now.
Sorry again.

[1] https://gerrit.ovirt.org/#/c/75577/

On Thu, Apr 20, 2017 at 5:31 PM, Shmuel Melamud  wrote:

> Hi!
>
> Just tried to build the latest Engine from master and got a lot of
> test failures. All related to ThreadPoolUtil. For example:
>
> onSuccessReturnValueOk(org.ovirt.engine.core.bll.pm.StartVdsCommandTest)
>  Time elapsed: 0 sec  <<< ERROR!
> java.lang.ExceptionInInitializerError
> at org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(
> VdsCommand.java:103)
> at org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(
> VdsCommand.java:99)
> at org.ovirt.engine.core.bll.pm.StartVdsCommand.teardown(
> StartVdsCommand.java:131)
> at org.ovirt.engine.core.bll.pm.FenceVdsBaseCommand.
> executeCommand(FenceVdsBaseCommand.java:118)
> .
>
> Caused by: java.lang.NullPointerException
> at org.ovirt.engine.core.common.config.Config.getValue(Config.
> java:22)
> at org.ovirt.engine.core.common.config.Config.getValue(Config.
> java:18)
> at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$
> InternalThreadExecutor.(ThreadPoolUtil.java:34)
> at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil.<
> clinit>(ThreadPoolUtil.java:110)
> at org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(
> VdsCommand.java:103)
> at org.ovirt.engine.core.bll.pm.StartVdsCommand$MockitoMock$
> 1274036240.runSleepOnReboot$accessor$RBFb71Df(Unknown
> Source)
> 
>
> Do you know what may cause this?
>
> Shmuel
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Tests failure when building the latest master

2017-04-20 Thread Ondra Machacek
Fixed, sorry for the inconvenience.

On Apr 20, 2017 6:46 PM, "Ondra Machacek"  wrote:

> I am very sorry, this[1] patch broke it. I am sending a fix now.
> Sorry again.
>
> [1] https://gerrit.ovirt.org/#/c/75577/
>
> On Thu, Apr 20, 2017 at 5:31 PM, Shmuel Melamud 
> wrote:
>
>> Hi!
>>
>> Just tried to build the latest Engine from master and got a lot of
>> test failures. All related to ThreadPoolUtil. For example:
>>
>> onSuccessReturnValueOk(org.ovirt.engine.core.bll.pm.StartVdsCommandTest)
>>  Time elapsed: 0 sec  <<< ERROR!
>> java.lang.ExceptionInInitializerError
>> at org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(VdsCom
>> mand.java:103)
>> at org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(VdsCom
>> mand.java:99)
>> at org.ovirt.engine.core.bll.pm.StartVdsCommand.teardown(StartV
>> dsCommand.java:131)
>> at org.ovirt.engine.core.bll.pm.FenceVdsBaseCommand.executeComm
>> and(FenceVdsBaseCommand.java:118)
>> .
>>
>> Caused by: java.lang.NullPointerException
>> at org.ovirt.engine.core.common.config.Config.getValue(Config.j
>> ava:22)
>> at org.ovirt.engine.core.common.config.Config.getValue(Config.j
>> ava:18)
>> at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$Intern
>> alThreadExecutor.(ThreadPoolUtil.java:34)
>> at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil.> t>(ThreadPoolUtil.java:110)
>> at org.ovirt.engine.core.bll.VdsCommand.runSleepOnReboot(VdsCom
>> mand.java:103)
>> at org.ovirt.engine.core.bll.pm.StartVdsCommand$MockitoMock$127
>> 4036240.runSleepOnReboot$accessor$RBFb71Df(Unknown
>> Source)
>> 
>>
>> Do you know what may cause this?
>>
>> Shmuel
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20.4.17 ] [ add_secondary_storage_domains ]

2017-04-20 Thread Gil Shinar
Test failed: add_secondary_storage_domains
Link to suspected patches:
Link to Job:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6403
Link to all logs:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/6403/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py


Error seems to be:

2017-04-19 18:58:58,774-0400 ERROR (jsonrpc/2) [storage.TaskManager.Task]
(Task='8f9699ed-cc2f-434b-aa1e-b3c8ff30324a') Unexpected error (task:871)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 878, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2709, in getStorageDomainInfo
    dom = self.validateSdUUID(sdUUID)
  File "/usr/share/vdsm/storage/hsm.py", line 298, in validateSdUUID
    sdDom = sdCache.produce(sdUUID=sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 112, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 136, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 153, in _findDomain
    return findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 178, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'ac3bbc93-26ba-4ea8-8e76-c5b761f01931',)
2017-04-19 18:58:58,777-0400 INFO  (jsonrpc/2) [storage.TaskManager.Task]
(Task='8f9699ed-cc2f-434b-aa1e-b3c8ff30324a') aborting: Task is aborted: 'Storage domain does not exist' - code 358 (task:1176)
2017-04-19 18:58:58,777-0400 ERROR (jsonrpc/2) [storage.Dispatcher] {'status': {'message': "Storage domain does not exist: (u'ac3bbc93-26ba-4ea8-8e76-c5b761f01931',)", 'code': 358}} (dispatcher:78)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Announcing VM Portal 0.1.4

2017-04-20 Thread Marek Libra
Hello All,

Let me announce the availability of the VM Portal v0.1.4 for preliminary
testing.
We are looking forward to your feedback, which we will try to incorporate
into the upcoming stable 1.0.0 version.

The VM Portal aims to be a drop-in replacement for the existing Basic User
Portal.
A revised set of Extended User Portal features will be implemented so that it
can ideally replace that one as well.

The VM Portal has been installed by default since oVirt 4.1.

*The simplest way to try the latest version is via Docker; see [1].*
Once oVirt credentials are entered and initialization has finished, you can
access it at [2].

If you prefer to stay as close to the production setup as possible, the
latest rpms are available in the project's yum repo [3].
You can then access the portal at [4].

Prerequisites: The VM Portal requires ovirt-engine 4.0+ and has so far mostly
been tested on 4.1.

Please note that the docker image is currently meant only to simplify user
testing and is not ready for a production setup.
Unless decided otherwise in the future, stable releases are still planned to
be deployed via rpms.

For issue reports or enhancement ideas, please use the project's GitHub issue
tracker [5].

Thank you for your feedback,
Marek


[1] docker run --rm -it -e ENGINE_URL=https://[OVIRT.ENGINE.FQDN]/ovirt-engine/
-p 3000:3000 mareklibra/ovirt-web-ui:latest
[2] http://localhost:3000
[3] https://people.redhat.com/mlibra/repos/ovirt-web-ui
[4] https://[OVIRT.ENGINE.FQDN]/ovirt-engine/web-ui
[5] https://github.com/oVirt/ovirt-web-ui/issues


-- 

Marek Libra

senior software engineer

Red Hat Czech

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-users] Announcing VM Portal 0.1.4

2017-04-20 Thread Michal Skrivanek

> On 20 Apr 2017, at 09:22, Nathanaël Blanchet  wrote:
> 
> Works well, good job, but I can't see what new features VM Portal
> brings compared to the Basic User Portal.
> 
> 

It depends on the point of view you look at it from; there's the “Goals” section on
https://github.com/oVirt/ovirt-web-ui
But it's a great question that you can help with: what new features would you
like to see? Please go ahead and file suggestions and bugs on the project page.

Thanks,
michal


___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-users] Announcing VM Portal 0.1.4

2017-04-20 Thread Nathanaël Blanchet
Works well, good job, but I can't see what new features VM Portal
brings compared to the Basic User Portal.

--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel