[ovirt-users] Re: Gluster quorum issue on 3-node HCI with extra 5-nodes as compute and storage nodes

2020-09-15 Thread Strahil Nikolov via Users
As I mentioned in the Gluster Slack, start by providing the output of some CLI commands:
gluster pool list
gluster peer status
gluster volume list
gluster volume status
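If it's easier, you can capture all of that in one go and attach the result,
e.g. (the output path is just an example):

  (gluster pool list; gluster peer status; gluster volume list; gluster volume status) > /tmp/gluster-state.txt 2>&1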

Best Regards,
Strahil Nikolov






On Monday, 14 September 2020, 16:24:04 GMT+3, tho...@hoberg.net wrote:





Yes, I've also posted this on the Gluster Slack. But I am using Gluster mostly 
because it's part of oVirt HCI, so don't just send me away, please!

Problem: GlusterD refusing to start due to quorum issues for volumes where it 
isn’t contributing any brick

(I've had this before on a different farm, but there it was transitory. Now I
can observe it more reliably, which is why I'm opening a new topic.)

In a test farm with recycled servers, I started running Gluster via oVirt
3-node HCI, because I originally got 3 machines.
They were set up as group A in a 2:1 (replica:arbiter) oVirt HCI setup with
'engine', 'vmstore' and 'data' volumes, one brick on each node.

I then got another five machines with hardware specs rather different from
group A, so I set those up as group B, mostly to act as compute nodes but also
to provide extra storage, mainly to be used externally as GlusterFS shares. It
took a bit of fiddling with Ansible, but I got these 5 nodes to serve two more
Gluster volumes, 'tape' and 'scratch', using dispersed bricks (4 data : 1
redundancy), RAID5 in my mind.
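For illustration only, a 4+1 dispersed volume like 'scratch' could be created
with something along these lines; the hostnames and brick paths here are
placeholders, not my actual ones:

  gluster volume create scratch disperse 5 redundancy 1 \
      nodeB1:/bricks/scratch nodeB2:/bricks/scratch nodeB3:/bricks/scratch \
      nodeB4:/bricks/scratch nodeB5:/bricks/scratch
  gluster volume start scratch

With 5 bricks at redundancy 1 this tolerates the loss of any single brick,
which is why RAID5 feels like the right analogy to me.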

The two groups are in one Gluster, not because they serve bricks to the same
volumes, but because oVirt doesn't like nodes to be in different Glusters (or
rather, it doesn't like them to already be in a Gluster when you add them as a
host node). But the two groups provide bricks to distinct volumes; there is no
overlap.

After setup, things ran fine for weeks, but now I needed to restart a machine
from group B, which has 'tape' and 'scratch' bricks, but none from the original
oVirt 'engine', 'vmstore' and 'data' volumes in group A. Yet the gluster daemon
refuses to start, citing a loss of quorum for those three volumes, even though
it has no bricks in them… which makes no sense to me.

I am afraid the source of the issue is conceptual: I clearly don't fully
understand some of Gluster's design assumptions.
And I'm afraid the design assumptions of Gluster and of oVirt (even with HCI)
are not as closely aligned as one might assume from the marketing material on
the oVirt home page.

But most of all I'd like to know: How do I fix this now?

I can't heal 'tape' and 'scratch', which are drifting ever further apart while
glusterd on this group B machine refuses to come online for lack of quorum on
volumes to which it contributes no bricks.
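For reference, this is roughly what I have been looking at so far (the volume
names are mine; the log path should be the default one):

  # on the group B node whose glusterd refuses to start:
  systemctl status glusterd
  grep -i quorum /var/log/glusterfs/glusterd.log | tail -n 20
  # from a node where glusterd is running:
  gluster volume get engine all | grep -i quorum
  gluster volume heal tape info
  gluster volume heal scratch info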
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BENAUHUJGCDQUDKNNTSVSIYMROMTP5Z3/


[ovirt-users] Re: Gluster quorum issue on 3-node HCI with extra 5-nodes as compute and storage nodes

2020-09-14 Thread Thomas Hoberg

Following up on my own post from 14.09.2020, 15:23:

Sorry, twice over:
1. It is a duplicate post, because the delay before posts show up on the
web site keeps getting longer (as I am responding via mail, the first post is
still not shown...)


2. It seems to have been a wild goose chase: the gluster daemon on the
group B node did eventually regain quorum (or return to its senses) some
time later... the error message is pretty scary and IMHO somewhat
misleading, but...


With oVirt one must learn to be patient; evidently all that built-in
self-healing depends on state machines turning their cogs and gears, not on
admins pushing for things to happen... sorry!
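(Just a sketch of what the waiting looked like from the shell; the per-brick
heal backlog shows up as "Number of entries" in the heal info output:)

  # watch the heal backlog shrink once the node is back in the pool
  watch -n 60 'gluster volume heal tape info | grep "Number of entries"'
  # and the same for the 'scratch' volume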

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:





[ovirt-users] Re: Gluster quorum

2018-05-30 Thread Demeter Tibor
Dear Jim, 

Thank you for your help, now it's working again!!! :) 
Have a nice day! 

Regards, 

Tibor 

- On May 29, 2018, 23:57, Jim Kusznir wrote:

> I had the same problem when I upgraded to 4.2. I found that if I went to the
> brick in the UI and selected it, there was a "start" button in the upper-right
> of the gui. clicking that resolved this problem a few minutes later.
> I had to repeat for each volume that showed a brick down for which that brick
> was not actually down.

> --Jim

> On Tue, May 29, 2018 at 6:34 AM, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
> |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> I've successfully upgraded my hosts and I could raise the cluster level to 
>> 4.2.
>> Everything seems fine, but the monitoring problem does not resolved. My 
>> bricks
>> on first node are shown down (red) , but the glusterfs working fine (I 
>> verified
>> in terminal).

>> I've attached my engine.log.

>> Thanks in advance,

>> R,
>> Tibor

>> - 2018. máj.. 28., 14:59, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > írta:

>>> Hi,
>>> Ok I will try it.

>>> In this case, is it possible to remove and re-add a host that member of HA
>>> gluster ? This is an another task, but I need to separate my gluster network
>>> from my ovirtmgmt network.
>>> What is the recommended way for do this?

>>> It is not important now, but I need to do in future.

>>> I will attach my engine.log after upgrade my host.

>>> Thanks,
>>> Regards.

>>> Tibor

>>> - 2018. máj.. 28., 14:44, Sahina Bose < [ mailto:sab...@redhat.com |
>>> sab...@redhat.com ] > írta:

 On Mon, May 28, 2018 at 4:47 PM, Demeter Tibor < [ 
 mailto:tdeme...@itsmart.hu |
 tdeme...@itsmart.hu ] > wrote:

> Dear Sahina,

> Yes, exactly. I can check that check box, but I don't know how is safe 
> that. Is
> it safe?

 It is safe - if you can ensure that only one host is put into maintenance 
 at a
 time.

> I want to upgrade all of my host. If it will done, then the monitoring 
> will work
> perfectly?

 If it does not please provide engine.log again once you've upgraded all the
 hosts.

> Thanks.
> R.

> Tibor

> - 2018. máj.. 28., 10:09, Sahina Bose < [ mailto:sab...@redhat.com |
> sab...@redhat.com ] > írta:

>> On Mon, May 28, 2018 at 1:06 PM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Hi,

>>> Somebody could answer to my question please?
>>> It is very important for me, I could no finish my upgrade process (from 
>>> 4.1 to
>>> 4.2) since 9th May!

>> Can you explain how the upgrade process is blocked due to the monitoring?
>> If it's because you cannot move the host to maintenance, can you try 
>> with the
>> option "Ignore quorum checks" enabled?

>>> Meanwhile - I don't know why - one of my two gluster volume seems UP 
>>> (green) on
>>> the GUI. So, now only one is down.

>>> I need help. What can I do?

>>> Thanks in advance,

>>> Regards,

>>> Tibor

>>> - 2018. máj.. 23., 21:09, Demeter Tibor < [ 
>>> mailto:tdeme...@itsmart.hu |
>>> tdeme...@itsmart.hu ] > írta:

 Hi,

 I've updated again to the latest version, but there are no changes. 
 All of
 bricks on my first node are down in the GUI (in console are ok)
 An Interesting thing, the "Self-Heal info" column show "OK" for all 
 hosts and
 all bricks, but "Space used" column is zero for all hosts/bricks.
 Can I force remove and re-add my host to cluster awhile it is a 
 gluster member?
 Is it safe ?
 What can I do?

 I haven't update other hosts while gluster not working fine, or the 
 GUI does not
 detect . So my other hosts is remained 4.1 yet :(

 Thanks in advance,

 Regards

 Tibor

 - 2018. máj.. 23., 14:45, Denis Chapligin < [ 
 mailto:dchap...@redhat.com |
 dchap...@redhat.com ] > írta:

> Hello!

> On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor < [ 
> mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] > wrote:

>> Is there any changes with this bug?

>> Still I haven't finish my upgrade process that i've started on 9th 
>> may:(

>> Please help me if you can.

> Looks like all required patches are already merged, so could you 
> please to
> update your engine again to the latest night build?

 ___
 Users mailing list -- [ mailto:users@ovirt.org | users@ovirt.org ]
 To unsubscribe send an email to [ mailto:users-le...@ovirt.org |
 users-le...@ovirt.org ]

>>> ___
>>> Users mailing list -- [ mailto:users@ovirt.org | users@ovirt.org ]

[ovirt-users] Re: Gluster quorum

2018-05-29 Thread Jim Kusznir
I had the same problem when I upgraded to 4.2. I found that if I went to
the brick in the UI and selected it, there was a "Start" button in the
upper-right of the GUI. Clicking that resolved this problem a few minutes
later.

I had to repeat this for each volume that showed a brick down where that
brick was not actually down.
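(If that button isn't available, I believe the rough CLI equivalent is a forced
start, which should only respawn brick processes that aren't running; just a
sketch, the volume name is a placeholder:)

  gluster volume start <volname> force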

--Jim

On Tue, May 29, 2018 at 6:34 AM, Demeter Tibor  wrote:

> Hi,
>
> I've successfully upgraded my hosts and I could raise the cluster level to
> 4.2.
> Everything seems fine, but the monitoring problem does not resolved. My
> bricks on first node are shown down (red) , but the glusterfs working fine
> (I verified in terminal).
>
> I've attached my engine.log.
>
> Thanks in advance,
>
> R,
> Tibor
>
> - 2018. máj.. 28., 14:59, Demeter Tibor  írta:
>
> Hi,
> Ok I will try it.
>
> In this case, is it possible to remove and re-add a host that member of HA
> gluster ? This is an another task, but I need to separate my gluster
> network from my ovirtmgmt network.
> What is the recommended way for do this?
>
> It is not important now, but I need to do in future.
>
> I will attach my engine.log after upgrade my host.
>
> Thanks,
> Regards.
>
> Tibor
>
>
> - 2018. máj.. 28., 14:44, Sahina Bose  írta:
>
>
>
> On Mon, May 28, 2018 at 4:47 PM, Demeter Tibor 
> wrote:
>
>> Dear Sahina,
>>
>> Yes, exactly. I can check that check box, but I don't know how is safe
>> that. Is it safe?
>>
>
> It is safe - if you can ensure that only one host is put into maintenance
> at a time.
>
>>
>> I want to upgrade all of my host. If it will done, then the monitoring
>> will work perfectly?
>>
>
> If it does not please provide engine.log again once you've upgraded all
> the hosts.
>
>
>> Thanks.
>> R.
>>
>> Tibor
>>
>>
>>
>> - 2018. máj.. 28., 10:09, Sahina Bose  írta:
>>
>>
>>
>> On Mon, May 28, 2018 at 1:06 PM, Demeter Tibor 
>> wrote:
>>
>>> Hi,
>>>
>>> Somebody could answer to my question please?
>>> It is very important for me, I could no finish my upgrade process (from
>>> 4.1 to 4.2) since 9th May!
>>>
>>
>> Can you explain how the upgrade process is blocked due to the monitoring?
>> If it's because you cannot move the host to maintenance, can you try with
>> the option "Ignore quorum checks" enabled?
>>
>>
>>> Meanwhile - I don't know why - one of my two gluster volume seems UP
>>> (green) on the GUI. So, now only one is down.
>>>
>>> I need help. What can I do?
>>>
>>> Thanks in advance,
>>>
>>> Regards,
>>>
>>> Tibor
>>>
>>>
>>> - 2018. máj.. 23., 21:09, Demeter Tibor  írta:
>>>
>>> Hi,
>>>
>>> I've updated again to the latest version, but there are no changes. All
>>> of bricks on my first node are down in the GUI (in console are ok)
>>> An Interesting thing, the "Self-Heal info" column show "OK" for all
>>> hosts and all bricks, but "Space used" column is zero for all hosts/bricks.
>>> Can I force remove and re-add my host to cluster awhile it is a gluster
>>> member? Is it safe ?
>>> What can I do?
>>>
>>> I haven't update other hosts while gluster not working fine, or the GUI
>>> does not detect . So my other hosts is remained 4.1 yet :(
>>>
>>> Thanks in advance,
>>>
>>> Regards
>>>
>>> Tibor
>>>
>>> - 2018. máj.. 23., 14:45, Denis Chapligin 
>>> írta:
>>>
>>> Hello!
>>>
>>> On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor 
>>> wrote:
>>>

 Is there any changes with this bug?

 Still I haven't finish my upgrade process that i've started on 9th may:(

 Please help me if you can.


>>>
>>> Looks like all required patches are already merged, so could you please
>>> to update your engine again to the latest night build?
>>>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>>> guidelines/
>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
>>> message/MRAAPZSRIXLAJZBV6TRDXXK7R2ISPSDK/
>>>
>>>
>>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/OWA2I6AFZPO56Z2N6D25HUHLW6CGOUWL/
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: 

[ovirt-users] Re: Gluster quorum

2018-05-28 Thread Demeter Tibor
Hi,
OK, I will try it.

In that case, is it possible to remove and re-add a host that is a member of
the HA gluster? This is another task, but I need to separate my gluster network
from my ovirtmgmt network.
What is the recommended way to do this?

It is not important now, but I will need to do it in the future.

I will attach my engine.log after I upgrade my host.

Thanks,
Regards.

Tibor

- On May 28, 2018, 14:44, Sahina Bose wrote:

> On Mon, May 28, 2018 at 4:47 PM, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
> |
> tdeme...@itsmart.hu ] > wrote:

>> Dear Sahina,

>> Yes, exactly. I can check that check box, but I don't know how is safe that. 
>> Is
>> it safe?

> It is safe - if you can ensure that only one host is put into maintenance at a
> time.

>> I want to upgrade all of my host. If it will done, then the monitoring will 
>> work
>> perfectly?

> If it does not please provide engine.log again once you've upgraded all the
> hosts.

>> Thanks.
>> R.

>> Tibor

>> - 2018. máj.. 28., 10:09, Sahina Bose < [ mailto:sab...@redhat.com |
>> sab...@redhat.com ] > írta:

>>> On Mon, May 28, 2018 at 1:06 PM, Demeter Tibor < [ 
>>> mailto:tdeme...@itsmart.hu |
>>> tdeme...@itsmart.hu ] > wrote:

 Hi,

 Somebody could answer to my question please?
 It is very important for me, I could no finish my upgrade process (from 
 4.1 to
 4.2) since 9th May!

>>> Can you explain how the upgrade process is blocked due to the monitoring?
>>> If it's because you cannot move the host to maintenance, can you try with 
>>> the
>>> option "Ignore quorum checks" enabled?

 Meanwhile - I don't know why - one of my two gluster volume seems UP 
 (green) on
 the GUI. So, now only one is down.

 I need help. What can I do?

 Thanks in advance,

 Regards,

 Tibor

 - 2018. máj.. 23., 21:09, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
 |
 tdeme...@itsmart.hu ] > írta:

> Hi,

> I've updated again to the latest version, but there are no changes. All of
> bricks on my first node are down in the GUI (in console are ok)
> An Interesting thing, the "Self-Heal info" column show "OK" for all hosts 
> and
> all bricks, but "Space used" column is zero for all hosts/bricks.
> Can I force remove and re-add my host to cluster awhile it is a gluster 
> member?
> Is it safe ?
> What can I do?

> I haven't update other hosts while gluster not working fine, or the GUI 
> does not
> detect . So my other hosts is remained 4.1 yet :(

> Thanks in advance,

> Regards

> Tibor

> - 2018. máj.. 23., 14:45, Denis Chapligin < [ 
> mailto:dchap...@redhat.com |
> dchap...@redhat.com ] > írta:

>> Hello!

>> On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Is there any changes with this bug?

>>> Still I haven't finish my upgrade process that i've started on 9th may:(

>>> Please help me if you can.

>> Looks like all required patches are already merged, so could you please 
>> to
>> update your engine again to the latest night build?

> ___
> Users mailing list -- [ mailto:users@ovirt.org | users@ovirt.org ]
> To unsubscribe send an email to [ mailto:users-le...@ovirt.org |
> users-le...@ovirt.org ]

 ___
 Users mailing list -- [ mailto:users@ovirt.org | users@ovirt.org ]
 To unsubscribe send an email to [ mailto:users-le...@ovirt.org |
 users-le...@ovirt.org ]
 Privacy Statement: [ https://www.ovirt.org/site/privacy-policy/ |
 https://www.ovirt.org/site/privacy-policy/ ]
 oVirt Code of Conduct: [
 https://www.ovirt.org/community/about/community-guidelines/ |
 https://www.ovirt.org/community/about/community-guidelines/ ]
 List Archives: [
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRAAPZSRIXLAJZBV6TRDXXK7R2ISPSDK/
 |
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRAAPZSRIXLAJZBV6TRDXXK7R2ISPSDK/
 ]
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OWA2I6AFZPO56Z2N6D25HUHLW6CGOUWL/


[ovirt-users] Re: Gluster quorum

2018-05-28 Thread Sahina Bose
On Mon, May 28, 2018 at 4:47 PM, Demeter Tibor  wrote:

> Dear Sahina,
>
> Yes, exactly. I can check that check box, but I don't know how is safe
> that. Is it safe?
>

It is safe - if you can ensure that only one host is put into maintenance
at a time.

>
> I want to upgrade all of my host. If it will done, then the monitoring
> will work perfectly?
>

If it does not, please provide engine.log again once you've upgraded all the
hosts.


> Thanks.
> R.
>
> Tibor
>
>
>
> - 2018. máj.. 28., 10:09, Sahina Bose  írta:
>
>
>
> On Mon, May 28, 2018 at 1:06 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> Somebody could answer to my question please?
>> It is very important for me, I could no finish my upgrade process (from
>> 4.1 to 4.2) since 9th May!
>>
>
> Can you explain how the upgrade process is blocked due to the monitoring?
> If it's because you cannot move the host to maintenance, can you try with
> the option "Ignore quorum checks" enabled?
>
>
>> Meanwhile - I don't know why - one of my two gluster volume seems UP
>> (green) on the GUI. So, now only one is down.
>>
>> I need help. What can I do?
>>
>> Thanks in advance,
>>
>> Regards,
>>
>> Tibor
>>
>>
>> - 2018. máj.. 23., 21:09, Demeter Tibor  írta:
>>
>> Hi,
>>
>> I've updated again to the latest version, but there are no changes. All
>> of bricks on my first node are down in the GUI (in console are ok)
>> An Interesting thing, the "Self-Heal info" column show "OK" for all hosts
>> and all bricks, but "Space used" column is zero for all hosts/bricks.
>> Can I force remove and re-add my host to cluster awhile it is a gluster
>> member? Is it safe ?
>> What can I do?
>>
>> I haven't update other hosts while gluster not working fine, or the GUI
>> does not detect . So my other hosts is remained 4.1 yet :(
>>
>> Thanks in advance,
>>
>> Regards
>>
>> Tibor
>>
>> - 2018. máj.. 23., 14:45, Denis Chapligin  írta:
>>
>> Hello!
>>
>> On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor 
>> wrote:
>>
>>>
>>> Is there any changes with this bug?
>>>
>>> Still I haven't finish my upgrade process that i've started on 9th may:(
>>>
>>> Please help me if you can.
>>>
>>>
>>
>> Looks like all required patches are already merged, so could you please
>> to update your engine again to the latest night build?
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
>> message/MRAAPZSRIXLAJZBV6TRDXXK7R2ISPSDK/
>>
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6FFC37LB6CNDTXW2PYFLIYRW7OVFRYW/


[ovirt-users] Re: Gluster quorum

2018-05-28 Thread Demeter Tibor
Dear Sahina,

Yes, exactly. I can check that check box, but I don't know how safe that is. Is
it safe?

I want to upgrade all of my hosts. Once that is done, will the monitoring
work perfectly?

Thanks.
R.

Tibor

- On May 28, 2018, 10:09, Sahina Bose wrote:

> On Mon, May 28, 2018 at 1:06 PM, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
> |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> Somebody could answer to my question please?
>> It is very important for me, I could no finish my upgrade process (from 4.1 
>> to
>> 4.2) since 9th May!

> Can you explain how the upgrade process is blocked due to the monitoring?
> If it's because you cannot move the host to maintenance, can you try with the
> option "Ignore quorum checks" enabled?

>> Meanwhile - I don't know why - one of my two gluster volume seems UP (green) 
>> on
>> the GUI. So, now only one is down.

>> I need help. What can I do?

>> Thanks in advance,

>> Regards,

>> Tibor

>> - 2018. máj.. 23., 21:09, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > írta:

>>> Hi,

>>> I've updated again to the latest version, but there are no changes. All of
>>> bricks on my first node are down in the GUI (in console are ok)
>>> An Interesting thing, the "Self-Heal info" column show "OK" for all hosts 
>>> and
>>> all bricks, but "Space used" column is zero for all hosts/bricks.
>>> Can I force remove and re-add my host to cluster awhile it is a gluster 
>>> member?
>>> Is it safe ?
>>> What can I do?

>>> I haven't update other hosts while gluster not working fine, or the GUI 
>>> does not
>>> detect . So my other hosts is remained 4.1 yet :(

>>> Thanks in advance,

>>> Regards

>>> Tibor

>>> - 2018. máj.. 23., 14:45, Denis Chapligin < [ 
>>> mailto:dchap...@redhat.com |
>>> dchap...@redhat.com ] > írta:

 Hello!

 On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor < [ 
 mailto:tdeme...@itsmart.hu |
 tdeme...@itsmart.hu ] > wrote:

> Is there any changes with this bug?

> Still I haven't finish my upgrade process that i've started on 9th may:(

> Please help me if you can.

 Looks like all required patches are already merged, so could you please to
 update your engine again to the latest night build?

>>> ___
>>> Users mailing list -- [ mailto:users@ovirt.org | users@ovirt.org ]
>>> To unsubscribe send an email to [ mailto:users-le...@ovirt.org |
>>> users-le...@ovirt.org ]

>> ___
>> Users mailing list -- [ mailto:users@ovirt.org | users@ovirt.org ]
>> To unsubscribe send an email to [ mailto:users-le...@ovirt.org |
>> users-le...@ovirt.org ]
>> Privacy Statement: [ https://www.ovirt.org/site/privacy-policy/ |
>> https://www.ovirt.org/site/privacy-policy/ ]
>> oVirt Code of Conduct: [
>> https://www.ovirt.org/community/about/community-guidelines/ |
>> https://www.ovirt.org/community/about/community-guidelines/ ]
>> List Archives: [
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRAAPZSRIXLAJZBV6TRDXXK7R2ISPSDK/
>> |
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRAAPZSRIXLAJZBV6TRDXXK7R2ISPSDK/
>> ]
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PORPMZS2OYP2NCKVUJACTVO2ILIYUAQJ/


[ovirt-users] Re: Gluster quorum

2018-05-28 Thread Sahina Bose
On Mon, May 28, 2018 at 1:06 PM, Demeter Tibor  wrote:

> Hi,
>
> Somebody could answer to my question please?
> It is very important for me, I could no finish my upgrade process (from
> 4.1 to 4.2) since 9th May!
>

Can you explain how the upgrade process is blocked due to the monitoring?
If it's because you cannot move the host to maintenance, can you try with
the option "Ignore quorum checks" enabled?


> Meanwhile - I don't know why - one of my two gluster volume seems UP
> (green) on the GUI. So, now only one is down.
>
> I need help. What can I do?
>
> Thanks in advance,
>
> Regards,
>
> Tibor
>
>
> - 2018. máj.. 23., 21:09, Demeter Tibor  írta:
>
> Hi,
>
> I've updated again to the latest version, but there are no changes. All of
> bricks on my first node are down in the GUI (in console are ok)
> An Interesting thing, the "Self-Heal info" column show "OK" for all hosts
> and all bricks, but "Space used" column is zero for all hosts/bricks.
> Can I force remove and re-add my host to cluster awhile it is a gluster
> member? Is it safe ?
> What can I do?
>
> I haven't update other hosts while gluster not working fine, or the GUI
> does not detect . So my other hosts is remained 4.1 yet :(
>
> Thanks in advance,
>
> Regards
>
> Tibor
>
> - 2018. máj.. 23., 14:45, Denis Chapligin  írta:
>
> Hello!
>
> On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor 
> wrote:
>
>>
>> Is there any changes with this bug?
>>
>> Still I haven't finish my upgrade process that i've started on 9th may:(
>>
>> Please help me if you can.
>>
>>
>
> Looks like all required patches are already merged, so could you please to
> update your engine again to the latest night build?
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/MRAAPZSRIXLAJZBV6TRDXXK7R2ISPSDK/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OLDBXYCQZCJJ4KFDYRTE6VTH4L4OB5VQ/


[ovirt-users] Re: Gluster quorum

2018-05-28 Thread Demeter Tibor
Hi,

Could somebody please answer my question?
It is very important for me; I have not been able to finish my upgrade process
(from 4.1 to 4.2) since 9th May!

Meanwhile - I don't know why - one of my two gluster volumes now shows UP
(green) in the GUI. So now only one is down.

I need help. What can I do?

Thanks in advance,

Regards,

Tibor

- On May 23, 2018, 21:09, Demeter Tibor wrote:

> Hi,

> I've updated again to the latest version, but there are no changes. All of
> bricks on my first node are down in the GUI (in console are ok)
> An Interesting thing, the "Self-Heal info" column show "OK" for all hosts and
> all bricks, but "Space used" column is zero for all hosts/bricks.
> Can I force remove and re-add my host to cluster awhile it is a gluster 
> member?
> Is it safe ?
> What can I do?

> I haven't update other hosts while gluster not working fine, or the GUI does 
> not
> detect . So my other hosts is remained 4.1 yet :(

> Thanks in advance,

> Regards

> Tibor

> - 2018. máj.. 23., 14:45, Denis Chapligin  írta:

>> Hello!

>> On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Is there any changes with this bug?

>>> Still I haven't finish my upgrade process that i've started on 9th may:(

>>> Please help me if you can.

>> Looks like all required patches are already merged, so could you please to
>> update your engine again to the latest night build?

> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MRAAPZSRIXLAJZBV6TRDXXK7R2ISPSDK/


[ovirt-users] Re: Gluster quorum

2018-05-23 Thread Demeter Tibor
Hi,

I've updated again to the latest version, but there are no changes. All of the
bricks on my first node are shown as down in the GUI (in the console they are
ok; the commands I used to check are sketched below).
An interesting thing: the "Self-Heal info" column shows "OK" for all hosts and
all bricks, but the "Space used" column is zero for all hosts/bricks.
Can I force-remove and re-add my host to the cluster while it is a gluster
member? Is it safe?
What can I do?
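(Nothing fancy - the console checks, with 'volume1' as an example volume name:)

  gluster volume status
  gluster volume info volume1
  gluster volume heal volume1 info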

I haven't updated the other hosts while gluster is not working fine, or while
the GUI does not detect it. So my other hosts remain on 4.1 for now :(

Thanks in advance, 

Regards 

Tibor 

- On May 23, 2018, 14:45, Denis Chapligin wrote:

> Hello!

> On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor < [ 
> mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] > wrote:

>> Is there any changes with this bug?

>> Still I haven't finish my upgrade process that i've started on 9th may:(

>> Please help me if you can.

> Looks like all required patches are already merged, so could you please to
> update your engine again to the latest night build?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Gluster quorum

2018-05-23 Thread Denis Chaplygin
Hello!

On Tue, May 22, 2018 at 11:10 AM, Demeter Tibor  wrote:

>
> Is there any changes with this bug?
>
> Still I haven't finish my upgrade process that i've started on 9th may:(
>
> Please help me if you can.
>
>

Looks like all the required patches are already merged, so could you please
update your engine again to the latest nightly build?
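(For reference, the rough sequence, on the engine VM only; this assumes the
nightly snapshot repo has already been enabled, which is not shown here:)

  # update the setup packages, then run the upgrade tool
  yum update "ovirt-*-setup*"
  engine-setup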
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Gluster quorum

2018-05-22 Thread Demeter Tibor
Dear Sahina,

Are there any changes with this bug?

I still haven't finished the upgrade process that I started on 9th May :(

Please help me if you can.

Thanks

Tibor

- On May 18, 2018, 9:29, Demeter Tibor wrote:

> Hi,

> I have to update the engine again?

> Thanks,

> R
> Tibor

> - 2018. máj.. 18., 6:47, Sahina Bose  írta:

>> Thanks for reporting this. [ https://gerrit.ovirt.org/91375 |
>> https://gerrit.ovirt.org/91375 ] fixes this. I've re-opened bug [
>> https://bugzilla.redhat.com/show_bug.cgi?id=1574508 |
>> https://bugzilla.redhat.com/show_bug.cgi?id=1574508 ]

>> On Thu, May 17, 2018 at 10:12 PM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Hi,

>>> 4.2.4-0.0.master.20180515183442.git00e1340.el7.centos

>>> Firstly, I did a yum update "ovirt-*-setup*"
>>> second, I have ran engine-setup to upgrade.

>>> I didn't remove the old repos, just installed the nightly repo.

>>> Thank you again,

>>> Regards,

>>> Tibor

>>> - 2018. máj.. 17., 15:02, Sahina Bose < [ mailto:sab...@redhat.com |
>>> sab...@redhat.com ] > írta:

 It doesn't look like the patch was applied. Still see the same error in
 engine.log
 "Error while refreshing brick statuses for volume 'volume1' of cluster 
 'C6220':
 null"\

 Did you use engine-setup to upgrade? What's the version of ovirt-engine
 currently installed?

 On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor < [ 
 mailto:tdeme...@itsmart.hu |
 tdeme...@itsmart.hu ] > wrote:

> Hi,

> sure,

> Thank you for your time!

> R
> Tibor

> - 2018. máj.. 17., 12:19, Sahina Bose < [ mailto:sab...@redhat.com |
> sab...@redhat.com ] > írta:

>> [+users]

>> Can you provide the engine.log to see why the monitoring is not working 
>> here.
>> thanks!

>> On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Hi,

>>> Meanwhile, I did the upgrade engine, but the gluster state is same on 
>>> my first
>>> node.
>>> I've attached some screenshot of my problem.

>>> Thanks

>>> Tibor

>>> - 2018. máj.. 16., 10:16, Demeter Tibor < [ 
>>> mailto:tdeme...@itsmart.hu |
>>> tdeme...@itsmart.hu ] > írta Hi,

 If 4.3.4 will release, i just have to remove the nightly repo and 
 update to
 stable?

 I'm sorry for my terrible English, I try to explain what was my 
 problem with
 update.
 I'm upgraded from 4.1.8.

 I followed up the official hosted-engine update documentation, that 
 was not
 clear me, because it has referenced to a lot of old thing (i think).
 [ https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ |
 https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ ]
 [
 https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
 |
 https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
 ]

 Maybe it need to update, because I had a lot of question under upgrade 
 and I was
 not sure in all of necessary steps. For example, If I need to 
 installing the
 new, 4.2 repo on the hosts, then need to remove the old repo from that?
 Why I need to do a" yum update -y" on hosts, meanwhile there is an 
 "Updatehost"
 menu in the GUI? So, maybe it outdated.
 Since upgrade hosted engine, and the first node, I have problems with 
 gluster.
 It seems to working fine if you check it from console "gluster volume 
 status,
 etc" but not on the Gui, because now it yellow, and the brick reds in 
 the first
 node.

 Previously I did a mistake with glusterfs, my gluster config was 
 wrong. I have
 corrected them, but it did not helped to me,gluster bricks are reds on 
 my first
 node yet

 Now I try to upgrade to nightly, but I'm affraid, because it a living,
 productive system, and I don't have downtime. I hope it will help me.

 Thanks for all,

 Regards,
 Tibor Demeter

 - 2018. máj.. 16., 9:58, Sahina Bose < [ mailto:sab...@redhat.com |
 sab...@redhat.com ] > írta:

> On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor < [ 
> mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> is it a different, unstable repo? I have a productive cluster, how 
>> is safe that?
>> I don't have any experience with nightly build. How can I use this? 
>> It have to
>> install to the engine VM or all of my hosts?
>> Thanks in advance for help me..

> Only on the engine VM.

> Regarding 

[ovirt-users] Re: Gluster quorum

2018-05-18 Thread Demeter Tibor
Hi,

Do I have to update the engine again?

Thanks,

R
Tibor

- On May 18, 2018, 6:47, Sahina Bose wrote:

> Thanks for reporting this. [ https://gerrit.ovirt.org/91375 |
> https://gerrit.ovirt.org/91375 ] fixes this. I've re-opened bug [
> https://bugzilla.redhat.com/show_bug.cgi?id=1574508 |
> https://bugzilla.redhat.com/show_bug.cgi?id=1574508 ]

> On Thu, May 17, 2018 at 10:12 PM, Demeter Tibor < [ 
> mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> 4.2.4-0.0.master.20180515183442.git00e1340.el7.centos

>> Firstly, I did a yum update "ovirt-*-setup*"
>> second, I have ran engine-setup to upgrade.

>> I didn't remove the old repos, just installed the nightly repo.

>> Thank you again,

>> Regards,

>> Tibor

>> - 2018. máj.. 17., 15:02, Sahina Bose < [ mailto:sab...@redhat.com |
>> sab...@redhat.com ] > írta:

>>> It doesn't look like the patch was applied. Still see the same error in
>>> engine.log
>>> "Error while refreshing brick statuses for volume 'volume1' of cluster 
>>> 'C6220':
>>> null"\

>>> Did you use engine-setup to upgrade? What's the version of ovirt-engine
>>> currently installed?

>>> On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor < [ 
>>> mailto:tdeme...@itsmart.hu |
>>> tdeme...@itsmart.hu ] > wrote:

 Hi,

 sure,

 Thank you for your time!

 R
 Tibor

 - 2018. máj.. 17., 12:19, Sahina Bose < [ mailto:sab...@redhat.com |
 sab...@redhat.com ] > írta:

> [+users]

> Can you provide the engine.log to see why the monitoring is not working 
> here.
> thanks!

> On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor < [ 
> mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> Meanwhile, I did the upgrade engine, but the gluster state is same on my 
>> first
>> node.
>> I've attached some screenshot of my problem.

>> Thanks

>> Tibor

>> - 2018. máj.. 16., 10:16, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > írta Hi,

>>> If 4.3.4 will release, i just have to remove the nightly repo and 
>>> update to
>>> stable?

>>> I'm sorry for my terrible English, I try to explain what was my problem 
>>> with
>>> update.
>>> I'm upgraded from 4.1.8.

>>> I followed up the official hosted-engine update documentation, that was 
>>> not
>>> clear me, because it has referenced to a lot of old thing (i think).
>>> [ https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ |
>>> https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ ]
>>> [
>>> https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
>>> |
>>> https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
>>> ]

>>> Maybe it need to update, because I had a lot of question under upgrade 
>>> and I was
>>> not sure in all of necessary steps. For example, If I need to 
>>> installing the
>>> new, 4.2 repo on the hosts, then need to remove the old repo from that?
>>> Why I need to do a" yum update -y" on hosts, meanwhile there is an 
>>> "Updatehost"
>>> menu in the GUI? So, maybe it outdated.
>>> Since upgrade hosted engine, and the first node, I have problems with 
>>> gluster.
>>> It seems to working fine if you check it from console "gluster volume 
>>> status,
>>> etc" but not on the Gui, because now it yellow, and the brick reds in 
>>> the first
>>> node.

>>> Previously I did a mistake with glusterfs, my gluster config was wrong. 
>>> I have
>>> corrected them, but it did not helped to me,gluster bricks are reds on 
>>> my first
>>> node yet

>>> Now I try to upgrade to nightly, but I'm affraid, because it a living,
>>> productive system, and I don't have downtime. I hope it will help me.

>>> Thanks for all,

>>> Regards,
>>> Tibor Demeter

>>> - 2018. máj.. 16., 9:58, Sahina Bose < [ mailto:sab...@redhat.com |
>>> sab...@redhat.com ] > írta:

 On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor < [ 
 mailto:tdeme...@itsmart.hu |
 tdeme...@itsmart.hu ] > wrote:

> Hi,

> is it a different, unstable repo? I have a productive cluster, how is 
> safe that?
> I don't have any experience with nightly build. How can I use this? 
> It have to
> install to the engine VM or all of my hosts?
> Thanks in advance for help me..

 Only on the engine VM.

 Regarding stability - it passes CI so relatively stable, beyond that 
 there are
 no guarantees.

 What's the specific problem you're facing with update? Can you 
 elaborate?

> Regards,

> Tibor

> - 2018. máj.. 15., 9:58, Demeter Tibor < [ 
> mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] 

[ovirt-users] Re: Gluster quorum

2018-05-17 Thread Sahina Bose
Thanks for reporting this. https://gerrit.ovirt.org/91375 fixes this. I've
re-opened bug https://bugzilla.redhat.com/show_bug.cgi?id=1574508

On Thu, May 17, 2018 at 10:12 PM, Demeter Tibor  wrote:

> Hi,
>
> 4.2.4-0.0.master.20180515183442.git00e1340.el7.centos
>
> Firstly, I did a yum update "ovirt-*-setup*"
> second, I have ran engine-setup to upgrade.
>
> I didn't remove the old repos, just installed the nightly repo.
>
> Thank you again,
>
> Regards,
>
> Tibor
>
> - 2018. máj.. 17., 15:02, Sahina Bose  írta:
>
> It doesn't look like the patch was applied. Still see the same error in
> engine.log
> "Error while refreshing brick statuses for volume 'volume1' of cluster
> 'C6220': null"\
>
> Did you use engine-setup to upgrade? What's the version of ovirt-engine
> currently installed?
>
> On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> sure,
>>
>> Thank you for your time!
>>
>> R
>> Tibor
>>
>> - 2018. máj.. 17., 12:19, Sahina Bose  írta:
>>
>> [+users]
>>
>> Can you provide the engine.log to see why the monitoring is not working
>> here. thanks!
>>
>> On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor 
>> wrote:
>>
>>> Hi,
>>>
>>> Meanwhile, I did the upgrade engine, but the gluster state is same on my
>>> first node.
>>> I've attached some screenshot of my problem.
>>>
>>> Thanks
>>>
>>> Tibor
>>>
>>>
>>>
>>> - 2018. máj.. 16., 10:16, Demeter Tibor  írta
>>> Hi,
>>>
>>>
>>> If 4.3.4 will release, i just have to remove the nightly repo and update
>>> to stable?
>>>
>>> I'm sorry for my terrible English, I try to explain what was my problem
>>> with update.
>>> I'm upgraded from 4.1.8.
>>>
>>> I followed up the official hosted-engine update documentation, that was
>>> not clear me, because it has referenced to a lot of old thing (i think).
>>> https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/
>>> https://www.ovirt.org/documentation/how-to/hosted-
>>> engine/#upgrade-hosted-engine
>>>
>>> Maybe it need to update, because I had a lot of question under upgrade
>>> and I was not sure in all of necessary steps. For example, If I need to
>>> installing the new, 4.2 repo on the hosts, then need to remove the old repo
>>> from that?
>>> Why I need to do a" yum update -y" on hosts, meanwhile there is an
>>> "Updatehost" menu in the GUI? So, maybe it outdated.
>>> Since upgrade hosted engine, and the first node, I have problems with
>>> gluster. It seems to working fine if you check it from console "gluster
>>> volume status, etc" but not on the Gui, because now it yellow, and the
>>> brick reds in the first node.
>>>
>>> Previously I did a mistake with glusterfs, my gluster config was wrong.
>>> I have corrected them, but it did not helped to me,gluster bricks are reds
>>> on my first node yet
>>>
>>>
>>> Now I try to upgrade to nightly, but I'm affraid, because it a living,
>>> productive system, and I don't have downtime. I hope it will help me.
>>>
>>> Thanks for all,
>>>
>>> Regards,
>>> Tibor Demeter
>>>
>>>
>>>
>>> - 2018. máj.. 16., 9:58, Sahina Bose  írta:
>>>
>>>
>>>
>>> On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor 
>>> wrote:
>>>
 Hi,

 is it a different, unstable repo? I have a productive cluster, how is
 safe that?
 I don't have any experience with nightly build. How can I use this? It
 have to install to the engine VM or all of my hosts?
 Thanks in advance for help me..

>>>
>>> Only on the engine VM.
>>>
>>> Regarding stability - it passes CI so relatively stable, beyond that
>>> there are no guarantees.
>>>
>>> What's the specific problem you're facing with update? Can you elaborate?
>>>
>>>
 Regards,

 Tibor

 - 2018. máj.. 15., 9:58, Demeter Tibor  írta:

 Hi,

 Could you explain how can I use this patch?

 R,
 Tibor


 - 2018. máj.. 14., 11:18, Demeter Tibor  írta:

 Hi,

 Sorry for my question, but can you tell me please how can I use this
 patch?

 Thanks,
 Regards,
 Tibor
 - 2018. máj.. 14., 10:47, Sahina Bose  írta:



 On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor 
 wrote:

> Hi,
>
> Could someone help me please ? I can't finish my upgrade process.
>

 https://gerrit.ovirt.org/91164 should fix the error you're facing.

 Can you elaborate why this is affecting the upgrade process?


> Thanks
> R
> Tibor
>
>
>
> - 2018. máj.. 10., 12:51, Demeter Tibor 
> írta:
>
> Hi,
>
> I've attached the vdsm and supervdsm logs. But I don't have engine.log
> here, because that is on hosted engine vm. Should I send that ?
>

[ovirt-users] Re: Gluster quorum

2018-05-17 Thread Demeter Tibor
Hi,

4.2.4-0.0.master.20180515183442.git00e1340.el7.centos

First, I did a yum update "ovirt-*-setup*";
second, I ran engine-setup to perform the upgrade.

I didn't remove the old repos, I just installed the nightly repo.

Thank you again,

Regards,

Tibor

- On May 17, 2018, 15:02, Sahina Bose wrote:

> It doesn't look like the patch was applied. Still see the same error in
> engine.log
> "Error while refreshing brick statuses for volume 'volume1' of cluster 
> 'C6220':
> null"\

> Did you use engine-setup to upgrade? What's the version of ovirt-engine
> currently installed?

> On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
> |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> sure,

>> Thank you for your time!

>> R
>> Tibor

>> - 2018. máj.. 17., 12:19, Sahina Bose < [ mailto:sab...@redhat.com |
>> sab...@redhat.com ] > írta:

>>> [+users]

>>> Can you provide the engine.log to see why the monitoring is not working 
>>> here.
>>> thanks!

>>> On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor < [ 
>>> mailto:tdeme...@itsmart.hu |
>>> tdeme...@itsmart.hu ] > wrote:

 Hi,

 Meanwhile, I did the upgrade engine, but the gluster state is same on my 
 first
 node.
 I've attached some screenshot of my problem.

 Thanks

 Tibor

 - 2018. máj.. 16., 10:16, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
 |
 tdeme...@itsmart.hu ] > írta Hi,

> If 4.3.4 will release, i just have to remove the nightly repo and update 
> to
> stable?

> I'm sorry for my terrible English, I try to explain what was my problem 
> with
> update.
> I'm upgraded from 4.1.8.

> I followed up the official hosted-engine update documentation, that was 
> not
> clear me, because it has referenced to a lot of old thing (i think).
> [ https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ |
> https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/ ]
> [
> https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
> |
> https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
> ]

> Maybe it need to update, because I had a lot of question under upgrade 
> and I was
> not sure in all of necessary steps. For example, If I need to installing 
> the
> new, 4.2 repo on the hosts, then need to remove the old repo from that?
> Why I need to do a" yum update -y" on hosts, meanwhile there is an 
> "Updatehost"
> menu in the GUI? So, maybe it outdated.
> Since upgrade hosted engine, and the first node, I have problems with 
> gluster.
> It seems to working fine if you check it from console "gluster volume 
> status,
> etc" but not on the Gui, because now it yellow, and the brick reds in the 
> first
> node.

> Previously I did a mistake with glusterfs, my gluster config was wrong. I 
> have
> corrected them, but it did not helped to me,gluster bricks are reds on my 
> first
> node yet

> Now I try to upgrade to nightly, but I'm affraid, because it a living,
> productive system, and I don't have downtime. I hope it will help me.

> Thanks for all,

> Regards,
> Tibor Demeter

> - 2018. máj.. 16., 9:58, Sahina Bose < [ mailto:sab...@redhat.com |
> sab...@redhat.com ] > írta:

>> On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Hi,

>>> is it a different, unstable repo? I have a productive cluster, how is 
>>> safe that?
>>> I don't have any experience with nightly build. How can I use this? It 
>>> have to
>>> install to the engine VM or all of my hosts?
>>> Thanks in advance for help me..

>> Only on the engine VM.

>> Regarding stability - it passes CI so relatively stable, beyond that 
>> there are
>> no guarantees.

>> What's the specific problem you're facing with update? Can you elaborate?

>>> Regards,

>>> Tibor

>>> - 2018. máj.. 15., 9:58, Demeter Tibor < [ 
>>> mailto:tdeme...@itsmart.hu |
>>> tdeme...@itsmart.hu ] > írta:

 Hi,

 Could you explain how can I use this patch?

 R,
 Tibor

 - 2018. máj.. 14., 11:18, Demeter Tibor < [ 
 mailto:tdeme...@itsmart.hu |
 tdeme...@itsmart.hu ] > írta:

> Hi,

> Sorry for my question, but can you tell me please how can I use this 
> patch?

> Thanks,
> Regards,
> Tibor
> - 2018. máj.. 14., 10:47, Sahina Bose < [ 
> mailto:sab...@redhat.com |
> sab...@redhat.com ] > írta:

>> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Hi,

>>> Could someone help me please ? I can't 

[ovirt-users] Re: Gluster quorum

2018-05-17 Thread Sahina Bose
It doesn't look like the patch was applied. I still see the same error in
engine.log:
"Error while refreshing brick statuses for volume 'volume1' of cluster
'C6220': null"

Did you use engine-setup to upgrade? What's the version of ovirt-engine
currently installed?
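(For reference, a quick way to check both on the engine VM; the log path is
the usual default, adjust if yours differs:)

  rpm -q ovirt-engine
  grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log | tail -n 5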

On Thu, May 17, 2018 at 5:10 PM, Demeter Tibor  wrote:

> Hi,
>
> sure,
>
> Thank you for your time!
>
> R
> Tibor
>
> - 2018. máj.. 17., 12:19, Sahina Bose  írta:
>
> [+users]
>
> Can you provide the engine.log to see why the monitoring is not working
> here. thanks!
>
> On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> Meanwhile, I did the upgrade engine, but the gluster state is same on my
>> first node.
>> I've attached some screenshot of my problem.
>>
>> Thanks
>>
>> Tibor
>>
>>
>>
>> - 2018. máj.. 16., 10:16, Demeter Tibor  írtaHi,
>>
>>
>> If 4.3.4 will release, i just have to remove the nightly repo and update
>> to stable?
>>
>> I'm sorry for my terrible English, I try to explain what was my problem
>> with update.
>> I'm upgraded from 4.1.8.
>>
>> I followed up the official hosted-engine update documentation, that was
>> not clear me, because it has referenced to a lot of old thing (i think).
>> https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/
>> https://www.ovirt.org/documentation/how-to/hosted-
>> engine/#upgrade-hosted-engine
>>
>> Maybe it need to update, because I had a lot of question under upgrade
>> and I was not sure in all of necessary steps. For example, If I need to
>> installing the new, 4.2 repo on the hosts, then need to remove the old repo
>> from that?
>> Why I need to do a" yum update -y" on hosts, meanwhile there is an
>> "Updatehost" menu in the GUI? So, maybe it outdated.
>> Since upgrade hosted engine, and the first node, I have problems with
>> gluster. It seems to working fine if you check it from console "gluster
>> volume status, etc" but not on the Gui, because now it yellow, and the
>> brick reds in the first node.
>>
>> Previously I did a mistake with glusterfs, my gluster config was wrong. I
>> have corrected them, but it did not helped to me,gluster bricks are reds on
>> my first node yet
>>
>>
>> Now I try to upgrade to nightly, but I'm affraid, because it a living,
>> productive system, and I don't have downtime. I hope it will help me.
>>
>> Thanks for all,
>>
>> Regards,
>> Tibor Demeter
>>
>>
>>
>> - 2018. máj.. 16., 9:58, Sahina Bose  írta:
>>
>>
>>
>> On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor 
>> wrote:
>>
>>> Hi,
>>>
>>> is it a different, unstable repo? I have a productive cluster, how is
>>> safe that?
>>> I don't have any experience with nightly build. How can I use this? It
>>> have to install to the engine VM or all of my hosts?
>>> Thanks in advance for help me..
>>>
>>
>> Only on the engine VM.
>>
>> Regarding stability - it passes CI so relatively stable, beyond that
>> there are no guarantees.
>>
>> What's the specific problem you're facing with update? Can you elaborate?
>>
>>
>>> Regards,
>>>
>>> Tibor
>>>
>>> - 2018. máj.. 15., 9:58, Demeter Tibor  írta:
>>>
>>> Hi,
>>>
>>> Could you explain how can I use this patch?
>>>
>>> R,
>>> Tibor
>>>
>>>
>>> - 2018. máj.. 14., 11:18, Demeter Tibor  írta:
>>>
>>> Hi,
>>>
>>> Sorry for my question, but can you tell me please how can I use this
>>> patch?
>>>
>>> Thanks,
>>> Regards,
>>> Tibor
>>> - 2018. máj.. 14., 10:47, Sahina Bose  írta:
>>>
>>>
>>>
>>> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor 
>>> wrote:
>>>
 Hi,

 Could someone help me please ? I can't finish my upgrade process.

>>>
>>> https://gerrit.ovirt.org/91164 should fix the error you're facing.
>>>
>>> Can you elaborate why this is affecting the upgrade process?
>>>
>>>
 Thanks
 R
 Tibor



 - 2018. máj.. 10., 12:51, Demeter Tibor  írta:

 Hi,

 I've attached the vdsm and supervdsm logs. But I don't have engine.log
 here, because that is on hosted engine vm. Should I send that ?

 Thank you

 Regards,

 Tibor
 - 2018. máj.. 10., 12:30, Sahina Bose  írta:

 There's a bug here. Can you log one attaching this engine.log and also
 vdsm.log & supervdsm.log from n3.itsmart.cloud

 On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor 
 wrote:

> Hi,
>
> I found this:
>
>
> 2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.
> vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
> GetGlusterVolumeAdvancedDetailsVDSCommand,
> return: org.ovirt.engine.core.common.businessentities.gluster.
> GlusterVolumeAdvancedDetails@ca97448e, log 

[ovirt-users] Re: Gluster quorum

2018-05-17 Thread Sahina Bose
[+users]

Can you provide the engine.log to see why the monitoring is not working
here? Thanks!

On Wed, May 16, 2018 at 2:08 PM, Demeter Tibor  wrote:

> Hi,
>
> Meanwhile, I did the upgrade engine, but the gluster state is same on my
> first node.
> I've attached some screenshot of my problem.
>
> Thanks
>
> Tibor
>
>
>
> - 2018. máj.. 16., 10:16, Demeter Tibor  írtaHi,
>
>
> If 4.3.4 is released, do I just have to remove the nightly repo and update
> to stable?
>
> I'm sorry for my terrible English; I will try to explain what my problem
> with the update was.
> I upgraded from 4.1.8.
>
> I followed the official hosted-engine upgrade documentation, which was not
> clear to me, because it references a lot of old things (I think).
> https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/
> https://www.ovirt.org/documentation/how-to/hosted-
> engine/#upgrade-hosted-engine
>
> Maybe it needs updating, because I had a lot of questions during the upgrade
> and I was not sure about all of the necessary steps. For example, if I need
> to install the new 4.2 repo on the hosts, do I then need to remove the old
> repo from them?
> Why do I need to do a "yum update -y" on the hosts when there is an
> "Update host" menu in the GUI? Maybe the documentation is outdated.
> Since upgrading the hosted engine and the first node, I have problems with
> gluster. It seems to work fine when checked from the console ("gluster
> volume status", etc.), but not in the GUI, because the volume is now yellow
> and the bricks on the first node are red.
>
> Previously I made a mistake with glusterfs; my gluster config was wrong. I
> have corrected it, but that did not help me; the gluster bricks on my first
> node are still red.
>
>
> Now I am trying to upgrade to the nightly build, but I am afraid, because it
> is a live, productive system and I don't have downtime. I hope it will help.
>
> Thanks for all,
>
> Regards,
> Tibor Demeter
>
>
>
> - 2018. máj.. 16., 9:58, Sahina Bose  írta:
>
>
>
> On Wed, May 16, 2018 at 1:19 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> is it a different, unstable repo? I have a productive cluster, how is
>> safe that?
>> I don't have any experience with nightly build. How can I use this? It
>> have to install to the engine VM or all of my hosts?
>> Thanks in advance for help me..
>>
>
> Only on the engine VM.
>
> Regarding stability - it passes CI so relatively stable, beyond that there
> are no guarantees.
>
> What's the specific problem you're facing with update? Can you elaborate?
>
>
>> Regards,
>>
>> Tibor
>>
>> - 2018. máj.. 15., 9:58, Demeter Tibor  írta:
>>
>> Hi,
>>
>> Could you explain how can I use this patch?
>>
>> R,
>> Tibor
>>
>>
>> - 2018. máj.. 14., 11:18, Demeter Tibor  írta:
>>
>> Hi,
>>
>> Sorry for my question, but can you tell me please how can I use this
>> patch?
>>
>> Thanks,
>> Regards,
>> Tibor
>> - 2018. máj.. 14., 10:47, Sahina Bose  írta:
>>
>>
>>
>> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor 
>> wrote:
>>
>>> Hi,
>>>
>>> Could someone help me please ? I can't finish my upgrade process.
>>>
>>
>> https://gerrit.ovirt.org/91164 should fix the error you're facing.
>>
>> Can you elaborate why this is affecting the upgrade process?
>>
>>
>>> Thanks
>>> R
>>> Tibor
>>>
>>>
>>>
>>> - 2018. máj.. 10., 12:51, Demeter Tibor  írta:
>>>
>>> Hi,
>>>
>>> I've attached the vdsm and supervdsm logs. But I don't have engine.log
>>> here, because that is on hosted engine vm. Should I send that ?
>>>
>>> Thank you
>>>
>>> Regards,
>>>
>>> Tibor
>>> - 2018. máj.. 10., 12:30, Sahina Bose  írta:
>>>
>>> There's a bug here. Can you log one attaching this engine.log and also
>>> vdsm.log & supervdsm.log from n3.itsmart.cloud
>>>
>>> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor 
>>> wrote:
>>>
 Hi,

 I found this:


 2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.
 vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
 GetGlusterVolumeAdvancedDetailsVDSCommand,
 return: org.ovirt.engine.core.common.businessentities.gluster.
 GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
 2018-05-10 03:24:19,097+02 ERROR 
 [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
 (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
 for volume 'volume2' of cluster 'C6220': null
 2018-05-10 03:24:19,097+02 INFO  
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
 (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock
 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
 sharedLocks=''}'
 2018-05-10 03:24:19,104+02 INFO  [org.ovirt.engine.core.
 

[ovirt-users] Re: Gluster quorum

2018-05-15 Thread Sahina Bose
On Tue, May 15, 2018 at 1:28 PM, Demeter Tibor  wrote:

> Hi,
>
> Could you explain how can I use this patch?
>

You can use the 4.2 nightly to test it out -
http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm
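
A minimal sketch of what "use the nightly" typically looks like in practice on
the engine VM (the package glob and steps below are assumptions based on the
usual engine update flow, not an official procedure):

    # on the engine VM only
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm
    yum update "ovirt-*-setup*"
    engine-setup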


> R,
> Tibor
>
>
> - 2018. máj.. 14., 11:18, Demeter Tibor  írta:
>
> Hi,
>
> Sorry for my question, but can you tell me please how can I use this patch?
>
> Thanks,
> Regards,
> Tibor
> - 2018. máj.. 14., 10:47, Sahina Bose  írta:
>
>
>
> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> Could someone help me please ? I can't finish my upgrade process.
>>
>
> https://gerrit.ovirt.org/91164 should fix the error you're facing.
>
> Can you elaborate why this is affecting the upgrade process?
>
>
>> Thanks
>> R
>> Tibor
>>
>>
>>
>> - 2018. máj.. 10., 12:51, Demeter Tibor  írta:
>>
>> Hi,
>>
>> I've attached the vdsm and supervdsm logs. But I don't have engine.log
>> here, because that is on hosted engine vm. Should I send that ?
>>
>> Thank you
>>
>> Regards,
>>
>> Tibor
>> - 2018. máj.. 10., 12:30, Sahina Bose  írta:
>>
>> There's a bug here. Can you log one attaching this engine.log and also
>> vdsm.log & supervdsm.log from n3.itsmart.cloud
>>
>> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor 
>> wrote:
>>
>>> Hi,
>>>
>>> I found this:
>>>
>>>
>>> 2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>>> GetGlusterVolumeAdvancedDetailsVDSCommand,
>>> return: org.ovirt.engine.core.common.businessentities.gluster.
>>> GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
>>> 2018-05-10 03:24:19,097+02 ERROR 
>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
>>> (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
>>> for volume 'volume2' of cluster 'C6220': null
>>> 2018-05-10 03:24:19,097+02 INFO  
>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>>> (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock
>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>>> sharedLocks=''}'
>>> 2018-05-10 03:24:19,104+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>>> = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
>>> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
>>> 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,106+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand,
>>> log id: 6908121d
>>> 2018-05-10 03:24:19,107+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>>> = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:
>>> {hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
>>> 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,109+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand,
>>> log id: 735c6a5f
>>> 2018-05-10 03:24:19,110+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>>> = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:
>>> {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
>>> 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})'
>>> execution failed: null
>>> 2018-05-10 

[ovirt-users] Re: Gluster quorum

2018-05-15 Thread Demeter Tibor
Hi, 

Could you explain how I can use this patch? 

R, 
Tibor 

- May 14, 2018, 11:18, Demeter Tibor  wrote: 

> Hi,

> Sorry for my question, but can you tell me please how can I use this patch?

> Thanks,
> Regards,
> Tibor
> - 2018. máj.. 14., 10:47, Sahina Bose  írta:

>> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Hi,

>>> Could someone help me please ? I can't finish my upgrade process.

>> [ https://gerrit.ovirt.org/91164 | https://gerrit.ovirt.org/91164 ] should 
>> fix
>> the error you're facing.

>> Can you elaborate why this is affecting the upgrade process?

>>> Thanks
>>> R
>>> Tibor

>>> - 2018. máj.. 10., 12:51, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>>> tdeme...@itsmart.hu ] > írta:

 Hi,

 I've attached the vdsm and supervdsm logs. But I don't have engine.log 
 here,
 because that is on hosted engine vm. Should I send that ?

 Thank you

 Regards,

 Tibor
 - 2018. máj.. 10., 12:30, Sahina Bose < [ mailto:sab...@redhat.com |
 sab...@redhat.com ] > írta:

> There's a bug here. Can you log one attaching this engine.log and also 
> vdsm.log
> & supervdsm.log from n3.itsmart.cloud

> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor < [ 
> mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> I found this:

>> 2018-05-10 03:24:19,096+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>> GetGlusterVolumeAdvancedDetailsVDSCommand, return:
>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e,
>> log id: 347435ae
>> 2018-05-10 03:24:19,097+02 ERROR
>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] 
>> (DefaultQuartzScheduler7)
>> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of
>> cluster 'C6220': null
>> 2018-05-10 03:24:19,097+02 INFO
>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>> (DefaultQuartzScheduler8)
>> [7715ceda] Failed to acquire lock and wait lock
>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>> sharedLocks=''}'
>> 2018-05-10 03:24:19,104+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START,
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}),
>> log id: 6908121d
>> 2018-05-10 03:24:19,106+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command
>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>> execution failed: null
>> 2018-05-10 03:24:19,106+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
>> 2018-05-10 03:24:19,107+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START,
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}),
>> log id: 735c6a5f
>> 2018-05-10 03:24:19,109+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command
>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>> execution failed: null
>> 2018-05-10 03:24:19,109+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
>> 2018-05-10 03:24:19,110+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START,
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}),
>> log id: 6f9e9f58
>> 2018-05-10 03:24:19,112+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command
>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = 

[ovirt-users] Re: Gluster quorum

2018-05-14 Thread Demeter Tibor
Hi, 

Sorry for my question, but could you please tell me how I can use this patch? 

Thanks, 
Regards, 
Tibor 
- May 14, 2018, 10:47, Sahina Bose  wrote: 

> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor < [ mailto:tdeme...@itsmart.hu 
> |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> Could someone help me please ? I can't finish my upgrade process.

> [ https://gerrit.ovirt.org/91164 | https://gerrit.ovirt.org/91164 ] should fix
> the error you're facing.

> Can you elaborate why this is affecting the upgrade process?

>> Thanks
>> R
>> Tibor

>> - 2018. máj.. 10., 12:51, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > írta:

>>> Hi,

>>> I've attached the vdsm and supervdsm logs. But I don't have engine.log here,
>>> because that is on hosted engine vm. Should I send that ?

>>> Thank you

>>> Regards,

>>> Tibor
>>> - 2018. máj.. 10., 12:30, Sahina Bose < [ mailto:sab...@redhat.com |
>>> sab...@redhat.com ] > írta:

 There's a bug here. Can you log one attaching this engine.log and also 
 vdsm.log
 & supervdsm.log from n3.itsmart.cloud

 On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor < [ 
 mailto:tdeme...@itsmart.hu |
 tdeme...@itsmart.hu ] > wrote:

> Hi,

> I found this:

> 2018-05-10 03:24:19,096+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
> GetGlusterVolumeAdvancedDetailsVDSCommand, return:
> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e,
> log id: 347435ae
> 2018-05-10 03:24:19,097+02 ERROR
> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] 
> (DefaultQuartzScheduler7)
> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of
> cluster 'C6220': null
> 2018-05-10 03:24:19,097+02 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
> (DefaultQuartzScheduler8)
> [7715ceda] Failed to acquire lock and wait lock
> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
> sharedLocks=''}'
> 2018-05-10 03:24:19,104+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] START,
> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}),
> log id: 6908121d
> 2018-05-10 03:24:19,106+02 ERROR
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] Command
> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
> execution failed: null
> 2018-05-10 03:24:19,106+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
> 2018-05-10 03:24:19,107+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] START,
> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}),
> log id: 735c6a5f
> 2018-05-10 03:24:19,109+02 ERROR
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] Command
> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
> execution failed: null
> 2018-05-10 03:24:19,109+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
> 2018-05-10 03:24:19,110+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] START,
> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}),
> log id: 6f9e9f58
> 2018-05-10 03:24:19,112+02 ERROR
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] Command
> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})'
> execution failed: null
> 2018-05-10 03:24:19,112+02 INFO
> 

[ovirt-users] Re: Gluster quorum

2018-05-14 Thread Sahina Bose
On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor  wrote:

> Hi,
>
> Could someone help me please ? I can't finish my upgrade process.
>

https://gerrit.ovirt.org/91164 should fix the error you're facing.

Can you elaborate why this is affecting the upgrade process?


> Thanks
> R
> Tibor
>
>
>
> - 2018. máj.. 10., 12:51, Demeter Tibor  írta:
>
> Hi,
>
> I've attached the vdsm and supervdsm logs. But I don't have engine.log
> here, because that is on hosted engine vm. Should I send that ?
>
> Thank you
>
> Regards,
>
> Tibor
> - 2018. máj.. 10., 12:30, Sahina Bose  írta:
>
> There's a bug here. Can you log one attaching this engine.log and also
> vdsm.log & supervdsm.log from n3.itsmart.cloud
>
> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> I found this:
>>
>>
>> 2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterVolumeAdvancedDetailsVDSCommand,
>> return: org.ovirt.engine.core.common.businessentities.gluster.
>> GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
>> 2018-05-10 03:24:19,097+02 ERROR 
>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
>> (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
>> for volume 'volume2' of cluster 'C6220': null
>> 2018-05-10 03:24:19,097+02 INFO  
>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock
>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>> sharedLocks=''}'
>> 2018-05-10 03:24:19,104+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
>> 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>> execution failed: null
>> 2018-05-10 03:24:19,106+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterLocalLogicalVolumeListVDSCommand,
>> log id: 6908121d
>> 2018-05-10 03:24:19,107+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
>> 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>> execution failed: null
>> 2018-05-10 03:24:19,109+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterLocalLogicalVolumeListVDSCommand,
>> log id: 735c6a5f
>> 2018-05-10 03:24:19,110+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
>> 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})'
>> execution failed: null
>> 2018-05-10 03:24:19,112+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterLocalLogicalVolumeListVDSCommand,
>> log id: 6f9e9f58
>> 2018-05-10 03:24:19,113+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n3.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
>> 2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.
>> 

[ovirt-users] Re: Gluster quorum

2018-05-14 Thread Demeter Tibor
Meanwhile, I just changed my gluster network to 10.104.0.0/24, but nothing 
happened. 

Regards, 

Tibor 

- May 14, 2018, 9:49, Demeter Tibor  wrote: 

> Hi,

> Yes, I have a gluster network, but it's "funny" because that is the
> 10.105.0.x/24. :( Also, the n4.itsmart.cloud is mean 10.104.0.4.
> The 10.104.0.x/24 is my ovirtmgmt network.

> However, the 10.104.0.x is accessable from all hosts.

> What should I do?

> Thanks,

> R

> Tibor

> - 2018. máj.. 12., 17:17, Doug Ingham  írta:

>> The two key errors I'd investigate are these...

>>> 2018-05-10 03:24:21,048+02 WARN
>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
>>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick
>>> '10.104.0.1:/gluster/brick/brick1' of volume
>>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster
>>> network found in cluster '59c10db3-0324-0320-0120-0339'

>>> 2018-05-10 03:24:20,749+02 ERROR
>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7)
>>> [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of
>>> cluster 'C6220': null

>>> 2018-05-10 11:59:26,051+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler4) [400fa486] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>>> execution failed: null

>> I'd start with that first one. Is the network/interface group of your storage
>> layer actually defined as a Gluster & Migration network within oVirt?

>> On 12 May 2018 at 03:44, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Hi,

>>> Could someone help me please ? I can't finish my upgrade process.

>>> Thanks
>>> R
>>> Tibor

>>> - 2018. máj.. 10., 12:51, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>>> tdeme...@itsmart.hu ] > írta:

 Hi,

 I've attached the vdsm and supervdsm logs. But I don't have engine.log 
 here,
 because that is on hosted engine vm. Should I send that ?

 Thank you

 Regards,

 Tibor
 - 2018. máj.. 10., 12:30, Sahina Bose < [ mailto:sab...@redhat.com |
 sab...@redhat.com ] > írta:

> There's a bug here. Can you log one attaching this engine.log and also 
> vdsm.log
> & supervdsm.log from n3.itsmart.cloud

> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor < [ 
> mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> I found this:

>> 2018-05-10 03:24:19,096+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>> GetGlusterVolumeAdvancedDetailsVDSCommand, return:
>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e,
>> log id: 347435ae
>> 2018-05-10 03:24:19,097+02 ERROR
>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] 
>> (DefaultQuartzScheduler7)
>> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of
>> cluster 'C6220': null
>> 2018-05-10 03:24:19,097+02 INFO
>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>> (DefaultQuartzScheduler8)
>> [7715ceda] Failed to acquire lock and wait lock
>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>> sharedLocks=''}'
>> 2018-05-10 03:24:19,104+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START,
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}),
>> log id: 6908121d
>> 2018-05-10 03:24:19,106+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command
>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>> execution failed: null
>> 2018-05-10 03:24:19,106+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
>> 2018-05-10 03:24:19,107+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START,
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}),
>> log id: 735c6a5f
>> 2018-05-10 03:24:19,109+02 ERROR
>> 

[ovirt-users] Re: Gluster quorum

2018-05-14 Thread Demeter Tibor
Hi, 

Yes, I have a gluster network, but the "funny" thing is that it is 
10.105.0.x/24. :( Also, n4.itsmart.cloud resolves to 10.104.0.4. 

The 10.104.0.x/24 is my ovirtmgmt network. 

However, 10.104.0.x is accessible from all hosts. 

What should I do? 

Thanks, 

R 

Tibor 
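
A quick way to see which addresses gluster has actually registered before
deciding what to change (a sketch only - the volume and host names are taken
from this thread and may differ):

    gluster pool list                # peers as gluster knows them
    gluster peer status
    gluster volume info volume1      # the Brick lines show the address each brick was created with
    getent hosts n4.itsmart.cloud    # what that hostname actually resolves to

If the bricks were created against ovirtmgmt addresses (10.104.0.x) while the
network carrying the gluster role in oVirt is 10.105.0.x, the engine cannot
associate them, which would explain the "Could not associate brick" warnings
quoted below.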

- May 12, 2018, 17:17, Doug Ingham  wrote: 

> The two key errors I'd investigate are these...

>> 2018-05-10 03:24:21,048+02 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
>> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick
>> '10.104.0.1:/gluster/brick/brick1' of volume
>> 'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster
>> network found in cluster '59c10db3-0324-0320-0120-0339'

>> 2018-05-10 03:24:20,749+02 ERROR
>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7)
>> [43f4eaec] Error while refreshing brick statuses for volume 'volume1' of
>> cluster 'C6220': null

>> 2018-05-10 11:59:26,051+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler4) [400fa486] Command
>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>> execution failed: null

> I'd start with that first one. Is the network/interface group of your storage
> layer actually defined as a Gluster & Migration network within oVirt?

> On 12 May 2018 at 03:44, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
> tdeme...@itsmart.hu ] > wrote:

>> Hi,

>> Could someone help me please ? I can't finish my upgrade process.

>> Thanks
>> R
>> Tibor

>> - 2018. máj.. 10., 12:51, Demeter Tibor < [ mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > írta:

>>> Hi,

>>> I've attached the vdsm and supervdsm logs. But I don't have engine.log here,
>>> because that is on hosted engine vm. Should I send that ?

>>> Thank you

>>> Regards,

>>> Tibor
>>> - 2018. máj.. 10., 12:30, Sahina Bose < [ mailto:sab...@redhat.com |
>>> sab...@redhat.com ] > írta:

 There's a bug here. Can you log one attaching this engine.log and also 
 vdsm.log
 & supervdsm.log from n3.itsmart.cloud

 On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor < [ 
 mailto:tdeme...@itsmart.hu |
 tdeme...@itsmart.hu ] > wrote:

> Hi,

> I found this:

> 2018-05-10 03:24:19,096+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
> GetGlusterVolumeAdvancedDetailsVDSCommand, return:
> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e,
> log id: 347435ae
> 2018-05-10 03:24:19,097+02 ERROR
> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] 
> (DefaultQuartzScheduler7)
> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of
> cluster 'C6220': null
> 2018-05-10 03:24:19,097+02 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
> (DefaultQuartzScheduler8)
> [7715ceda] Failed to acquire lock and wait lock
> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
> sharedLocks=''}'
> 2018-05-10 03:24:19,104+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] START,
> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}),
> log id: 6908121d
> 2018-05-10 03:24:19,106+02 ERROR
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] Command
> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
> execution failed: null
> 2018-05-10 03:24:19,106+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
> 2018-05-10 03:24:19,107+02 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] START,
> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}),
> log id: 735c6a5f
> 2018-05-10 03:24:19,109+02 ERROR
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
> (DefaultQuartzScheduler7) [43f4eaec] Command
> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
> 

[ovirt-users] Re: Gluster quorum

2018-05-12 Thread Doug Ingham
The two key errors I'd investigate are these...

2018-05-10 03:24:21,048+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler8) [7715ceda] Could not associate brick '10.104.0.1:
> /gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27'
> with correct network as no gluster network found in cluster
> '59c10db3-0324-0320-0120-0339'
>
> 2018-05-10 03:24:20,749+02 ERROR 
> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
> (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
> for volume 'volume1' of cluster 'C6220': null
>
> 2018-05-10 11:59:26,051+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler4)
> [400fa486] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
>

I'd start with that first one. Is the network/interface group of your
storage layer actually defined as a Gluster & Migration network within
oVirt?
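
A sketch of how to cross-check this from one of the hosts, assuming the volume
and subnet names that appear in the logs above:

    gluster volume info volume1 | grep Brick   # addresses the bricks are registered on
    ip addr show                               # which interface/subnet those addresses belong to

On the oVirt side, the matching check is the cluster's logical networks: one
network should carry the gluster (storage) role and cover the subnet the
bricks are actually registered on.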


On 12 May 2018 at 03:44, Demeter Tibor  wrote:

> Hi,
>
> Could someone help me please ? I can't finish my upgrade process.
>
> Thanks
> R
> Tibor
>
>
>
> - 2018. máj.. 10., 12:51, Demeter Tibor  írta:
>
> Hi,
>
> I've attached the vdsm and supervdsm logs. But I don't have engine.log
> here, because that is on hosted engine vm. Should I send that ?
>
> Thank you
>
> Regards,
>
> Tibor
> - 2018. máj.. 10., 12:30, Sahina Bose  írta:
>
> There's a bug here. Can you log one attaching this engine.log and also
> vdsm.log & supervdsm.log from n3.itsmart.cloud
>
> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> I found this:
>>
>>
>> 2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterVolumeAdvancedDetailsVDSCommand,
>> return: org.ovirt.engine.core.common.businessentities.gluster.
>> GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
>> 2018-05-10 03:24:19,097+02 ERROR 
>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
>> (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
>> for volume 'volume2' of cluster 'C6220': null
>> 2018-05-10 03:24:19,097+02 INFO  
>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>> (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock
>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>> sharedLocks=''}'
>> 2018-05-10 03:24:19,104+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
>> 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>> execution failed: null
>> 2018-05-10 03:24:19,106+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterLocalLogicalVolumeListVDSCommand,
>> log id: 6908121d
>> 2018-05-10 03:24:19,107+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
>> 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>> execution failed: null
>> 2018-05-10 03:24:19,109+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>> GetGlusterLocalLogicalVolumeListVDSCommand,
>> log id: 735c6a5f
>> 2018-05-10 03:24:19,110+02 INFO  [org.ovirt.engine.core.
>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>> = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:
>> {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
>> 2018-05-10 03:24:19,112+02 ERROR 

[ovirt-users] Re: Gluster quorum

2018-05-12 Thread Demeter Tibor
Hi, 

Could someone help me, please? I can't finish my upgrade process. 

Thanks 
R 
Tibor 

- May 10, 2018, 12:51, Demeter Tibor  wrote: 

> Hi,

> I've attached the vdsm and supervdsm logs. But I don't have engine.log here,
> because that is on hosted engine vm. Should I send that ?

> Thank you

> Regards,

> Tibor
> - 2018. máj.. 10., 12:30, Sahina Bose  írta:

>> There's a bug here. Can you log one attaching this engine.log and also 
>> vdsm.log
>> & supervdsm.log from n3.itsmart.cloud

>> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor < [ 
>> mailto:tdeme...@itsmart.hu |
>> tdeme...@itsmart.hu ] > wrote:

>>> Hi,

>>> I found this:

>>> 2018-05-10 03:24:19,096+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>>> GetGlusterVolumeAdvancedDetailsVDSCommand, return:
>>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e,
>>> log id: 347435ae
>>> 2018-05-10 03:24:19,097+02 ERROR
>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7)
>>> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of
>>> cluster 'C6220': null
>>> 2018-05-10 03:24:19,097+02 INFO
>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>>> (DefaultQuartzScheduler8)
>>> [7715ceda] Failed to acquire lock and wait lock
>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>>> sharedLocks=''}'
>>> 2018-05-10 03:24:19,104+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START,
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}),
>>> log id: 6908121d
>>> 2018-05-10 03:24:19,106+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,106+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
>>> 2018-05-10 03:24:19,107+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START,
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}),
>>> log id: 735c6a5f
>>> 2018-05-10 03:24:19,109+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,109+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
>>> 2018-05-10 03:24:19,110+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START,
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}),
>>> log id: 6f9e9f58
>>> 2018-05-10 03:24:19,112+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,112+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58
>>> 2018-05-10 03:24:19,113+02 INFO
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START,
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}),
>>> log id: 2ee46967
>>> 2018-05-10 03:24:19,115+02 ERROR
>>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command
>>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = 

[ovirt-users] Re: Gluster quorum

2018-05-10 Thread Sahina Bose
There's a bug here. Can you file one, attaching this engine.log and also the
vdsm.log & supervdsm.log from n3.itsmart.cloud?

On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor  wrote:

> Hi,
>
> I found this:
>
>
> 2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return:
> org.ovirt.engine.core.common.businessentities.gluster.
> GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
> 2018-05-10 03:24:19,097+02 ERROR 
> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
> (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
> for volume 'volume2' of cluster 'C6220': null
> 2018-05-10 03:24:19,097+02 INFO  
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock
> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
> sharedLocks=''}'
> 2018-05-10 03:24:19,104+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
> 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' execution failed: null
> 2018-05-10 03:24:19,106+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id:
> 6908121d
> 2018-05-10 03:24:19,107+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n1.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
> 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n1.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' execution failed: null
> 2018-05-10 03:24:19,109+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id:
> 735c6a5f
> 2018-05-10 03:24:19,110+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n2.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
> 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n2.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' execution failed: null
> 2018-05-10 03:24:19,112+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id:
> 6f9e9f58
> 2018-05-10 03:24:19,113+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] START, GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n3.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), log id: 2ee46967
> 2018-05-10 03:24:19,115+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] Command 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName =
> n3.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' execution failed: null
> 2018-05-10 03:24:19,116+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterLocalLogicalVolumeListVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] FINISH, GetGlusterLocalLogicalVolumeListVDSCommand, log id:
> 2ee46967
> 2018-05-10 03:24:19,117+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.
> GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler7)
> [43f4eaec] START, GetGlusterVolumeAdvancedDetailsVDSCommand(HostName =
> n1.itsmart.cloud, 

[ovirt-users] Re: Gluster quorum

2018-05-10 Thread Demeter Tibor
Hi, 

I found this: 

2018-05-10 03:24:19,096+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
GetGlusterVolumeAdvancedDetailsVDSCommand, return: 
org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e,
 log id: 347435ae 
2018-05-10 03:24:19,097+02 ERROR 
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler7) 
[43f4eaec] Error while refreshing brick statuses for volume 'volume2' of 
cluster 'C6220': null 
2018-05-10 03:24:19,097+02 INFO 
[org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler8) 
[7715ceda] Failed to acquire lock and wait lock 
'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]', 
sharedLocks=''}' 
2018-05-10 03:24:19,104+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] START, 
GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), 
log id: 6908121d 
2018-05-10 03:24:19,106+02 ERROR 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] Command 
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})' 
execution failed: null 
2018-05-10 03:24:19,106+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d 
2018-05-10 03:24:19,107+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] START, 
GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), 
log id: 735c6a5f 
2018-05-10 03:24:19,109+02 ERROR 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] Command 
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})' 
execution failed: null 
2018-05-10 03:24:19,109+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f 
2018-05-10 03:24:19,110+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] START, 
GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), 
log id: 6f9e9f58 
2018-05-10 03:24:19,112+02 ERROR 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] Command 
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})' 
execution failed: null 
2018-05-10 03:24:19,112+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6f9e9f58 
2018-05-10 03:24:19,113+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] START, 
GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'}), 
log id: 2ee46967 
2018-05-10 03:24:19,115+02 ERROR 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] Command 
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n3.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='fd2ee743-f5d4-403b-ba18-377e309169ec'})' 
execution failed: null 
2018-05-10 03:24:19,116+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
GetGlusterLocalLogicalVolumeListVDSCommand, log id: 2ee46967 
2018-05-10 03:24:19,117+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] START, 
GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = n1.itsmart.cloud, 
GlusterVolumeAdvancedDetailsVDSParameters:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57',
 volumeName='volume1'}), log id: 7550e5c 
2018-05-10 03:24:20,748+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
 (DefaultQuartzScheduler7) [43f4eaec] FINISH, 

[ovirt-users] Re: Gluster quorum

2018-05-10 Thread Sahina Bose
This doesn't affect the monitoring of state.
Any errors in vdsm.log?
Or errors in engine.log of the form "Error while refreshing brick statuses
for volume"
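
A quick way to check for exactly those messages, assuming the default log
locations:

    # on the hosted-engine VM
    grep "Error while refreshing brick statuses" /var/log/ovirt-engine/engine.log
    # on the affected host
    grep -i error /var/log/vdsm/vdsm.log | tail -n 50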

On Thu, May 10, 2018 at 2:33 PM, Demeter Tibor  wrote:

> Hi,
>
> Thank you for your fast reply :)
>
>
> 2018-05-10 11:01:51,574+02 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
> (DefaultQuartzScheduler6) [7f01fc2d] START, 
> GlusterServersListVDSCommand(HostName
> = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:
> {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 39adbbb8
> 2018-05-10 11:01:51,768+02 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand,
> return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 
> 10.104.0.3:CONNECTED,
> 10.104.0.4:CONNECTED], log id: 39adbbb8
> 2018-05-10 11:01:51,788+02 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler6) [7f01fc2d] START, 
> GlusterVolumesListVDSCommand(HostName
> = n2.itsmart.cloud, GlusterVolumesListVDSParameter
> s:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 738a7261
> 2018-05-10 11:01:51,892+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:
> /gluster/brick/brick1' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27'
> with correct network as no gluster network found in cluster
> '59c10db3-0324-0320-0120-0339'
> 2018-05-10 11:01:51,898+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:
> /gluster/brick/brick2' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27'
> with correct network as no gluster network found in cluster
> '59c10db3-0324-0320-0120-0339'
> 2018-05-10 11:01:51,905+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:
> /gluster/brick/brick3' of volume 'e0f568fa-987c-4f5c-b853-01bce718ee27'
> with correct network as no gluster network found in cluster
> '59c10db3-0324-0320-0120-0339'
> 2018-05-10 11:01:51,911+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:
> /gluster2/brick/brick1' of volume '68cfb061-1320-4042-abcd-9228da23c0c8'
> with correct network as no gluster network found in cluster
> '59c10db3-0324-0320-0120-0339'
> 2018-05-10 11:01:51,917+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:
> /gluster2/brick/brick2' of volume '68cfb061-1320-4042-abcd-9228da23c0c8'
> with correct network as no gluster network found in cluster
> '59c10db3-0324-0320-0120-0339'
> 2018-05-10 11:01:51,924+02 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
> (DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick '10.104.0.1:
> /gluster2/brick/brick3' of volume '68cfb061-1320-4042-abcd-9228da23c0c8'
> with correct network as no gluster network found in cluster
> '59c10db3-0324-0320-0120-0339'
> 2018-05-10 11:01:51,925+02 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand,
> return: {68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.
> core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d,
> e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.
> core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b}, log
> id: 738a7261
>
>
> This happening continuously.
>
> Thanks!
> Tibor
>
>
>
> - 2018. máj.. 10., 10:56, Sahina Bose  írta:
>
> Could you check the engine.log if there are errors related to getting
> GlusterVolumeAdvancedDetails ?
>
> On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor 
> wrote:
>
>> Dear Ovirt Users,
>> I've followed the self-hosted-engine upgrade documentation and upgraded
>> my 4.1 system to 4.2.3.
>> I upgraded the first node with yum upgrade, and it seems to be working
>> fine now. But since the upgrade, the Gluster information appears to be
>> displayed incorrectly on the admin panel: the volume is yellow, and there
>> are red bricks from that node.
>> I've checked on the console, and I think my Gluster is not degraded:
>>
>> root@n1 ~]# gluster volume list
>> volume1
>> volume2
>> [root@n1 ~]# gluster volume info
>>
>> Volume Name: volume1
>> Type: Distributed-Replicate
>> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 3 x 3 = 9
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.104.0.1:/gluster/brick/brick1
>> Brick2: 

[ovirt-users] Re: Gluster quorum

2018-05-10 Thread Demeter Tibor
Hi, 

Thank you for your fast reply :) 

2018-05-10 11:01:51,574+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(DefaultQuartzScheduler6) [7f01fc2d] START, 
GlusterServersListVDSCommand(HostName = n2.itsmart.cloud, 
VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), 
log id: 39adbbb8 
2018-05-10 11:01:51,768+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterServersListVDSCommand, 
return: [10.101.0.2/24:CONNECTED, n1.cloudata.local:CONNECTED, 
10.104.0.3:CONNECTED, 10.104.0.4:CONNECTED], log id: 39adbbb8 
2018-05-10 11:01:51,788+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler6) [7f01fc2d] START, 
GlusterVolumesListVDSCommand(HostName = n2.itsmart.cloud, 
GlusterVolumesListVDSParameters:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}),
 log id: 738a7261 
2018-05-10 11:01:51,892+02 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick 
'10.104.0.1:/gluster/brick/brick1' of volume 
'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster 
network found in cluster '59c10db3-0324-0320-0120-0339' 
2018-05-10 11:01:51,898+02 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick 
'10.104.0.1:/gluster/brick/brick2' of volume 
'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster 
network found in cluster '59c10db3-0324-0320-0120-0339' 
2018-05-10 11:01:51,905+02 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick 
'10.104.0.1:/gluster/brick/brick3' of volume 
'e0f568fa-987c-4f5c-b853-01bce718ee27' with correct network as no gluster 
network found in cluster '59c10db3-0324-0320-0120-0339' 
2018-05-10 11:01:51,911+02 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick 
'10.104.0.1:/gluster2/brick/brick1' of volume 
'68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster 
network found in cluster '59c10db3-0324-0320-0120-0339' 
2018-05-10 11:01:51,917+02 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick 
'10.104.0.1:/gluster2/brick/brick2' of volume 
'68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster 
network found in cluster '59c10db3-0324-0320-0120-0339' 
2018-05-10 11:01:51,924+02 WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler6) [7f01fc2d] Could not associate brick 
'10.104.0.1:/gluster2/brick/brick3' of volume 
'68cfb061-1320-4042-abcd-9228da23c0c8' with correct network as no gluster 
network found in cluster '59c10db3-0324-0320-0120-0339' 
2018-05-10 11:01:51,925+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler6) [7f01fc2d] FINISH, GlusterVolumesListVDSCommand, 
return: 
{68cfb061-1320-4042-abcd-9228da23c0c8=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7a6720d,
 
e0f568fa-987c-4f5c-b853-01bce718ee27=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@f88c521b},
 log id: 738a7261 

This is happening continuously. 
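
A rough way to see how often the warning recurs, and which bricks it names, 
is to grep the engine log — assuming it sits at the usual location 
(/var/log/ovirt-engine/engine.log) on the engine VM; the hostname in the 
prompt is just a placeholder: 

[root@engine ~]# grep -c 'no gluster network found in cluster' /var/log/ovirt-engine/engine.log 
[root@engine ~]# # group the affected bricks (each warning carries the brick path in single quotes) 
[root@engine ~]# grep 'no gluster network found in cluster' /var/log/ovirt-engine/engine.log | grep -o "brick '[^']*'" | sort | uniq -c 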

Thanks! 
Tibor 
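
To cross-check the heal and quorum state directly from the Gluster side, the 
stock gluster CLI calls below should do (volume names as above; nothing 
oVirt-specific): 

[root@n1 ~]# gluster volume heal volume1 info 
[root@n1 ~]# gluster volume heal volume2 info 
[root@n1 ~]# gluster volume get volume1 cluster.quorum-type 
[root@n1 ~]# gluster volume get volume1 cluster.server-quorum-type 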

- On 10 May 2018, at 10:56, Sahina Bose wrote: 

> Could you check the engine.log if there are errors related to getting
> GlusterVolumeAdvancedDetails ?

> On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor <tdeme...@itsmart.hu> wrote:

>> Dear Ovirt Users,
>> I've followed the self-hosted-engine upgrade documentation and upgraded
>> my 4.1 system to 4.2.3.
>> I upgraded the first node with yum upgrade, and it seems to be working
>> fine now. But since the upgrade, the Gluster information appears to be
>> displayed incorrectly on the admin panel: the volume is yellow, and there
>> are red bricks from that node.
>> I've checked on the console, and I think my Gluster is not degraded:

>> root@n1 ~]# gluster volume list
>> volume1
>> volume2
>> [root@n1 ~]# gluster volume info
>> Volume Name: volume1
>> Type: Distributed-Replicate
>> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 3 x 3 = 9
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.104.0.1:/gluster/brick/brick1
>> Brick2: 10.104.0.2:/gluster/brick/brick1
>> Brick3: 10.104.0.3:/gluster/brick/brick1
>> Brick4: 10.104.0.1:/gluster/brick/brick2
>> Brick5: 10.104.0.2:/gluster/brick/brick2
>> Brick6: 10.104.0.3:/gluster/brick/brick2
>> Brick7: 10.104.0.1:/gluster/brick/brick3
>> Brick8: 10.104.0.2:/gluster/brick/brick3
>> Brick9: 

[ovirt-users] Re: Gluster quorum

2018-05-10 Thread Sahina Bose
Could you check the engine.log if there are errors related to getting
GlusterVolumeAdvancedDetails ?
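
For example, something along these lines should surface them — assuming the 
default engine log location (/var/log/ovirt-engine/engine.log) on the engine 
VM; the prompt hostname is only a placeholder: 

[root@engine ~]# # list recent errors mentioning GlusterVolumeAdvancedDetails 
[root@engine ~]# grep -i 'GlusterVolumeAdvancedDetails' /var/log/ovirt-engine/engine.log | grep -i 'error' | tail -n 20 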

On Thu, May 10, 2018 at 2:02 PM, Demeter Tibor  wrote:

> Dear Ovirt Users,
> I've followed the self-hosted-engine upgrade documentation and upgraded
> my 4.1 system to 4.2.3.
> I upgraded the first node with yum upgrade, and it seems to be working
> fine now. But since the upgrade, the Gluster information appears to be
> displayed incorrectly on the admin panel: the volume is yellow, and there
> are red bricks from that node.
> I've checked on the console, and I think my Gluster is not degraded:
>
> root@n1 ~]# gluster volume list
> volume1
> volume2
> [root@n1 ~]# gluster volume info
>
> Volume Name: volume1
> Type: Distributed-Replicate
> Volume ID: e0f568fa-987c-4f5c-b853-01bce718ee27
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 3 = 9
> Transport-type: tcp
> Bricks:
> Brick1: 10.104.0.1:/gluster/brick/brick1
> Brick2: 10.104.0.2:/gluster/brick/brick1
> Brick3: 10.104.0.3:/gluster/brick/brick1
> Brick4: 10.104.0.1:/gluster/brick/brick2
> Brick5: 10.104.0.2:/gluster/brick/brick2
> Brick6: 10.104.0.3:/gluster/brick/brick2
> Brick7: 10.104.0.1:/gluster/brick/brick3
> Brick8: 10.104.0.2:/gluster/brick/brick3
> Brick9: 10.104.0.3:/gluster/brick/brick3
> Options Reconfigured:
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: enable
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> server.allow-insecure: on
>
> Volume Name: volume2
> Type: Distributed-Replicate
> Volume ID: 68cfb061-1320-4042-abcd-9228da23c0c8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 3 = 9
> Transport-type: tcp
> Bricks:
> Brick1: 10.104.0.1:/gluster2/brick/brick1
> Brick2: 10.104.0.2:/gluster2/brick/brick1
> Brick3: 10.104.0.3:/gluster2/brick/brick1
> Brick4: 10.104.0.1:/gluster2/brick/brick2
> Brick5: 10.104.0.2:/gluster2/brick/brick2
> Brick6: 10.104.0.3:/gluster2/brick/brick2
> Brick7: 10.104.0.1:/gluster2/brick/brick3
> Brick8: 10.104.0.2:/gluster2/brick/brick3
> Brick9: 10.104.0.3:/gluster2/brick/brick3
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> cluster.quorum-type: auto
> network.ping-timeout: 10
> auth.allow: *
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: enable
> cluster.eager-lock: enable
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> server.allow-insecure: on
> [root@n1 ~]# gluster volume status
> Status of volume: volume1
> Gluster process TCP Port  RDMA Port  Online
> Pid
> 
> --
> Brick 10.104.0.1:/gluster/brick/brick1  49152 0  Y
>  3464
> Brick 10.104.0.2:/gluster/brick/brick1  49152 0  Y
>  68937
> Brick 10.104.0.3:/gluster/brick/brick1  49161 0  Y
>  94506
> Brick 10.104.0.1:/gluster/brick/brick2  49153 0  Y
>  3457
> Brick 10.104.0.2:/gluster/brick/brick2  49153 0  Y
>  68943
> Brick 10.104.0.3:/gluster/brick/brick2  49162 0  Y
>  94514
> Brick 10.104.0.1:/gluster/brick/brick3  49154 0  Y
>  3465
> Brick 10.104.0.2:/gluster/brick/brick3  49154 0  Y
>  68949
> Brick 10.104.0.3:/gluster/brick/brick3  49163 0  Y
>  94520
> Self-heal Daemon on localhost   N/A   N/AY
>  54356
> Self-heal Daemon on 10.104.0.2  N/A   N/AY
>  962
> Self-heal Daemon on 10.104.0.3  N/A   N/AY
>  108977
> Self-heal Daemon on 10.104.0.4  N/A   N/AY
>  61603
>
> Task Status of Volume volume1
> 
> --
> There are no active volume tasks
>
> Status of volume: volume2
> Gluster process TCP Port  RDMA Port  Online
> Pid
> 
> --
> Brick 10.104.0.1:/gluster2/brick/brick1 49155 0  Y
>  3852
> Brick 10.104.0.2:/gluster2/brick/brick1 49158 0  Y
>  68955
> Brick 10.104.0.3:/gluster2/brick/brick1 49164 0