2017-07-19 11:22 GMT+02:00 yayo (j):
> Running "gluster volume heal engine" doesn't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster setup from 2 (full replicated) + 1
> arbiter to a 3 full replicated cluster, but I don't know if this is the
> problem...
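As a hedged aside (not from the original thread): the per-brick pending-heal counts can be read from `gluster volume heal engine info`. A minimal sketch that tallies them, using embedded sample output (the gdnode* hostnames match the thread; the brick path /gluster/engine/brick is an assumption for illustration):

```shell
# Sketch: sum the pending heal entries per brick from
# `gluster volume heal engine info`. The sample output below is
# illustrative; on a real node you would pipe the actual command instead.
sample_heal='Brick gdnode01:/gluster/engine/brick
Number of entries: 2

Brick gdnode02:/gluster/engine/brick
Number of entries: 2

Brick gdnode03:/gluster/engine/brick
Number of entries: 0'

# Add up the "Number of entries" counts across all bricks.
total=$(echo "$sample_heal" | awk -F': ' '/Number of entries/ {s += $2} END {print s+0}')
echo "total pending entries: $total"
```

A steadily nonzero total after running the heal command is what the thread describes as "unsynced" entries.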
2017-07-25 7:42 GMT+02:00 Kasturi Narra:
> These errors are because glusternw is not assigned to the correct
> interface. Once you attach that, these errors should go away. This has
> nothing to do with the problem you are seeing.
>
Hi,

Are you talking about errors like these?

> These errors are because glusternw is not assigned to the correct
> interface. Once you attach that, these errors should go away. This has
> nothing to do with the problem you are seeing.

sahina, any idea about engine not showing the correct volume info?
On Mon, Jul 24, 2017 at 7:30 PM, yayo (j) wrote:
>
>> All these IPs are pingable and hosts resolvable across all 3 nodes, but
>> only the 10.10.10.0 network is the dedicated network for gluster (resolved
>> using gdnode* host names) ... You think that removing the other entries can
>> fix the problem? So, sorry, but, how can I remove the other entries?
>>
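A sketch of the kind of check implied above: confirm that each gdnode* name resolves into the dedicated 10.10.10.0/24 gluster network. The gdnode01..03 names and the sample /etc/hosts entries are assumptions for illustration; on a real node you would use `getent hosts <name>`:

```shell
# Sketch: verify the gdnode* names resolve into the dedicated
# 10.10.10.0/24 gluster network. Sample /etc/hosts content is embedded
# here so the check is self-contained.
sample_hosts='10.10.10.11 gdnode01
10.10.10.12 gdnode02
10.10.10.13 gdnode03'

bad=0
for n in gdnode01 gdnode02 gdnode03; do
    # Look up the address column for this hostname.
    ip=$(echo "$sample_hosts" | awk -v h="$n" '$2 == h {print $1}')
    case "$ip" in
        10.10.10.*) : ;;   # resolves into the gluster network: fine
        *) echo "$n resolves outside 10.10.10.0/24: $ip"; bad=1 ;;
    esac
done
echo "bad=$bad"
```

If any name resolves to a non-gluster address, the bricks may have been registered under the wrong network.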
Hi,

UI refreshed but the problem still remains ...

No specific error; I have only these entries, but I've read that this kind
of message is not a problem:

2017-07-24 15:53:59,823+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler2)
Hi,

Regarding the UI showing incorrect information about the engine and data
volumes, can you please refresh the UI and see if the issue persists, plus
check for any errors in the engine.log files?

Thanks
kasturi
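A sketch of the log check being asked for, assuming the default engine log location /var/log/ovirt-engine/engine.log; the sample lines below are illustrative, not taken from the thread:

```shell
# Sketch: count ERROR entries in engine.log around the gluster monitoring
# commands. Sample log lines are embedded; on the engine host you would
# grep /var/log/ovirt-engine/engine.log instead.
sample_log='2017-07-24 15:53:59,823+02 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2) ...
2017-07-24 15:54:01,101+02 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2) ...'

errors=$(echo "$sample_log" | grep -c ' ERROR ')
echo "gluster-related ERROR lines: $errors"
```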
On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N wrote:
On 07/21/2017 11:41 PM, yayo (j) wrote:
Hi,
Sorry for the follow-up again, but, checking the ovirt interface I've
found that ovirt reports the "engine" volume as an "arbiter"
configuration and the "data" volume as a full replicated volume. Check
these screenshots:
This is probably some refresh
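A way to cross-check what the UI shows (a sketch, not from the thread): `gluster volume info engine` reports `Number of Bricks: 1 x (2 + 1) = 3` for an arbiter volume and `1 x 3 = 3` for a plain replica 3 volume. Parsing a sample of that output:

```shell
# Sketch: detect whether a volume is plain replica 3 or replica 2 + arbiter
# from `gluster volume info` output. The sample below is illustrative; on a
# real node you would pipe `gluster volume info engine` instead.
sample_info='Volume Name: engine
Type: Replicate
Status: Started
Number of Bricks: 1 x 3 = 3'

# An arbiter volume advertises its brick count as "1 x (2 + 1) = 3".
if echo "$sample_info" | grep -q 'x (2 + 1)'; then
    layout="arbiter layout"
else
    layout="full replica layout"
fi
echo "$layout"
```

If the CLI says `1 x 3 = 3` while the UI says arbiter, the discrepancy is on the engine/monitoring side, not in gluster itself.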
2017-07-20 14:48 GMT+02:00 Ravishankar N:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is there is an intermittent connection problem
>
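For context on the getfattr output being discussed: nonzero `trusted.afr.*` xattrs on a brick file mean pending changes against the named brick, and `getfattr -d -m . -e hex <file-on-brick>` is the usual way to read them. A sketch that flags nonzero counters, using invented sample output (the file name and gfid are placeholders):

```shell
# Sketch: flag bricks with pending AFR changes from getfattr output.
# The sample xattr dump below is invented for illustration.
sample_xattrs='# file: gluster/engine/brick/vm-image
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000120000000000000000'

# A trusted.afr value that is anything other than all zeros means
# pending heal operations against that client/brick.
pending=$(echo "$sample_xattrs" | awk -F= '/^trusted.afr/ {
    v = $2; gsub(/0x|0/, "", v); if (v != "") c++ } END {print c+0}')
echo "bricks with pending changes: $pending"
```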
2017-07-20 11:34 GMT+02:00 Ravishankar N:
>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine
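Restarting shd is normally done with `gluster volume start <volname> force`, which restarts any stopped volume processes without touching running bricks. Whether shd is connected to all three bricks can be read from glustershd.log (default location /var/log/glusterfs/glustershd.log); the log lines below are illustrative, not taken from the thread:

```shell
# Sketch: count how many bricks the self-heal daemon reports a connection
# to, from glustershd.log. Sample log lines are embedded; on a real node
# you would grep /var/log/glusterfs/glustershd.log instead.
sample_shd='[2017-07-20 09:58:46.573] I [rpc-clnt.c] 0-engine-client-0: Connected to engine-client-0
[2017-07-20 09:58:46.580] I [rpc-clnt.c] 0-engine-client-1: Connected to engine-client-1
[2017-07-20 09:58:46.602] E [rpc-clnt.c] 0-engine-client-2: connection to gdnode03:24007 failed'

connected=$(echo "$sample_shd" | grep -c 'Connected to')
echo "bricks shd is connected to: $connected/3"
```

Fewer than 3 connections matches the "intermittent connection problem" hypothesis from earlier in the thread.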
Hi,

Thank you for the answer and sorry for the delay:

2017-07-19 16:55 GMT+02:00 Ravishankar N:
> 1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain anything about these files?

No, glustershd.log is clean; no extra log entries after running the command.
On 07/19/2017 08:02 PM, Sahina Bose wrote:
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) wrote:
Hi all,

We have an ovirt cluster hyperconverged with hosted engine on 3 full
replicated nodes. This cluster has 2 gluster volumes:

- data: volume for the Data (Master) Domain (for VMs)
- engine: volume for the hosted_storage Domain (for the hosted engine)

We have this problem: "engine" gluster
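A first sanity check for a setup like this (a sketch; the hostnames and brick paths are assumptions, and the columns are abbreviated): `gluster volume status engine` shows whether every brick process and the self-heal daemon are online. Parsing a sample of that output:

```shell
# Sketch: count offline bricks from `gluster volume status` output.
# The sample below is illustrative and abbreviated; on a real node you
# would pipe the actual command output instead.
sample_status='Status of volume: engine
Gluster process                             TCP Port  Online  Pid
------------------------------------------------------------------------------
Brick gdnode01:/gluster/engine/brick        49152     Y       3452
Brick gdnode02:/gluster/engine/brick        49152     Y       3109
Brick gdnode03:/gluster/engine/brick        49152     N       N/A
Self-heal Daemon on localhost               N/A       Y       4231'

# Field 4 in the abbreviated Brick rows is the Online (Y/N) column.
offline=$(echo "$sample_status" | awk '/^Brick/ && $4 == "N" {c++} END {print c+0}')
echo "offline bricks: $offline"
```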