[ovirt-users] ovirt 4.1 : Can't deploy second node on self hosted engine cluster and host with hosted engine deployed (and also hosted engine VM) is not added to interface

2017-07-07 Thread yayo (j)
Hi all,

I have successfully deployed a hosted engine on node01 via:

hosted-engine --deploy

Using FC shared storage.

Everything seems to work, but when I log in to the oVirt web interface I can't
find the hosted engine under the VM tab (nor node01 under the hosts).

So I tried to add node02 (no problem) and added the "Data Domain" storage
(another FC shared storage).

Now (I think) I need to deploy the hosted engine here as well, but I can't.
The only way is to put node02 into maintenance mode, but after that the
"Data Domain" I added goes down, and when I try to deploy the hosted engine
the interface says:


Error while executing action:

node02:
Cannot edit Host. You are using an unmanaged hosted engine VM. Please add
the first storage domain in order to start the hosted engine import process.

what is this "first storage domain"?

I have not added node01 yet because last time the hosted engine crashed and
the only way out was to restart the installation.


Note: the hosted engine seems to be correctly deployed because everything
works, but the deploy script ended with an error:

[ INFO  ] Engine-setup successfully completed
[ INFO  ] Engine is still unreachable
[ INFO  ] Engine is still not reachable, waiting...
[more of these messages]
[ ERROR ] Engine is still not reachable
[ ERROR ] Failed to execute stage 'Closing up': Engine is still not
reachable
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20170123113113.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable,
please check the issue,fix and redeploy
  Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170123105043-kvzx84.log

Is this related to this bug? https://bugzilla.redhat.com/show_bug.cgi?id=1415822


Re: [ovirt-users] ovirt 4.1 : Can't deploy second node on self hosted engine cluster and host with hosted engine deployed (and also hosted engine VM) is not added to interface

2017-07-07 Thread yayo (j)
Hi and thank you for the answer,

After many errors and problems, I tried deploying the hosted engine from
node02 and... everything is working; the last message in the oVirt activity
log is:

*Jul 7, 2017 4:38:00 PM Hosted Engine VM was imported successfully*


I think something went wrong the first time I tried to install, and that
compromised every subsequent installation attempt. Now I will reinstall
node01 and join it to the cluster from the interface.

Thank you for the support.

2017-07-07 14:35 GMT+02:00 Phillip Bailey <phbai...@redhat.com>:

> Hi yayo,
>
> Based off of the information you've provided here, it doesn't sound like
> the deployment was successful despite being able to access the web
> interface. You can check the status of the deployment from node01 by
> running 'hosted-engine --check-deployed'. You can also check the status of
> the VM by running 'hosted-engine --vm-status'. Could you run those commands
> and provide the output? It would also be helpful if you could provide the
> log indicated at the end of the deploy script.
>
> As far as the storage issue, you must have an active host in order to add
> a storage/data domain. Typically, the first data domain added will become
> the master data domain, which will be indicated in the web interface by
> '(Master)' following the type in the Domain Type column in the Storage main
> tab. The host that was selected during the domain setup will become the
> Storage Pool Manager (SPM) and will have 'SPM' in the SPM column in the
> Hosts main tab. If you attempt to move this host to maintenance mode,
> another host will be selected to become the SPM. If no other hosts are
> available, I believe you'll get an error message which will prevent you
> from being able to move it to maintenance. I believe in this situation you
> can put the storage domain into maintenance mode by going to the Storage
> sub tab under the Data Center main tab, selecting the domain, and clicking
> the 'Maintenance' button. This should allow you to then put the last host
> into maintenance mode.
>
> However, I think your initial deployment needs to be fixed before adding
> more hosts. Hopefully, I or someone else will be able to help you with that
> once you've responded with the output from the status commands and the
> setup log.
>
> -Phillip Bailey
>
> On Fri, Jul 7, 2017 at 7:22 AM, yayo (j) <jag...@gmail.com> wrote:
>
>> Hi at all,
>>
>> I have correctly deployed an hosted engine using node01 via:
>>
>> hosted-engine --deploy
>>
>> Using FC shared storage.
>>
>> Seems all work good but, when I login in to the ovirt web interface I
>> can't find the hosted engine under the VM tab (also the node01 server).
>>
>> So, I have tried to add node02 (no problem) and added the "Data Domain"
>> storage (another FC share storage).
>>
>> Now (I Think) I needs to deploy also here the hosted engine but I can't.
>> There is only one way, to put node02 in maintenance mode but, after that,
>> "Data Domain" added going down and, when I try to deploy the hosted engine
>> interface says:
>>
>>
>> Error while executing action:
>>
>> node02:
>> Cannot edit Host. You are using an unmanaged hosted engine VM. Please add
>> the first storage domain in order to start the hosted engine import process.
>>
>> what is this "first storage domain"?
>>
>> I have not added the node01 yet because last time hosted engine crashed
>> and the only way was to restart the installation 
>>
>>
>> Note: Hosted engine seems to be correcty deployed because all work good,
>> but deploy script ended with error:
>>
>> [ INFO  ] Engine-setup successfully completed
>> [ INFO  ] Engine is still unreachable
>> [ INFO  ] Engine is still not reachable, waiting...
>> [more of these messages]
>> [ ERROR ] Engine is still not reachable
>> [ ERROR ] Failed to execute stage 'Closing up': Engine is still not
>> reachable
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-
>> setup/answers/answers-20170123113113.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
>> please check the issue,fix and redeploy
>>   Log file is located at /var/log/ovirt-hosted-engine-s
>> etup/ovirt-hosted-engine-setup-20170123105043-kvzx84.log
>>
>> Is related to this bug? https://bugzilla.redhat.com/sh
>> ow_bug.cgi?id=1415822
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
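For reference, the status checks suggested above can be run directly on node01 (a minimal sketch; run as root):

hosted-engine --check-deployed   # reports whether a hosted-engine deployment exists on this host
hosted-engine --vm-status        # shows the engine VM and HA agent state on the deployed hosts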


-- 
Linux User: 369739 http://counter.li.org


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-20 Thread yayo (j)
Hi,

Thank you for the answer, and sorry for the delay:

2017-07-19 16:55 GMT+02:00 Ravishankar N :

1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain anything about these files?
>

No, glustershd.log is clean; no extra log entries appear after running the command on any of the 3 nodes.


> 2. Are these 12 files also present in the 3rd data brick?
>

I've just checked: all the files exist on all 3 nodes.
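One of the entries from the heal output can be checked per node like this (a sketch; the brick path and shard name are the ones appearing in this thread):

ls -l /gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
getfattr -d -m . -e hex /gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48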


> 3. Can you provide the output of `gluster volume info` for the this volume?
>


*Volume Name: engine*
*Type: Replicate*
*Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515*
*Status: Started*
*Snapshot Count: 0*
*Number of Bricks: 1 x 3 = 3*
*Transport-type: tcp*
*Bricks:*
*Brick1: node01:/gluster/engine/brick*
*Brick2: node02:/gluster/engine/brick*
*Brick3: node04:/gluster/engine/brick*
*Options Reconfigured:*
*nfs.disable: on*
*performance.readdir-ahead: on*
*transport.address-family: inet*
*storage.owner-uid: 36*
*performance.quick-read: off*
*performance.read-ahead: off*
*performance.io-cache: off*
*performance.stat-prefetch: off*
*performance.low-prio-threads: 32*
*network.remote-dio: off*
*cluster.eager-lock: enable*
*cluster.quorum-type: auto*
*cluster.server-quorum-type: server*
*cluster.data-self-heal-algorithm: full*
*cluster.locking-scheme: granular*
*cluster.shd-max-threads: 8*
*cluster.shd-wait-qlength: 1*
*features.shard: on*
*user.cifs: off*
*storage.owner-gid: 36*
*features.shard-block-size: 512MB*
*network.ping-timeout: 30*
*performance.strict-o-direct: on*
*cluster.granular-entry-heal: on*
*auth.allow: **

  server.allow-insecure: on





>
> Some extra info:
>>
>> We have recently changed the gluster from: 2 (full repliacated) + 1
>> arbiter to 3 full replicated cluster
>>
>
> Just curious, how did you do this? `remove-brick` of arbiter brick
> followed by an `add-brick` to increase to replica-3?
>
>
Yes


#gluster volume remove-brick engine replica 2 node03:/gluster/data/brick
force *(OK!)*

#gluster volume heal engine info *(no entries!)*

#gluster volume add-brick engine replica 3 node04:/gluster/engine/brick
*(OK!)*

*After some minutes*

[root@node01 ~]#  gluster volume heal engine info
Brick node01:/gluster/engine/brick
Status: Connected
Number of entries: 0

Brick node02:/gluster/engine/brick
Status: Connected
Number of entries: 0

Brick node04:/gluster/engine/brick
Status: Connected
Number of entries: 0



> Thanks,
> Ravi
>

One more piece of info (I don't know if this can be the problem): five days
ago a blackout suddenly shut down the network switch (including the gluster
network) of node03 and node04... but I don't know whether this problem
started after that blackout.

Thank you!


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-20 Thread yayo (j)
2017-07-20 11:34 GMT+02:00 Ravishankar N :

>
> Could you check if the self-heal daemon on all nodes is connected to the 3
> bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume start
> engine force`, then launch the heal command like you did earlier and see if
> heals happen.
>
>
I've executed the command on all 3 nodes (I know one is enough); after that
the "heal" command reports between 6 and 10 entries (sometimes 6, sometimes
8, sometimes 10).
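For reference, the check/restart cycle suggested above looks roughly like this (a sketch; the volume name is the one used in this thread):

grep -i connect /var/log/glusterfs/glustershd.log | tail -n 20   # look for brick connect/disconnect messages
gluster volume start engine force    # restarts the self-heal daemon without touching the bricks
gluster volume heal engine           # trigger an index heal
gluster volume heal engine info      # list entries still pending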


glustershd.log doesn't report anything unusual:

*[2017-07-20 09:58:46.573079] I [MSGID: 108026]
[afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327.
sources=[0] 1  sinks=2*
*[2017-07-20 09:59:22.995003] I [MSGID: 108026]
[afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
0-engine-replicate-0: performing metadata selfheal on
f05b9742-2771-484a-85fc-5b6974bcef81*
*[2017-07-20 09:59:22.999372] I [MSGID: 108026]
[afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
sources=[0] 1  sinks=2*




> If it doesn't, please provide the getfattr outputs of the 12 files from
> all 3 nodes using `getfattr -d -m . -e hex */gluster/engine/brick/*
> path-to-file` ?
>
>
*NODE01:*
*getfattr: Removing leading '/' from absolute path names*
*# file:
gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68*
*trusted.afr.dirty=0x*
*trusted.afr.engine-client-1=0x*
*trusted.afr.engine-client-2=0x0012*
*trusted.bit-rot.version=0x090059647d5b000447e9*
*trusted.gfid=0xe3565b5014954e5bae883bceca47b7d9*

*getfattr: Removing leading '/' from absolute path names*
*# file:
gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48*
*trusted.afr.dirty=0x*
*trusted.afr.engine-client-1=0x*
*trusted.afr.engine-client-2=0x000e*
*trusted.bit-rot.version=0x090059647d5b000447e9*
*trusted.gfid=0x676067891f344c1586b8c0d05b07f187*

*getfattr: Removing leading '/' from absolute path names*
*# file:
gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01*
*trusted.afr.dirty=0x*
*trusted.afr.engine-client-1=0x*
*trusted.afr.engine-client-2=0x0055*
*trusted.bit-rot.version=0x090059647d5b000447e9*
*trusted.gfid=0x8aa745646740403ead51f56d9ca5d7a7*
*trusted.glusterfs.shard.block-size=0x2000*
*trusted.glusterfs.shard.file-size=0x000c80d4f229*

*getfattr: Removing leading '/' from absolute path names*
*# file:
gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60*
*trusted.afr.dirty=0x*
*trusted.afr.engine-client-1=0x*
*trusted.afr.engine-client-2=0x0007*
*trusted.bit-rot.version=0x090059647d5b000447e9*
*trusted.gfid=0x4e33ac33dddb4e29b4a351770b81166a*

*getfattr: Removing leading '/' from absolute path names*
*# file:
gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids*
*trusted.afr.dirty=0x*
*trusted.afr.engine-client-1=0x*
*trusted.afr.engine-client-2=0x*
*trusted.bit-rot.version=0x0f0059647d5b000447e9*
*trusted.gfid=0x2581cb9ac2b74bd9ac17a09bd2f001b3*
*trusted.glusterfs.shard.block-size=0x2000*
*trusted.glusterfs.shard.file-size=0x00100800*

*getfattr: Removing leading '/' from absolute path names*
*# file: gluster/engine/brick/__DIRECT_IO_TEST__*
*trusted.afr.dirty=0x*
*trusted.afr.engine-client-1=0x*
*trusted.afr.engine-client-2=0x*
*trusted.gfid=0xf05b97422771484a85fc5b6974bcef81*
*trusted.glusterfs.shard.block-size=0x2000*
*trusted.glusterfs.shard.file-size=0x*

*getfattr: Removing leading '/' from absolute path names*
*# file:
gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6*
*trusted.afr.dirty=0x*
*trusted.afr.engine-client-1=0x*
*trusted.afr.engine-client-2=0x0001*
*trusted.bit-rot.version=0x0f0059647d5b000447e9*
*trusted.gfid=0xe6dfd556340b4b76b47b7b6f5bd74327*
*trusted.glusterfs.shard.block-size=0x2000*
*trusted.glusterfs.shard.file-size=0x00100800*

*getfattr: Removing leading '/' from absolute 

[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-19 Thread yayo (j)
Hi all,

We have a hyperconverged oVirt cluster with hosted engine on 3 fully
replicated nodes. This cluster has 2 gluster volumes:

- data: volume for the Data (Master) Domain (for VMs)
- engine: volume for the hosted_storage Domain (for the hosted engine)

We have this problem: the "engine" gluster volume always has unsynced
elements and we can't fix the problem; on the command line we have tried the
"heal" command, but the elements always remain unsynced.

Below is the output of the heal command:

[root@node01 ~]# gluster volume heal engine info
Brick node01:/gluster/engine/brick
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.2
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.61
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.1
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.20
/__DIRECT_IO_TEST__
Status: Connected
Number of entries: 12

Brick node02:/gluster/engine/brick
/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01

/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids



/__DIRECT_IO_TEST__


/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6


Status: Connected
Number of entries: 12

Brick node04:/gluster/engine/brick
Status: Connected
Number of entries: 0



Running "gluster volume heal engine" doesn't solve the problem...

Some extra info:

We have recently changed the gluster setup from 2 (fully replicated) + 1
arbiter to a 3-node fully replicated cluster, but I don't know whether this
is the cause...

The "data" volume is good and healthy and has no unsynced entries.

oVirt refuses to put node01 and node02 into "maintenance mode" and complains
about "unsynced elements".

How can I fix this?
Thank you
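For anyone hitting the same state, the commands typically used to investigate it look like this (a sketch; the volume name matches the output above):

gluster volume status engine         # confirm all brick and self-heal daemon processes are online
gluster volume heal engine           # trigger an index heal
gluster volume heal engine info      # list entries still pending heal
tail -n 50 /var/log/glusterfs/glustershd.log   # see what the self-heal daemon is doing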


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-21 Thread yayo (j)
2017-07-20 14:48 GMT+02:00 Ravishankar N :

>
> But it does  say something. All these gfids of completed heals in the log
> below are the for the ones that you have given the getfattr output of. So
> what is likely happening is there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
> after the heal gets completed, which is why the numbers are varying each
> time. You would need to check why that is the case.
> Hope this helps,
> Ravi
>
>
>
> *[2017-07-20 09:58:46.573079] I [MSGID: 108026]
> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
> Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327.
> sources=[0] 1  sinks=2*
> *[2017-07-20 09:59:22.995003] I [MSGID: 108026]
> [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
> 0-engine-replicate-0: performing metadata selfheal on
> f05b9742-2771-484a-85fc-5b6974bcef81*
> *[2017-07-20 09:59:22.999372] I [MSGID: 108026]
> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
> Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
> sources=[0] 1  sinks=2*
>
>

Hi,

But we have 2 gluster volumes on the same network, and the other one (the
"data" volume) doesn't have any problems. Why do you think there is a network
problem? How can I check this on a gluster infrastructure?

Thank you


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-21 Thread yayo (j)
Hi,

Sorry to follow up again, but checking the oVirt interface I've found that
oVirt reports the "engine" volume as an "arbiter" configuration and the
"data" volume as a fully replicated volume. See these screenshots:

https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing

But the "gluster volume info" command reports that both volumes are fully
replicated:


*Volume Name: data*
*Type: Replicate*
*Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d*
*Status: Started*
*Snapshot Count: 0*
*Number of Bricks: 1 x 3 = 3*
*Transport-type: tcp*
*Bricks:*
*Brick1: gdnode01:/gluster/data/brick*
*Brick2: gdnode02:/gluster/data/brick*
*Brick3: gdnode04:/gluster/data/brick*
*Options Reconfigured:*
*nfs.disable: on*
*performance.readdir-ahead: on*
*transport.address-family: inet*
*storage.owner-uid: 36*
*performance.quick-read: off*
*performance.read-ahead: off*
*performance.io-cache: off*
*performance.stat-prefetch: off*
*performance.low-prio-threads: 32*
*network.remote-dio: enable*
*cluster.eager-lock: enable*
*cluster.quorum-type: auto*
*cluster.server-quorum-type: server*
*cluster.data-self-heal-algorithm: full*
*cluster.locking-scheme: granular*
*cluster.shd-max-threads: 8*
*cluster.shd-wait-qlength: 1*
*features.shard: on*
*user.cifs: off*
*storage.owner-gid: 36*
*features.shard-block-size: 512MB*
*network.ping-timeout: 30*
*performance.strict-o-direct: on*
*cluster.granular-entry-heal: on*
*auth.allow: **
*server.allow-insecure: on*





*Volume Name: engine*
*Type: Replicate*
*Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515*
*Status: Started*
*Snapshot Count: 0*
*Number of Bricks: 1 x 3 = 3*
*Transport-type: tcp*
*Bricks:*
*Brick1: gdnode01:/gluster/engine/brick*
*Brick2: gdnode02:/gluster/engine/brick*
*Brick3: gdnode04:/gluster/engine/brick*
*Options Reconfigured:*
*nfs.disable: on*
*performance.readdir-ahead: on*
*transport.address-family: inet*
*storage.owner-uid: 36*
*performance.quick-read: off*
*performance.read-ahead: off*
*performance.io-cache: off*
*performance.stat-prefetch: off*
*performance.low-prio-threads: 32*
*network.remote-dio: off*
*cluster.eager-lock: enable*
*cluster.quorum-type: auto*
*cluster.server-quorum-type: server*
*cluster.data-self-heal-algorithm: full*
*cluster.locking-scheme: granular*
*cluster.shd-max-threads: 8*
*cluster.shd-wait-qlength: 1*
*features.shard: on*
*user.cifs: off*
*storage.owner-gid: 36*
*features.shard-block-size: 512MB*
*network.ping-timeout: 30*
*performance.strict-o-direct: on*
*cluster.granular-entry-heal: on*
*auth.allow: **

  server.allow-insecure: on


2017-07-21 19:13 GMT+02:00 yayo (j) <jag...@gmail.com>:

> 2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishan...@redhat.com>:
>
>>
>> But it does  say something. All these gfids of completed heals in the log
>> below are the for the ones that you have given the getfattr output of. So
>> what is likely happening is there is an intermittent connection problem
>> between your mount and the brick process, leading to pending heals again
>> after the heal gets completed, which is why the numbers are varying each
>> time. You would need to check why that is the case.
>> Hope this helps,
>> Ravi
>>
>>
>>
>> *[2017-07-20 09:58:46.573079] I [MSGID: 108026]
>> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
>> Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327.
>> sources=[0] 1  sinks=2*
>> *[2017-07-20 09:59:22.995003] I [MSGID: 108026]
>> [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
>> 0-engine-replicate-0: performing metadata selfheal on
>> f05b9742-2771-484a-85fc-5b6974bcef81*
>> *[2017-07-20 09:59:22.999372] I [MSGID: 108026]
>> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
>> Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
>> sources=[0] 1  sinks=2*
>>
>>
>
> Hi,
>
> following your suggestion, I've checked the "peer" status and I found that
> there is too many name for the hosts, I don't know if this can be the
> problem or part of it:
>
> *gluster peer status on NODE01:*
> *Number of Peers: 2*
>
> *Hostname: dnode02.localdomain.local*
> *Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd*
> *State: Peer in Cluster (Connected)*
> *Other names:*
> *192.168.10.52*
> *dnode02.localdomain.local*
> *10.10.20.90*
> *10.10.10.20*
>
>
>
>
> *gluster peer status on NODE02:*
> *Number of Peers: 2*
>
> *Hostname: dnode01.localdomain.local*
> *Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12*
> *State: Peer in Cluster (Connected)*
> *Other names:*
> *gdnode01*
> *10.10.10.10*
>
> *Hostname: gdnode04*
> *Uuid: ce6e0f6b-12cf-4e40-8f01-d1609dfc5828*
> *State: Peer in Cluster (Connected)*

Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-21 Thread yayo (j)
2017-07-20 14:48 GMT+02:00 Ravishankar N :

>
> But it does  say something. All these gfids of completed heals in the log
> below are the for the ones that you have given the getfattr output of. So
> what is likely happening is there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
> after the heal gets completed, which is why the numbers are varying each
> time. You would need to check why that is the case.
> Hope this helps,
> Ravi
>
>
>
> *[2017-07-20 09:58:46.573079] I [MSGID: 108026]
> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
> Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327.
> sources=[0] 1  sinks=2*
> *[2017-07-20 09:59:22.995003] I [MSGID: 108026]
> [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
> 0-engine-replicate-0: performing metadata selfheal on
> f05b9742-2771-484a-85fc-5b6974bcef81*
> *[2017-07-20 09:59:22.999372] I [MSGID: 108026]
> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
> Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
> sources=[0] 1  sinks=2*
>
>

Hi,

following your suggestion, I've checked the "peer" status and found that
there are too many names for the hosts; I don't know if this can be the
problem or part of it:

*gluster peer status on NODE01:*
*Number of Peers: 2*

*Hostname: dnode02.localdomain.local*
*Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd*
*State: Peer in Cluster (Connected)*
*Other names:*
*192.168.10.52*
*dnode02.localdomain.local*
*10.10.20.90*
*10.10.10.20*




*gluster peer status on NODE02:*
*Number of Peers: 2*

*Hostname: dnode01.localdomain.local*
*Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12*
*State: Peer in Cluster (Connected)*
*Other names:*
*gdnode01*
*10.10.10.10*

*Hostname: gdnode04*
*Uuid: ce6e0f6b-12cf-4e40-8f01-d1609dfc5828*
*State: Peer in Cluster (Connected)*
*Other names:*
*192.168.10.54*
*10.10.10.40*


*gluster peer status on NODE04:*
*Number of Peers: 2*

*Hostname: dnode02.neridom.dom*
*Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd*
*State: Peer in Cluster (Connected)*
*Other names:*
*10.10.20.90*
*gdnode02*
*192.168.10.52*
*10.10.10.20*

*Hostname: dnode01.localdomain.local*
*Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12*
*State: Peer in Cluster (Connected)*
*Other names:*
*gdnode01*
*10.10.10.10*



All these IPs are pingable and the hosts are resolvable from all 3 nodes, but
only the 10.10.10.0 network is the dedicated gluster network (resolved using
the gdnode* host names)... Do you think removing the other entries could fix
the problem? If so, sorry, but how can I remove the other entries?
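For reference, the names and addresses glusterd has recorded, and whether the dedicated names resolve consistently, can be listed on each node like this (a sketch; host names as used in this cluster):

gluster pool list      # compact summary of the trusted pool
gluster peer status    # includes the "Other names" entries shown above
getent hosts gdnode01 gdnode02 gdnode04   # confirm the dedicated gluster names resolve to the 10.10.10.0 network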

And what about SELinux?

Thank you


Re: [ovirt-users] [Gluster-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-25 Thread yayo (j)
2017-07-25 11:31 GMT+02:00 Sahina Bose :

>
>> Other errors on unsync gluster elements still remain... This is a
>> production env, so, there is any chance to subscribe to RH support?
>>
>
> The unsynced entries - did you check for disconnect messages in the mount
> log as suggested by Ravi?
>
>
Hi, I have already provided this (see previous mails):

tail -f /var/log/glusterfs/rhev-data-center-mnt-glusterSD-dvirtgluster\:engine.log

Is that enough?

Thank you


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-25 Thread yayo (j)
2017-07-25 7:42 GMT+02:00 Kasturi Narra :

> These errors are because not having glusternw assigned to the correct
> interface. Once you attach that these errors should go away.  This has
> nothing to do with the problem you are seeing.
>

Hi,

Are you talking about errors like these?

2017-07-24 15:54:02,209+02 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515'
with correct network as no gluster network found in cluster '0002-0002-0002-0002-017a'


How do I assign "glusternw" to the correct interface?

The other errors about unsynced gluster elements still remain... This is a
production environment, so is there any chance to subscribe to RH support?

Thank you


Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-24 Thread yayo (j)
Hi,

I refreshed the UI but the problem still remains...

There is no specific error; I only see the errors below, but I've read that
this kind of error is not a problem:


2017-07-24 15:53:59,823+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] START,
GlusterServersListVDSCommand(HostName
= node01.localdomain.local, VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417
2017-07-24 15:54:01,066+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] FINISH, GlusterServersListVDSCommand,
return: [10.10.20.80/24:CONNECTED, node02.localdomain.local:CONNECTED,
gdnode04:CONNECTED], log id: 29a62417
2017-07-24 15:54:01,076+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] START,
GlusterVolumesListVDSCommand(HostName
= node01.localdomain.local, GlusterVolumesListVDSParameters:{runAsync='true',
hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3
2017-07-24 15:54:02,209+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-
4f2a23d17515' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,212+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode02:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-
4f2a23d17515' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,215+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode04:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-
4f2a23d17515' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,218+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-
c8275d4a7c2d' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,221+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode02:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-
c8275d4a7c2d' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,224+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode04:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-
c8275d4a7c2d' with correct network as no gluster network found in cluster
'0002-0002-0002-0002-017a'
2017-07-24 15:54:02,224+02 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler2) [b7590c4] FINISH, GlusterVolumesListVDSCommand,
return: {d19c19e3-910d-437b-8ba7-4f2a23d17515=org.ovirt.engine.core.
common.businessentities.gluster.GlusterVolumeEntity@fdc91062, c7a5dfc9-3e72
-4ea1-843e-c8275d4a7c2d=org.ovirt.engine.core.common.businessentities.
gluster.GlusterVolumeEntity@999a6f23}, log id: 7fce25d3


Thank you


2017-07-24 8:12 GMT+02:00 Kasturi Narra <kna...@redhat.com>:

> Hi,
>
>Regarding the UI showing incorrect information about engine and data
> volumes, can you please refresh the UI and see if the issue persists  plus
> any errors in the engine.log files ?
>
> Thanks
> kasturi
>
> On Sat, Jul 22, 2017 at 11:43 AM, Ravishankar N <ravishan...@redhat.com>
> wrote:
>
>>
>> On 07/21/2017 11:41 PM, yayo (j) wrote:
>>
>> Hi,
>>
>> Sorry for follow up again, but, checking the ovirt interface I've found
>> that ovirt report the "engine" volume as an "arbiter" configuration and the
>> "data" volume as full replicated volume. Check these screenshots:
>>
>>
>> This is probably some refresh bug in the UI, Sahina might be able to tell
>> you.
>>
>>
>> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFf
>> VmR5aDQ?usp=sharing
>>
>> But the "gluster volume info" command report that all 2 volume are full
>> replicated:
>>
>>
>> *Volume Name: data*
>> *Type: Replicate*
>> *Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d*
>> *Status: Started*
>> *Snapshot Count: 0*
>> *Number of Bricks: 1 x 3 = 3*
>> *Transpor

Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-07-24 Thread yayo (j)
>
> All these ip are pingable and hosts resolvible across all 3 nodes but,
>> only the 10.10.10.0 network is the decidated network for gluster  (rosolved
>> using gdnode* host names) ... You think that remove other entries can fix
>> the problem? So, sorry, but, how can I remove other entries?
>>
> I don't think having extra entries could be a problem. Did you check the
> fuse mount logs for disconnect messages that I referred to in the other
> email?
>



* tail -f
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-dvirtgluster\:engine.log*

*NODE01:*


[2017-07-24 07:34:00.799347] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 07:44:46.687334] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers
[2017-07-24 09:04:25.951350] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 09:15:11.839357] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers
[2017-07-24 10:34:51.231353] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 10:45:36.991321] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers
[2017-07-24 12:05:16.383323] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 12:16:02.271320] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers
[2017-07-24 13:35:41.535308] E [glusterfsd-mgmt.c:1908:mgmt_rpc_notify] 0
-glusterfsd-mgmt: failed to connect with remote-host: gdnode03 (Transport
endpoint is not connected)
[2017-07-24 13:46:27.423304] I [glusterfsd-mgmt.c:1926:mgmt_rpc_notify] 0
-glusterfsd-mgmt: Exhausted all volfile servers



Why gdnode03 again? It was removed from gluster! It was the arbiter node...
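One way to see where the stale gdnode03 reference comes from is to check which volfile servers the engine-domain mount was started with (a sketch; the config path is the usual hosted-engine default, so verify it on your hosts):

ps -ef | grep '[g]lusterfs' | grep engine     # command line of the running gluster mount, including volfile servers
grep mnt_options /etc/ovirt-hosted-engine/hosted-engine.conf   # backup-volfile-servers may still list the removed node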


*NODE02:*


[2017-07-24 14:08:18.709209] I [MSGID: 108026] [
afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on db56ac00-fd5b-4326-a879-326ff56181de. sources=0 [
1]  sinks=2
[2017-07-24 14:08:38.746688] I [MSGID: 108026] [
afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
0-engine-replicate-0: performing metadata selfheal on
f05b9742-2771-484a-85fc-5b6974bcef81
[2017-07-24 14:08:38.749379] I [MSGID: 108026] [
afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
sources=0 [1]  sinks=2
[2017-07-24 14:08:46.068001] I [MSGID: 108026] [
afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on db56ac00-fd5b-4326-a879-326ff56181de. sources=0 [
1]  sinks=2
The message "I [MSGID: 108026] [
afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
0-engine-replicate-0: performing metadata selfheal on
f05b9742-2771-484a-85fc-5b6974bcef81" repeated 3 times between [2017-07-24
14:08:38.746688] and [2017-07-24 14:10:09.088625]
The message "I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal]
0-engine-replicate-0: Completed metadata selfheal on
f05b9742-2771-484a-85fc-5b6974bcef81. sources=0 [1]  sinks=2 " repeated 3
times between [2017-07-24 14:08:38.749379] and [2017-07-24 14:10:09.091377]
[2017-07-24 14:10:19.384379] I [MSGID: 108026]
[afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on db56ac00-fd5b-4326-a879-326ff56181de. sources=0
[1]  sinks=2
[2017-07-24 14:10:39.433155] I [MSGID: 108026] [afr-self-heal-metadata.c:51:
__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata
selfheal on f05b9742-2771-484a-85fc-5b6974bcef81
[2017-07-24 14:10:39.435847] I [MSGID: 108026]
[afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
sources=0 [1]  sinks=2



*NODE04:*


[2017-07-24 14:08:56.789598] I [MSGID: 108026] [afr-self-heal-common.c:1254
:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on
e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1  sinks=2
[2017-07-24 14:09:17.231987] I [MSGID: 108026] [afr-self-heal-common.c:1254
:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on db56ac00
-fd5b-4326-a879-326ff56181de. sources=[0] 1  sinks=2
[2017-07-24 14:09:38.039541] I [MSGID: 108026] [afr-self-heal-common.c:1254
:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on
e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1  sinks=2
[2017-07-24 14:09:48.875602] I [MSGID: 108026] [afr-self-heal-common.c:1254
:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on db56ac00
-fd5b-4326-a879-326ff56181de. sources=[0] 1  sinks=2
[2017-07-24 14:10:39.832068] I [MSGID: 108026] 

[ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread yayo (j)
Hi all,

we have a 3-node cluster with this configuration:

oVirt 4.1, with 3 nodes hyperconverged with gluster; 2 nodes are "fully
replicated" and 1 node is the arbiter.

Now we have a new server to add to the cluster, and we want to add this new
server and remove the arbiter (or make this new server a "fully replicated"
gluster node and move the arbiter role to it? I'm not sure).

Can you please tell me the right way to do this, or point me to any doc or
link that explains the steps?

Thank you in advance!


Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread yayo (j)
2017-06-30 11:01 GMT+02:00 knarra :

> You do not need to remove the arbiter node as you are getting the
> advantage of saving on space by having this config.
>
> Since you have a new you can add this as fourth node and create another
> gluster volume (replica 3) out of this node plus the other two nodes and
> run vm images there as well.
>

Hi,

And thanks for the answer. The current arbiter must be removed because it is
too obsolete. So I need to add the new "fully replicated" node, but I want to
know the steps for adding a new "fully replicated" node and removing the
arbiter node (also a way to move the arbiter role to the new node, if
needed). Extra info: I want to know if I can do this on an existing oVirt
gluster data domain (called Data01), because we have many VMs running on it.

Thank you.


Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread yayo (j)
2017-06-30 12:54 GMT+02:00 yayo (j) <jag...@gmail.com>:

> The actual arbiter must be removed because is too obsolete. So, I needs to
> add the new "full replicated" node but I want to know what are the steps
> for add a new "full replicated" node and remove the arbiter node (Also a
> way to move the arbiter role to the new node, If needed) . Extra info: I
> want to know if I can do this on an existing ovirt gluster Data Domain
> (called Data01) because we have many vm runnig on it.
>

Hi,

I have found this doc from RH about replacing a host in a gluster environment:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Replacing_Hosts.html

Can I use the command described at point 7?


# *gluster volume replace-brick vol sys0.example.com:/rhs/brick1/b1
sys5.example.com:/rhs/brick1/b1 commit force*
*volume replace-brick: success: replace-brick commit successful*


The question is: will the replaced node be a data node (a "fully replicated"
node), or will it be an arbiter again?

Thank you


Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-07-03 Thread yayo (j)
Hi,

And sorry for the delay.

2017-06-30 14:09 GMT+02:00 knarra :

> To add a fully replicated node  you need to reduce the replica count to 2
> and add new brick to the volume so that it becomes replica 3. Reducing
> replica count by removing a brick from replica / arbiter cannot be done
> from UI currently and this has to be done using gluster CLI.
>  AFAIR, there was an issue where vm's were going to paused state when
> reducing the replica count and increasing it to 3. Not sure if this still
> holds good with the latest release.
>
> Any specific reason why you want to move to full replication instead of
> using an arbiter node ?
>
>
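The CLI procedure described above maps roughly to the following (a sketch; the volume name and brick paths are placeholders):

gluster volume remove-brick <volname> replica 2 <arbiter-host>:<arbiter-brick-path> force
gluster volume add-brick <volname> replica 3 <new-host>:<new-brick-path>
gluster volume heal <volname> info   # confirm nothing is left pending after the change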
We have a new server with the same hard disk size as the other two nodes, so
why not? Why join the cluster as an arbiter when we have the same disk
capacity and can add full replication?



>
> and remove the arbiter node (Also a way to move the arbiter role to the
> new node, If needed)
>
> To move arbiter role to a new node you can move the node to maintenance ,
> add  new node and replace  old brick with new brick. You can follow the
> steps below to do that.
>
>
>- Move the node to be replaced into Maintenance mode
>- Prepare the replacement node
>- Prepare bricks on that node.
>- Create replacement brick directories
>- Ensure the new directories are owned by the vdsm user and the kvm
>group.
>- # mkdir /rhgs/bricks/engine
>- # chown vdsm:kvm /rhgs/bricks/engine
>- # mkdir /rhgs/bricks/data
>- # chown vdsm:kvm /rhgs/bricks/data
>- Run the following command from one of the healthy cluster members:
>- # gluster peer probe 
>-  add the new host to the cluster.
>- Add new host address to gluster network
>- Click Network Interfaces sub-tab.
>- Click Set up Host Networks.
>- Drag and drop the glusternw network onto the IP address of the new
>host.
>- Click OK
>- Replace the old brick with the brick on the new host
>- Click the Bricks sub-tab.
>- Verify that brick heal completes successfully.
>- In the Hosts tab, right-click on the old host and click Remove.
>- Clean old host metadata
>- # hosted-engine --clean-metadata --host-id=
>--force-clean
>
>
>
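A consolidated sketch of the shell portion of the steps above (brick paths as used earlier in the thread; the placeholders are mine):

mkdir -p /rhgs/bricks/engine /rhgs/bricks/data        # on the replacement node
chown vdsm:kvm /rhgs/bricks/engine /rhgs/bricks/data  # bricks must be owned by vdsm:kvm for oVirt
gluster peer probe <new-host>                         # from a healthy cluster member
gluster volume heal engine info                       # after the brick replacement, watch heal progress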
Do I still need this (that is, the arbiter role) if I reduce the replica
count and then add the new node as a full replica, increasing the replica
count back to 3 (as you explained above)?

Thank you


Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-07-03 Thread yayo (j)
2017-07-03 15:42 GMT+02:00 knarra :

> So, please poweroff your vms while performing this.


Thank you,

OK, no problem, the cluster is not (yet) in production.

Thank you again!


[ovirt-users] Configure an ovirt 4.1 hosted engine using SAS storage (aka DAS storage)

2017-07-06 Thread yayo (j)
Hi all,

I'm trying to install a new oVirt 4.1 cluster (CentOS 7) configured to use a
SAN that exposes LUNs via SAS. When I start to deploy oVirt and the engine
using "hosted-engine --deploy", the only options I have are:

(glusterfs, iscsi, fc, nfs3, nfs4)

There is no option for "local" storage (which is not actually local, but
multipath devices exposed by the SAN via LUNs).

Can you help me? What is the right configuration?

Thank you


Re: [ovirt-users] Configure an ovirt 4.1 hosted engine using SAS storage (aka DAS storage)

2017-07-06 Thread yayo (j)
Hi,

I have seen that if I choose the "fc" option, the "hosted-engine --deploy"
wizard shows me the available LUNs exposed via SAS:


  Please specify the storage you would like to use (glusterfs,
iscsi, fc, nfs3, nfs4)[nfs3]: fc
  The following luns have been found on the requested target:
[1] 3600a098000afe51200eb5940ba27   2048GiB
NETAPP  INF-01-00
status: used, paths: 2 active

[2] 3600a098000afe57e01045940baa3   2048GiB
NETAPP  INF-01-00
status: used, paths: 2 active
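For reference, the same SAS LUNs can be cross-checked on the host through the multipath layer (a sketch; the exact output depends on your multipath configuration):

multipath -ll
lsblk -o NAME,SIZE,TYPE,WWN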



Can I go forward, or is this not supported at all?

Thank you


2017-07-06 11:43 GMT+02:00 yayo (j) <jag...@gmail.com>:

>
> Hi all,
>
> I'm tring to install a new cluster ovirt 4.1 (Centos 7) configured to use
> a SAN that expose LUN via SAS . When I start to deploy ovirt and the engine
> using "hosted-engine --deploy" the only options I have are:
>
> (glusterfs, iscsi, fc, nfs3, nfs4)
>
> There is no option for "local" storage (that are not local naturally, but
> are multipath device exposed by SAN via LUN)
>
> Can you help me? What is the right configuration?
>
> Thank you
>



-- 
Linux User: 369739 http://counter.li.org


Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread yayo (j)
>
> Agreed.
>>
>> Open a bug with Zentyal. They broke the packages from Ubuntu and should
>> fix it themselves. They have to backport newer version of python-apt.
>> The one from yakkety (1.1.0~beta5) should be good enough to fix the
>> problem.
>>
>> In the bug report note that the ovirt-guest-agent from Ubuntu repository
>> cannot be installed. It is not only related to the package from the
>> private repo.
>
>
>
> Ok, Thank you!
>


Done: https://tracker.zentyal.org/issues/5279

Thank you again.


Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread yayo (j)
>
> This is the problem!
>
> I looked at the packages for conflict and figured the issue is in gnpug.
> Zentyal repository contains gnupg version 2.1.15-1ubuntu6 which breaks
> python-apt <= 1.1.0~beta4.
>
>
OK, thank you! Is there any workaround (something like package pinning?) to
fix this problem?


>
> > And this is a BIG problem ... Can I open a bug ? Where?
>
> Agreed.
>
> Open a bug with Zentyal. They broke the packages from Ubuntu and should
> fix it themselves. They have to backport newer version of python-apt.
> The one from yakkety (1.1.0~beta5) should be good enough to fix the
> problem.
>
> In the bug report note that the ovirt-guest-agent from Ubuntu repository
> cannot be installed. It is not only related to the package from the
> private repo.



Ok, Thank you!


[ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-04 Thread yayo (j)
Hi all,

I have this problem: I'm trying to install the guest tools following this
guide:
https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-ubuntu/#for-ubuntu-1604

These are the configured apt sources:

deb http://it.archive.ubuntu.com/ubuntu/ xenial main restricted
deb http://it.archive.ubuntu.com/ubuntu/ xenial-updates main restricted
deb http://it.archive.ubuntu.com/ubuntu/ xenial universe
deb http://it.archive.ubuntu.com/ubuntu/ xenial-updates universe
deb http://it.archive.ubuntu.com/ubuntu/ xenial multiverse
deb http://it.archive.ubuntu.com/ubuntu/ xenial-updates multiverse
deb http://it.archive.ubuntu.com/ubuntu/ xenial-backports main restricted
universe multiverse

deb http://archive.zentyal.org/zentyal 5.0 main extra

deb http://security.ubuntu.com/ubuntu xenial-security main restricted
deb http://security.ubuntu.com/ubuntu xenial-security universe
deb http://security.ubuntu.com/ubuntu xenial-security multiverse


But when I try to install, I get this error:


root@vmdczen01:~# apt-get install ovirt-guest-agent
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ovirt-guest-agent : Depends: python-apt but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
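For diagnosing this kind of "but it is not going to be installed" message, these read-only queries usually show which repository or version is blocking the install (a sketch):

apt-cache policy python-apt           # versions and repositories apt can see for the blocked package
apt-cache depends ovirt-guest-agent   # the dependency chain that pulls in python-apt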


I've found that python-apt is in the "main" repo:


https://packages.ubuntu.com/xenial/amd64/python-apt/download


So, what's the problem?


Can you help me?
Thank you


Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread yayo (j)
Hi,


I just tried that with development version of Zentyal and it works for
> me. Well, there are some caveats, see below.
>
>
Please provide the steps, not just "works for me"... Thank you.



> > Just wanted to add my input.  I just recently noticed the same thing.
> > Luckily i was just testing Zentyal, but when I installed python-apt after
> > reading the error message, apt seemed to completely break.  I would be
> > curious on a workaround/fix for this as well.
>
> Could you be more specific? What was the problem? Was it problem with
> python-apt per-se or with ovirt-guest-agent using python-apt?
>
>
In the past with Zentyal 5 Dev Edition I had the same error: I added the
suggested repository, which wanted to install "python-apt" and remove
"apt-get" (because of conflicts).



>
> >
> >
> > On Fri, Aug 4, 2017 at 9:28 AM, yayo (j) <jag...@gmail.com> wrote:
> >
> > > Hi all,
> > >
> > > I have this problem: I'm tring to install the guest tools following
> this
> > > guide: https://www.ovirt.org/documentation/how-to/guest-
> > > agent/install-the-guest-agent-in-ubuntu/#for-ubuntu-1604
>
> I've noticed that the ovirt-guest-agent package available in the
> repository mentioned on the oVirt site is missing dependency on
> qemu-guest-agent. You have to install it additionally for oVirt to work
> properly.
>


*Steps with repository:*
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/

*Update:*


root@vmdczen01:~# apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://it.archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://it.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:4 http://it.archive.ubuntu.com/ubuntu xenial-backports InRelease [102
kB]
Hit:5 http://archive.zentyal.org/zentyal 5.0 InRelease
Ign:6
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04
InRelease
Hit:7
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04
Release
Fetched 306 kB in 0s (325 kB/s)
Reading package lists... Done



*Trying to install:*


root@vmdczen01:~# apt-get install ovirt-guest-agent
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 ovirt-guest-agent : Depends: python-apt but it is not going to be installed
E: Unable to correct problems, you have held broken packages.


*Zentyal version is:* 5.0.8

So, this repository is *totally broken*



>
> If you, however, instal the ovirt-guest-agent from official Ubuntu
> repository there is different issue. There is this unresolved bug:
>
> https://bugs.launchpad.net/ubuntu/+source/ovirt-guest-agent/+bug/1609130
>
> You have to fix permissions on /var/log/ovirt-guest-agent as mentioned
> in the bug report.
>
>
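The permissions fix referred to above is typically along these lines (a sketch; the ovirtagent user and group are an assumption taken from the bug report, so verify them on your system):

chown -R ovirtagent:ovirtagent /var/log/ovirt-guest-agent   # assumed user/group, check the bug report
systemctl restart ovirt-guest-agent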

The problem is the same: if you remove the extra repository and then try to
use the "main" repository, you still have the python-apt problem.


Extra tests:

I have tried to install "python-apt" directly and I can reproduce the
problem mentioned by Stewart:


Update (Check that extra repository is commented out):


root@vmdczen01:~# apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Get:2 http://archive.zentyal.org/zentyal 5.0 InRelease [4,887 B]
Get:3 http://it.archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:4 http://archive.zentyal.org/zentyal 5.0/main amd64 Packages [28.1 kB]
Get:5 http://archive.zentyal.org/zentyal 5.0/main i386 Packages [6,218 B]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
[325 kB]
Get:7 http://it.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:8 http://it.archive.ubuntu.com/ubuntu xenial-backports InRelease [102
kB]
Get:9 http://it.archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1,201
kB]
Get:10 http://it.archive.ubuntu.com/ubuntu xenial/main i386 Packages [1,196
kB]
Get:11 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages
[306 kB]
Get:12 http://it.archive.ubuntu.com/ubuntu xenial/main Translation-en [568
kB]
Get:13 http://it.archive.ubuntu.com/ubuntu xenial/universe amd64 Packages
[7,532 kB]
Get:14 http://it.archive.ubuntu.com/ubuntu xenial/universe i386 Packages
[7,512 kB]
Get:15 http://it.archive.ubuntu.com/ubuntu xenial/universe Translation-en
[4,354 kB]
Get:16 http://it.archive.ubuntu.com/ubuntu xenial/multiverse amd64 Packages
[144 kB]
Get:17 http://it.archive.ubuntu.com/ubun

Re: [ovirt-users] ovirt 4.1 : Can't install guest tools for Zentyal 5.0 (based on ubuntu 16.04 xenial)

2017-08-07 Thread yayo (j)
>
>
>
> Better workaround is to download newer python-apt from here [1] and
> install it manually with dpkg. The guest agent seems to work OK with it.
> There's much less chance of breaking something else, also newer
> python-apt will be picked-up automatically on upgrades.
>
> Tomas
>
>
> [1] https://packages.ubuntu.com/yakkety/python-apt
>
>

Thank you, this fixed the problem!


root@vmdczen01:~# dpkg -i python-apt_1.1.0~beta5_amd64.deb
Selecting previously unselected package python-apt.
(Reading database ... 155994 files and directories currently installed.)
Preparing to unpack python-apt_1.1.0~beta5_amd64.deb ...
Unpacking python-apt (1.1.0~beta5) ...
dpkg: dependency problems prevent configuration of python-apt:
 python-apt depends on dirmngr | gnupg (<< 2); however:
  Package dirmngr is not installed.
  Version of gnupg on system is 2.1.15-1ubuntu6.

dpkg: error processing package python-apt (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 python-apt

root@vmdczen01:~# apt-get -f install
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer
required:
  linux-headers-4.4.0-83 linux-headers-4.4.0-83-generic
linux-image-4.4.0-83-generic linux-image-extra-4.4.0-83-generic
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  dirmngr
The following NEW packages will be installed:
  dirmngr
0 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
1 not fully installed or removed.
Need to get 235 kB of archives.
After this operation, 644 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://it.archive.ubuntu.com/ubuntu xenial/main amd64 dirmngr amd64
2.1.11-6ubuntu2 [235 kB]
Fetched 235 kB in 0s (775 kB/s)
Selecting previously unselected package dirmngr.
(Reading database ... 156021 files and directories currently installed.)
Preparing to unpack .../dirmngr_2.1.11-6ubuntu2_amd64.deb ...
Unpacking dirmngr (2.1.11-6ubuntu2) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up dirmngr (2.1.11-6ubuntu2) ...
Setting up python-apt (1.1.0~beta5) ...

root@vmdczen01:~# apt-get install ovirt-guest-agent
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer
required:
  linux-headers-4.4.0-83 linux-headers-4.4.0-83-generic
linux-image-4.4.0-83-generic linux-image-extra-4.4.0-83-generic
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libnl-route-3-200 python-dbus python-ethtool qemu-guest-agent
Suggested packages:
  python-dbus-doc python-dbus-dbg
Recommended packages:
  python-gi | python-qt4-dbus
The following NEW packages will be installed:
  libnl-route-3-200 ovirt-guest-agent python-dbus python-ethtool
qemu-guest-agent
0 upgraded, 5 newly installed, 0 to remove and 2 not upgraded.
Need to get 383 kB of archives.
After this operation, 1,574 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://it.archive.ubuntu.com/ubuntu xenial-updates/main amd64
libnl-route-3-200 amd64 3.2.27-1ubuntu0.16.04.1 [124 kB]
Get:2 http://it.archive.ubuntu.com/ubuntu xenial/main amd64 python-dbus
amd64 1.2.0-3 [83.5 kB]
Get:3 http://it.archive.ubuntu.com/ubuntu xenial/universe amd64
python-ethtool amd64 0.11-3 [18.0 kB]
Get:4 http://it.archive.ubuntu.com/ubuntu xenial-updates/universe amd64
qemu-guest-agent amd64 1:2.5+dfsg-5ubuntu10.14 [135 kB]
Get:5 http://it.archive.ubuntu.com/ubuntu xenial/universe amd64
ovirt-guest-agent all 1.0.11.2.dfsg-1 [23.4 kB]
Fetched 383 kB in 0s (993 kB/s)
Selecting previously unselected package libnl-route-3-200:amd64.
(Reading database ... 156038 files and directories currently installed.)
Preparing to unpack .../libnl-route-3-200_3.2.27-1ubuntu0.16.04.1_amd64.deb
...
Unpacking libnl-route-3-200:amd64 (3.2.27-1ubuntu0.16.04.1) ...
Selecting previously unselected package python-dbus.
Preparing to unpack .../python-dbus_1.2.0-3_amd64.deb ...
Unpacking python-dbus (1.2.0-3) ...
Selecting previously unselected package python-ethtool.
Preparing to unpack .../python-ethtool_0.11-3_amd64.deb ...
Unpacking python-ethtool (0.11-3) ...
Selecting previously unselected package qemu-guest-agent.
Preparing to unpack
.../qemu-guest-agent_1%3a2.5+dfsg-5ubuntu10.14_amd64.deb ...
Unpacking qemu-guest-agent (1:2.5+dfsg-5ubuntu10.14) ...
Selecting previously unselected package ovirt-guest-agent.
Preparing to unpack .../ovirt-guest-agent_1.0.11.2.dfsg-1_all.deb ...
Unpacking ovirt-guest-agent (1.0.11.2.dfsg-1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu19) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for dbus (1.10.6-1ubuntu3.3) ...

Re: [ovirt-users] ovirt 4.1: migrate vm to node 2 results in lost connectivity

2017-05-15 Thread yayo (j)
2017-05-11 17:08 GMT+02:00 Sandro Bonazzola :

> Can you be a bit more specific? Is it a Hosted Engine deployment?
> Hyperconverged? Using oVirt Node for nodes?
>

Hi, and sorry for the delay. It's a hosted engine deployment, hyperconverged
with Gluster (2 nodes + 1 arbiter). We used the official oVirt repository on
CentOS 7. Does this answer your question about the nodes?

Thank you
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt 4.1: migrate vm to node 2 results in lost connectivity

2017-05-11 Thread yayo (j)
Hi all,

I have a simple 3-node oVirt 4.1 Gluster setup. Everything works fine, but when
I create or migrate a VM to node 2 its connectivity is lost; if I move the VM
back to node 1, everything works again. Looking at the oVirt engine, the network
configuration seems identical on both hosts (for now I only have the ovirtmgmt
network). Pinging from node 2 itself works fine (the Internet in general). Where
else can I check?
thank you
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1: migrate vm to node 2 results in lost connectivity

2017-05-17 Thread yayo (j)
Hi,

Do you have any news on this topic? I have done some more tests, but without
success... When a VM migrates to the second node, its Internet connection is lost.

Thank you

2017-05-15 9:36 GMT+02:00 Sandro Bonazzola <sbona...@redhat.com>:

>
>
> On Mon, May 15, 2017 at 9:33 AM, yayo (j) <jag...@gmail.com> wrote:
>
>>
>> 2017-05-11 17:08 GMT+02:00 Sandro Bonazzola <sbona...@redhat.com>:
>>
>>> Can you be a bit more specific? Is it a Hosted Engine deployment?
>>> Hyperconverged? Using oVirt Node for nodes?
>>>
>>
>> Hi, and sorry for the delay. It's a hosted engine deployment, hyperconverged
>> with Gluster (2 nodes + 1 arbiter). We used the official oVirt repository on
>> CentOS 7. Does this answer your question about the nodes?
>>
>
> Thanks, I added some people who may help with this issue.
>
>
>
>>
>> Thank you
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>



-- 
Linux User: 369739 http://counter.li.org
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt 4.1 hosted engine: clear Task error: Operation is not allowed in this task state: ("can't clean in state running", )

2017-10-11 Thread yayo (j)
Hi all,

oVirt 4.1 hosted engine on a 2-node cluster with FC LUN storage.

I'm trying to clear some tasks that have been pending for months using
vdsClient, but I can't do anything. Below are the steps (run on node 1, the SPM):

1. Show all tasks:

# vdsClient -s 0 getAllTasksInfo
fd319af4-d160-48ce-b682-5a908333a5e1 :
         verb = createVolume
         id = fd319af4-d160-48ce-b682-5a908333a5e1
9bbc2bc4-3c73-4814-a785-6ea737904528 :
         verb = prepareMerge
         id = 9bbc2bc4-3c73-4814-a785-6ea737904528
e70feb21-964d-49d9-9b5a-8e3f70a92db1 :
         verb = prepareMerge
         id = e70feb21-964d-49d9-9b5a-8e3f70a92db1
cf064461-f0ab-4e44-a68f-b2d58fa83a21 :
         verb = prepareMerge
         id = cf064461-f0ab-4e44-a68f-b2d58fa83a21
85b7cf4e-d658-4785-94f0-391fe9616b41 :
         verb = prepareMerge
         id = 85b7cf4e-d658-4785-94f0-391fe9616b41
7416627a-fe50-4353-b129-e01bba066a66 :
         verb = prepareMerge
         id = 7416627a-fe50-4353-b129-e01bba066a66


2. Stop all tasks (repeated for every task):

# vdsClient -s 0 stopTask 7416627a-fe50-4353-b129-e01bba066a66
Task is aborted: u'7416627a-fe50-4353-b129-e01bba066a66' - code 411

3. Trying to clear the tasks:

# vdsClient -s 0 clearTask 7416627a-fe50-4353-b129-e01bba066a66
Operation is not allowed in this task state: ("can't clean in state
running",)



On node 01 (the SPM) I have multiple errors in /var/log/vdsm/vdsm.log like
this:

2017-10-11 15:09:53,719+0200 INFO  (jsonrpc/3) [storage.TaskManager.Task]
(Task='9519d4db-2960-4b88-82f2-e4c1094eac54') aborting: Task is aborted:
u'Operation is not allowed in this task state: ("can\'t clean in state
running",)' - code 100 (task:1175)
2017-10-11 15:09:53,719+0200 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH
clearTask error=Operation is not allowed in this task state: ("can't clean
in state running",) (dispatcher:78)
2017-10-11 15:09:53,720+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call Task.clear failed (error 410) in 0.01 seconds (__init__:539)
2017-10-11 15:09:53,743+0200 INFO  (jsonrpc/6) [vdsm.api] START
clearTask(taskID=u'7416627a-fe50-4353-b129-e01bba066a66', spUUID=None,
options=None) from=:::192.168.0.226,36724, flow_id=7cd340ec (api:46)
2017-10-11 15:09:53,743+0200 INFO  (jsonrpc/6) [vdsm.api] FINISH clearTask
error=Operation is not allowed in this task state: ("can't clean in state
running",) from=:::192.168.0.226,36724, flow_id=7cd340ec (api:50)
2017-10-11 15:09:53,743+0200 ERROR (jsonrpc/6) [storage.TaskManager.Task]
(Task='0e12e052-2aca-480d-b50f-5de01ddebe35') Unexpected error (task:870)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 877, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in clearTask
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2258, in clearTask
    return self.taskMng.clearTask(taskID=taskID)
  File "/usr/share/vdsm/storage/taskManager.py", line 175, in clearTask
    t.clean()
  File "/usr/share/vdsm/storage/task.py", line 1047, in clean
    raise se.TaskStateError("can't clean in state %s" % self.state)
TaskStateError: Operation is not allowed in this task state: ("can't clean
in state running",)


On node 02 (it is a 2-node cluster) I see other errors (I don't know if they
are related):

2017-10-11 15:11:57,083+0200 INFO  (jsonrpc/7) [storage.LVM] Refreshing
lvs: vg=b50c1f5c-aa2c-4a53-9f89-83517fa70d3b lvs=['leases'] (lvm:1291)
2017-10-11 15:11:57,084+0200 INFO  (jsonrpc/7) [storage.LVM] Refreshing
LVs (vg=b50c1f5c-aa2c-4a53-9f89-83517fa70d3b, lvs=['leases']) (lvm:1319)
2017-10-11 15:11:57,124+0200 INFO  (jsonrpc/7) [storage.VolumeManifest]
b50c1f5c-aa2c-4a53-9f89-83517fa70d3b/d42f671e-1745-46c1-9e1c-2833245675fc/c86afaa5-6ca8-4fcb-a27e-ffbe0133fe23
info is {'status': 'OK', 'domain': 'b50c1f5c-aa2c-4a53-9f89-83517fa70d3b',
'voltype': 'LEAF', 'description': 'hosted-engine.metadata', 'parent':
'----', 'format': 'RAW', 'generation': 0,
'image': 'd42f671e-1745-46c1-9e1c-2833245675fc', 'ctime': '1499437345',
'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
'134217728', 'children': [], 'pool': '', 'capacity': '134217728', 'uuid':
u'c86afaa5-6ca8-4fcb-a27e-ffbe0133fe23', 'truesize': '134217728', 'type':
'PREALLOCATED', 'lease': {'owners': [], 'version': None}} (volume:272)
2017-10-11 15:11:57,125+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH
getVolumeInfo return={'info': {'status': 'OK', 'domain':
'b50c1f5c-aa2c-4a53-9f89-83517fa70d3b', 'voltype': 'LEAF', 'description':
'hosted-engine.metadata', 'parent': '----',
'format': 'RAW', 'generation': 0, 'image':
'd42f671e-1745-46c1-9e1c-2833245675fc', 'ctime': '1499437345', 'disktype':
'2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '134217728',
'children': [], 'pool': '', 'capacity': '134217728', 'uuid':

Re: [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements

2017-09-08 Thread yayo (j)
2017-07-19 11:22 GMT+02:00 yayo (j) <jag...@gmail.com>:

> running the "gluster volume heal engine" don't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster from: 2 (full repliacated) + 1
> arbiter to 3 full replicated cluster but i don't know this is the problem...
>
>
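
As a side note, the entries still pending heal can be listed with the standard
GlusterFS commands below (assuming the volume really is named "engine"; they
only report state and change nothing):

gluster volume heal engine info
gluster volume heal engine info split-brain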

Hi,

Sorry for the follow-up. I want to report that after upgrading all nodes to
the same level, all the problems are solved and the cluster works very well now!

Thank you all for the support!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine: clear Task error: Operation is not allowed in this task state: ("can't clean in state running", )

2017-10-11 Thread yayo (j)
Hi,


2017-10-11 16:13 GMT+02:00 Adam Litke :

> What is the status of your Datacenter?
>

The status of the datacenter is "operational".



> Are these hosts both operational?
>

yes


> Are you experiencing other problems with your storage other than the
> inconsistent task state?
>

No. What kind of problems do you mean?


> Do you see the KeyError: 'VERSION' message related to domain
> b6730d64-2cf8-42a3-8f08-24b8cc2c0cd8 also on Node02?
>

It is from node 2.


> Did you experience any disaster (power outage, FC storage outage,
> network, etc.) around the time this started happening?
>


No
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine: Resize or Change "hosted-engine" Data Domain

2018-01-07 Thread yayo (j)
Hi,

Sorry, but I need to migrate from one hosted-engine to another, so where do I
restore the backup? Before or after the auto-import is triggered?

* Create the new hosted-engine LUN
* Back up the current hosted-engine
* From one node, execute hosted-engine --deploy --he-remove-storage-vm
--he-remove-hosts

Is this right? And what comes after that?

Can you help me understand this better?
Thank you!

Il 03 Gen 2018 14:39, "Martin Sivak" <msi...@redhat.com> ha scritto:

Hi,

we do not have any nice procedure to do that. Moving hosted engine to
a different storage usually involves backup and restore of the engine
database. See for example here:
http://lists.ovirt.org/pipermail/users/2017-June/082466.html

Best regards

--
Martin Sivak
SLA / oVirt

On Wed, Jan 3, 2018 at 12:20 PM, yayo (j) <jag...@gmail.com> wrote:
> Hi all,
>
> We have the "hosted engine" data domain on an FC LUN that is too big for just
> the hosted engine, so we want to create another, smaller FC LUN, move the
> hosted-engine VM to this new LUN, and destroy the old one...
>
> Is there any official workflow or how-to for this operation? Or can someone
> guide me?
>
> Thank you!
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine: Resize or Change "hosted-engine" Data Domain

2018-01-08 Thread yayo (j)
Hi,

Sorry to ask again, but the steps are still not clear...

Thank you

2018-01-07 16:39 GMT+01:00 yayo (j) <jag...@gmail.com>:

> Hi,
>
> Sorry, but I need to migrate from one hosted-engine to another, so where do
> I restore the backup? Before or after the auto-import is triggered?
>
> * Create the new hosted-engine LUN
> * Back up the current hosted-engine
> * From one node, execute hosted-engine --deploy --he-remove-storage-vm
> --he-remove-hosts
> Is this right? And what comes after that?
>
> Can you help me understand this better?
> Thank you!
>
> Il 03 Gen 2018 14:39, "Martin Sivak" <msi...@redhat.com> ha scritto:
>
> Hi,
>
> we do not have any nice procedure to do that. Moving hosted engine to
> a different storage usually involves backup and restore of the engine
> database. See for example here:
> http://lists.ovirt.org/pipermail/users/2017-June/082466.html
>
> Best regards
>
> --
> Martin Sivak
> SLA / oVirt
>
> On Wed, Jan 3, 2018 at 12:20 PM, yayo (j) <jag...@gmail.com> wrote:
> > Hi all,
> >
> > We have the "hosted engine" data domain on an FC LUN that is too big for
> > just the hosted engine, so we want to create another, smaller FC LUN, move
> > the hosted-engine VM to this new LUN, and destroy the old one...
> >
> > Is there any official workflow or how-to for this operation? Or can
> > someone guide me?
> >
> > Thank you!
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
>
>


-- 
Linux User: 369739 http://counter.li.org
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 hosted engine: Resize or Change "hosted-engine" Data Domain

2018-01-08 Thread yayo (j)
2018-01-08 15:12 GMT+01:00 Yedidyah Bar David :

>
> Please see this page:
>
> https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/
>
> There, note that when you restore the engine ('engine-backup
> --mode=restore'),
> you should pass also '--he-remove-storage-vm' and '--he-remove-hosts'.
>
> Best regards,
>
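
For anyone following the thread, the backup/restore calls being described would
look roughly like the sketch below. The file names are placeholders and the
exact set of options depends on your engine-backup version, so check
'engine-backup --help' first:

# on the current engine VM: take a full backup
engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

# on the freshly deployed engine: restore it, dropping the old
# hosted-engine storage domain and hosts as described above
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
  --provision-db --restore-permissions \
  --he-remove-storage-vm --he-remove-hosts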

Hi and thanks for the answer,

Sorry, but that page is not clear at all... From what I have understood, do I
need a physical computer to restore a self-hosted engine? Are there any
simple steps to follow?

Thank you!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Use the "hosted_engine" data domain as data domain for others VM

2018-02-09 Thread yayo (j)
Hi,

Is there any problem with using the "hosted_engine" data domain to store the
disks of other VMs? I have created a "too big" "hosted_engine" data domain, so
I would like to use that space...

Thank you
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt 4.1 hosted engine: Resize or Change "hosted-engine" Data Domain

2018-01-03 Thread yayo (j)
Hi all,

We have the "hosted engine" data domain on an FC LUN that is too big for just
the hosted engine, so we want to create another, smaller FC LUN, move the
hosted-engine VM to this new LUN, and destroy the old one...

Is there any official workflow or how-to for this operation? Or can someone
guide me?

Thank you!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users