Re: [Gluster-users] Q'apla brick does not come online with gluster 5.0, even with fresh install

2018-10-31 Thread Computerisms Corporation
My troubleshooting led me to confirm that all my package versions
were lined up, and I realized that I had gotten version 5.0 from
the Debian repos instead of the repo at download.gluster.org.  I
downgraded everything to 4.1.5-1 from gluster.org, rebooted, messed
around a bit, and my gluster is back online.
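For anyone hitting the same thing: a pin along these lines should keep apt
from pulling the Debian 5.0 packages back in on the next upgrade. This is
only a sketch; the origin string and package globs are assumptions, so
check them against apt-cache policy glusterfs-server on your own host:

cat > /etc/apt/preferences.d/gluster <<'EOF'
Package: glusterfs-* libgfapi* libglusterfs*
Pin: origin "download.gluster.org"
Pin-Priority: 1001
EOF
apt-get update && apt-get install glusterfs-server glusterfs-client

(A priority above 1000 also lets apt perform the downgrade itself.)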




On 2018-10-31 10:32 a.m., Computerisms Corporation wrote:

forgot to add output of glusterd console when starting the volume:

[2018-10-31 17:31:33.887923] D [MSGID: 0] 
[glusterd-volume-ops.c:572:__glusterd_handle_cli_start_volume] 
0-management: Received start vol req for volume moogle-gluster
[2018-10-31 17:31:33.887976] D [MSGID: 0] 
[glusterd-locks.c:573:glusterd_mgmt_v3_lock] 0-management: Trying to 
acquire lock of vol moogle-gluster for 
bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
[2018-10-31 17:31:33.888171] D [MSGID: 0] 
[glusterd-locks.c:657:glusterd_mgmt_v3_lock] 0-management: Lock for vol 
moogle-gluster successfully held by bb8c61eb-f321-4485-8a8d-ddc369ac2203
[2018-10-31 17:31:33.888189] D [MSGID: 0] 
[glusterd-locks.c:519:glusterd_multiple_mgmt_v3_lock] 0-management: 
Returning 0
[2018-10-31 17:31:33.888204] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888213] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888229] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888237] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888247] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888256] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888269] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888277] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888294] D [MSGID: 0] 
[glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0
[2018-10-31 17:31:33.888318] D [MSGID: 0] 
[glusterd-mgmt.c:223:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5. 
Returning 0
[2018-10-31 17:31:33.888668] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888682] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888719] E [MSGID: 101012] 
[common-utils.c:4070:gf_is_service_running] 0-: Unable to read pidfile: 
/var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid 

[2018-10-31 17:31:33.888757] I 
[glusterd-utils.c:6300:glusterd_brick_start] 0-management: starting a 
fresh brick process for brick /var/GlusterBrick/moogle-gluster
[2018-10-31 17:31:33.898943] D [logging.c:1998:_gf_msg_internal] 
0-logging-infra: Buffer overflow of a buffer whose size limit is 5. 
About to flush least recently used log message to disk
[2018-10-31 17:31:33.888780] E [MSGID: 101012] 
[common-utils.c:4070:gf_is_service_running] 0-: Unable to read pidfile: 
/var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid 

[2018-10-31 17:31:33.898942] E [MSGID: 106005] 
[glusterd-utils.c:6305:glusterd_brick_start] 0-management: Unable to 
start brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster
[2018-10-31 17:31:33.899068] D [MSGID: 0] 
[glusterd-utils.c:6315:glusterd_brick_start] 0-management: returning -107
[2018-10-31 17:31:33.899088] E [MSGID: 106122] 
[glusterd-mgmt.c:308:gd_mgmt_v3_commit_fn] 0-management: Volume start 
commit failed.
[2018-10-31 17:31:33.899100] D [MSGID: 0] 
[glusterd-mgmt.c:392:gd_mgmt_v3_commit_fn] 0-management: OP = 5. 
Returning -107
[2018-10-31 17:31:33.899114] E [MSGID: 106122] 
[glusterd-mgmt.c:1557:glusterd_mgmt_v3_commit] 0-management: Commit 
failed for operation Start on local node
[2018-10-31 17:31:33.899128] D [MSGID: 0] 
[glusterd-op-sm.c:5109:glusterd_op_modify_op_ctx] 0-management: op_ctx 
modification not required
[2018-10-31 17:31:33.899140] E [MSGID: 106122] 
[glusterd-mgmt.c:2160:glusterd_mgmt_v3_initiate_all_phases] 
0-management: Commit Op Failed
[2018-10-31 17:31:33.899168] D [MSGID: 0] 
[glusterd-locks.c:785:glusterd_mgmt_v3_unlock] 0-management: Trying to 
release lock of vol moogle-gluster for 
bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
[2018-10-31 17:31:33.899195] D [MSGID: 0] 
[glusterd-locks.c:834:glusterd_mgmt_v3_unlock] 0-management: Lock for 
vol moogle-gluster successfully released
[2018-10-31 17:31:33.899211] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.899221] D [MSGID: 0] 

Re: [Gluster-users] brick does not come online with gluster 5.0, even with fresh install

2018-10-31 Thread Computerisms Corporation

forgot to add output of glusterd console when starting the volume:

[2018-10-31 17:31:33.887923] D [MSGID: 0] 
[glusterd-volume-ops.c:572:__glusterd_handle_cli_start_volume] 
0-management: Received start vol req for volume moogle-gluster
[2018-10-31 17:31:33.887976] D [MSGID: 0] 
[glusterd-locks.c:573:glusterd_mgmt_v3_lock] 0-management: Trying to 
acquire lock of vol moogle-gluster for 
bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
[2018-10-31 17:31:33.888171] D [MSGID: 0] 
[glusterd-locks.c:657:glusterd_mgmt_v3_lock] 0-management: Lock for vol 
moogle-gluster successfully held by bb8c61eb-f321-4485-8a8d-ddc369ac2203
[2018-10-31 17:31:33.888189] D [MSGID: 0] 
[glusterd-locks.c:519:glusterd_multiple_mgmt_v3_lock] 0-management: 
Returning 0
[2018-10-31 17:31:33.888204] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888213] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888229] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888237] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888247] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888256] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888269] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888277] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888294] D [MSGID: 0] 
[glusterd-utils.c:1142:glusterd_resolve_brick] 0-management: Returning 0
[2018-10-31 17:31:33.888318] D [MSGID: 0] 
[glusterd-mgmt.c:223:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5. 
Returning 0
[2018-10-31 17:31:33.888668] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.888682] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.888719] E [MSGID: 101012] 
[common-utils.c:4070:gf_is_service_running] 0-: Unable to read pidfile: 
/var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
[2018-10-31 17:31:33.888757] I 
[glusterd-utils.c:6300:glusterd_brick_start] 0-management: starting a 
fresh brick process for brick /var/GlusterBrick/moogle-gluster
[2018-10-31 17:31:33.898943] D [logging.c:1998:_gf_msg_internal] 
0-logging-infra: Buffer overflow of a buffer whose size limit is 5. 
About to flush least recently used log message to disk
[2018-10-31 17:31:33.888780] E [MSGID: 101012] 
[common-utils.c:4070:gf_is_service_running] 0-: Unable to read pidfile: 
/var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
[2018-10-31 17:31:33.898942] E [MSGID: 106005] 
[glusterd-utils.c:6305:glusterd_brick_start] 0-management: Unable to 
start brick sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster
[2018-10-31 17:31:33.899068] D [MSGID: 0] 
[glusterd-utils.c:6315:glusterd_brick_start] 0-management: returning -107
[2018-10-31 17:31:33.899088] E [MSGID: 106122] 
[glusterd-mgmt.c:308:gd_mgmt_v3_commit_fn] 0-management: Volume start 
commit failed.
[2018-10-31 17:31:33.899100] D [MSGID: 0] 
[glusterd-mgmt.c:392:gd_mgmt_v3_commit_fn] 0-management: OP = 5. 
Returning -107
[2018-10-31 17:31:33.899114] E [MSGID: 106122] 
[glusterd-mgmt.c:1557:glusterd_mgmt_v3_commit] 0-management: Commit 
failed for operation Start on local node
[2018-10-31 17:31:33.899128] D [MSGID: 0] 
[glusterd-op-sm.c:5109:glusterd_op_modify_op_ctx] 0-management: op_ctx 
modification not required
[2018-10-31 17:31:33.899140] E [MSGID: 106122] 
[glusterd-mgmt.c:2160:glusterd_mgmt_v3_initiate_all_phases] 
0-management: Commit Op Failed
[2018-10-31 17:31:33.899168] D [MSGID: 0] 
[glusterd-locks.c:785:glusterd_mgmt_v3_unlock] 0-management: Trying to 
release lock of vol moogle-gluster for 
bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
[2018-10-31 17:31:33.899195] D [MSGID: 0] 
[glusterd-locks.c:834:glusterd_mgmt_v3_unlock] 0-management: Lock for 
vol moogle-gluster successfully released
[2018-10-31 17:31:33.899211] D [MSGID: 0] 
[glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume 
moogle-gluster found
[2018-10-31 17:31:33.899221] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
[2018-10-31 17:31:33.899232] D [MSGID: 0] 
[glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management: 
Returning 0
[2018-10-31 17:31:33.899314] D [MSGID: 0] 
[glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management: 
Returning 0
[2018-10-31 17:31:33.900750] D [socket.c:2927:socket_event_handler] 
0-transport: EPOLLERR 
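For completeness, the basic checks on the brick side look like this (the
pidfile and brick paths are taken from the log above; the brick log file
name below is derived from the brick path and may differ on your setup):

cat /var/run/gluster/vols/moogle-gluster/sand1lian.computerisms.ca-var-GlusterBrick-moogle-gluster.pid
pgrep -af glusterfsd                      # any brick process running at all?
ls -ld /var/GlusterBrick/moogle-gluster   # brick directory present and readable?
tail -n 50 /var/log/glusterfs/bricks/var-GlusterBrick-moogle-gluster.log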

Re: [Gluster-users] brick does not come online with gluster 5.0, even with fresh install

2018-10-31 Thread Computerisms Corporation

Hi,

It occurs to me that maybe the previous email was too many words and not
enough data, so I will try to present the issue differently.


Gluster volume created (single-brick volume, following advice from
https://lists.gluster.org/pipermail/gluster-users/2016-October/028821.html):


root@sand1lian:~# gluster volume create moogle-gluster 
sand1lian.computerisms.ca:/var/GlusterBrick/moogle-gluster


Gluster was started from the cli with --debug; the console reports the
following on creation of the volume:


[2018-10-31 17:00:51.555918] D [MSGID: 0] 
[glusterd-volume-ops.c:328:__glusterd_handle_create_volume] 
0-management: Received create volume req
[2018-10-31 17:00:51.555963] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1
[2018-10-31 17:00:51.556072] D [MSGID: 0] 
[glusterd-op-sm.c:209:glusterd_generate_txn_id] 0-management: 
Transaction_id = 3f5d14c9-ee08-493c-afac-d04d53c12aad
[2018-10-31 17:00:51.556090] D [MSGID: 0] 
[glusterd-op-sm.c:302:glusterd_set_txn_opinfo] 0-management: 
Successfully set opinfo for transaction ID : 
3f5d14c9-ee08-493c-afac-d04d53c12aad
[2018-10-31 17:00:51.556099] D [MSGID: 0] 
[glusterd-op-sm.c:309:glusterd_set_txn_opinfo] 0-management: Returning 0
[2018-10-31 17:00:51.556108] D [MSGID: 0] 
[glusterd-syncop.c:1809:gd_sync_task_begin] 0-management: Transaction ID 
: 3f5d14c9-ee08-493c-afac-d04d53c12aad
[2018-10-31 17:00:51.556127] D [MSGID: 0] 
[glusterd-locks.c:573:glusterd_mgmt_v3_lock] 0-management: Trying to 
acquire lock of vol moogle-gluster for 
bb8c61eb-f321-4485-8a8d-ddc369ac2203 as moogle-gluster_vol
[2018-10-31 17:00:51.556293] D [MSGID: 0] 
[glusterd-locks.c:657:glusterd_mgmt_v3_lock] 0-management: Lock for vol 
moogle-gluster successfully held by bb8c61eb-f321-4485-8a8d-ddc369ac2203
[2018-10-31 17:00:51.556333] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1
[2018-10-31 17:00:51.556368] D [logging.c:1998:_gf_msg_internal] 
0-logging-infra: Buffer overflow of a buffer whose size limit is 5. 
About to flush least recently used log message to disk
[2018-10-31 17:00:51.556345] D [MSGID: 0] 
[glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning -1
[2018-10-31 17:00:51.556368] D [MSGID: 0] 
[glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0
[2018-10-31 17:00:51.556608] D [MSGID: 0] 
[glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick] 0-management: 
Returning 0
[2018-10-31 17:00:51.556656] D [MSGID: 0] 
[glusterd-utils.c:678:glusterd_volinfo_new] 0-management: Returning 0
[2018-10-31 17:00:51.556669] D [MSGID: 0] 
[store.c:473:gf_store_handle_destroy] 0-: Returning 0
[2018-10-31 17:00:51.556681] D [MSGID: 0] 
[glusterd-utils.c:990:glusterd_volume_brickinfos_delete] 0-management: 
Returning 0
[2018-10-31 17:00:51.556690] D [MSGID: 0] 
[store.c:473:gf_store_handle_destroy] 0-: Returning 0
[2018-10-31 17:00:51.556699] D [logging.c:1998:_gf_msg_internal] 
0-logging-infra: Buffer overflow of a buffer whose size limit is 5. 
About to flush least recently used log message to disk
The message "D [MSGID: 0] [store.c:473:gf_store_handle_destroy] 0-: 
Returning 0" repeated 3 times between [2018-10-31 17:00:51.556690] and 
[2018-10-31 17:00:51.556698]
[2018-10-31 17:00:51.556699] D [MSGID: 0] 
[glusterd-utils.c:1042:glusterd_volinfo_delete] 0-management: Returning 0
[2018-10-31 17:00:51.556728] D [MSGID: 0] 
[glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0
[2018-10-31 17:00:51.556738] D [MSGID: 0] 
[glusterd-utils.c:1308:glusterd_brickinfo_new_from_brick] 0-management: 
Returning 0
[2018-10-31 17:00:51.556752] D [MSGID: 0] 
[glusterd-utils.c:678:glusterd_volinfo_new] 0-management: Returning 0
[2018-10-31 17:00:51.556764] D [MSGID: 0] 
[store.c:473:gf_store_handle_destroy] 0-: Returning 0
[2018-10-31 17:00:51.556772] D [MSGID: 0] 
[glusterd-utils.c:990:glusterd_volume_brickinfos_delete] 0-management: 
Returning 0
[2018-10-31 17:00:51.556781] D [MSGID: 0] 
[store.c:473:gf_store_handle_destroy] 0-: Returning 0
[2018-10-31 17:00:51.556791] D [logging.c:1998:_gf_msg_internal] 
0-logging-infra: Buffer overflow of a buffer whose size limit is 5. 
About to flush least recently used log message to disk
The message "D [MSGID: 0] [store.c:473:gf_store_handle_destroy] 0-: 
Returning 0" repeated 3 times between [2018-10-31 17:00:51.556781] and 
[2018-10-31 17:00:51.556790]
[2018-10-31 17:00:51.556791] D [MSGID: 0] 
[glusterd-utils.c:1042:glusterd_volinfo_delete] 0-management: Returning 0
[2018-10-31 17:00:51.556818] D [MSGID: 0] 
[glusterd-utils.c:1094:glusterd_brickinfo_new] 0-management: Returning 0
[2018-10-31 17:00:51.556955] D [MSGID: 0] 
[glusterd-peer-utils.c:130:glusterd_peerinfo_find_by_hostname] 
0-management: Unable to find friend: sand1lian.computerisms.ca
[2018-10-31 17:00:51.557033] D [MSGID: 0] 
[common-utils.c:3590:gf_is_local_addr] 0-management: 192.168.25.52
[2018-10-31 17:00:51.557140] D [MSGID: 0] 
[common-utils.c:3478:gf_interface_search] 

Re: [Gluster-users] posix_handle_hard [file exists]

2018-10-31 Thread Krutika Dhananjay
These log messages represent a transient state and are harmless and can be
ignored. This happens when a lookup and mknod to create shards happen in
parallel.

Regarding the preallocated disk creation issue, could you check if there
are any errors/warnings in the fuse mount logs (these are named as the
hyphenated mountpoint name followed by a ".log" and are found under
/var/log/glusterfs).
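For example, a volume fuse-mounted at /mnt/hdd2 (mount point purely
illustrative) would log to /var/log/glusterfs/mnt-hdd2.log, and the
warnings/errors can be pulled out with something like:

grep -E '\] [WE] \[' /var/log/glusterfs/mnt-hdd2.log | tail -n 50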

-Krutika


On Wed, Oct 31, 2018 at 4:58 PM Jorick Astrego  wrote:

> Hi,
>
> I have similar issues with oVirt 4.2 on a glusterfs-3.8.15 cluster.
> This was a new volume; I first created a thin provisioned disk, then I
> tried to create a preallocated disk, but it hangs after 4MB. The only issue
> I can find in the logs so far is the [File exists] errors with the sharding.
>
>
> The message "W [MSGID: 113096] [posix-handle.c:761:posix_handle_hard]
> 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125365 ->
> /data/hdd2/brick1/.glusterfs/16/a1/16a18a01-4f77-4c37-923d-9f0bc59f5cc7failed
> [File exists]" repeated 2 times between [2018-10-31 10:46:33.810987] and
> [2018-10-31 10:46:33.810988]
> [2018-10-31 10:46:33.970949] W [MSGID: 113096]
> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366 ->
> /data/hdd2/brick1/.glusterfs/90/85/9085ea11-4089-4d10-8848-fa2d518fd86dfailed
> [File exists]
> [2018-10-31 10:46:33.970950] W [MSGID: 113096]
> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366 ->
> /data/hdd2/brick1/.glusterfs/90/85/9085ea11-4089-4d10-8848-fa2d518fd86dfailed
> [File exists]
> [2018-10-31 10:46:35.601064] W [MSGID: 113096]
> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125369 ->
> /data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8failed
> [File exists]
> [2018-10-31 10:46:35.601065] W [MSGID: 113096]
> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125369 ->
> /data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8failed
> [File exists]
> [2018-10-31 10:46:36.040564] W [MSGID: 113096]
> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125370 ->
> /data/hdd2/brick1/.glusterfs/30/93/3093fdb6-e62c-48b8-90e7-d4d72036fb69failed
> [File exists]
> [2018-10-31 10:46:36.040565] W [MSGID: 113096]
> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125370 ->
> /data/hdd2/brick1/.glusterfs/30/93/3093fdb6-e62c-48b8-90e7-d4d72036fb69failed
> [File exists]
> [2018-10-31 10:46:36.319247] W [MSGID: 113096]
> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372 ->
> /data/hdd2/brick1/.glusterfs/c3/c2/c3c272f5-50af-4e82-94bb-b76eaa7a9a39failed
> [File exists]
> [2018-10-31 10:46:36.319250] W [MSGID: 113096]
> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372 ->
> /data/hdd2/brick1/.glusterfs/c3/c2/c3c272f5-50af-4e82-94bb-b76eaa7a9a39failed
> [File exists]
> [2018-10-31 10:46:36.319309] E [MSGID: 113020] [posix.c:1407:posix_mknod]
> 0-hdd2-posix: setting gfid on
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372 failed
>
>
> -rw-rw. 2 root root 4194304 Oct 31 11:46
> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366
>
> -rw-rw. 2 root root 4194304 Oct 31 11:46
> /data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8
>
> On 10/01/2018 12:36 PM, Jose V. Carrión wrote:
>
> Hi,
>
> I have a gluster 3.12.6-1 installation with 2 configured volumes.
>
> Several times a day, some bricks report the lines below:
>
> [2018-09-30 20:36:27.348015] W [MSGID: 113096]
> [posix-handle.c:770:posix_handle_hard] 0-volumedisk0-posix: link
> /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 ->
> /mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277failed
> [File exists]
> [2018-09-30 20:36:27.383957] E [MSGID: 113020] [posix.c:3162:posix_create]
> 0-volumedisk0-posix: setting gfid on
> /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 failed
>
> I can access /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 and
> /mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277;
> both files are hard links.
>
> What is the meaning of the error lines?
>
> Thanks in advance.
>
> Cheers.
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
>
>
> Met vriendelijke groet, With kind regards,
>
> Jorick Astrego
>
> *Netbulae 

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-10-31 Thread Kaleb S. KEITHLEY
On 10/31/18 2:53 AM, Jeevan Patnaik wrote:
> Hi Vlad,
> 
> Can gluster 4.1.5 also be used for production? There's no documentation
> for gluster 4.
> 

The documentation for GlusterFS 4.x and 5 is about 99.99% the same as
GlusterFS 3.x.

That's not a reason not to use GlusterFS 4.x or GlusterFS 5 IMO.

(This is a community open source project. If you think there's something
missing, that's a good place to jump in and get involved.)

> Regards,
> Jeevan.
> 
> On Wed, Oct 31, 2018, 9:37 AM Vlad Kopylov  > wrote:
> 
> 3.12.14 working fine in production for file access
> you can find vol and mount settings in mailing list archive
> 
> On Tue, Oct 30, 2018 at 11:05 AM Jeevan Patnaik  > wrote:
> 
> Hi All,
> 
> I see gluster 3 has reached end of life and gluster 5 has just
> been introduced.
> 
> Is gluster 4.1.5 stable enough for production deployment? I see that
> by default the gluster docs point to v3 only and there are no
> gluster docs for 4 or 5. Why so? And I'm mainly looking for a
> stable gluster tiering feature and Kernel NFS support. I faced
> a few issues with tiering in 3.14 and so am wondering whether I should
> switch to 4.1.5, as it will be a production deployment.
> 
> Thank you.
> 
> Regards,
> Jeevan.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 

-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] posix_handle_hard [file exists]

2018-10-31 Thread Jorick Astrego
Hi,

I have similar issues with oVirt 4.2 on a glusterfs-3.8.15 cluster.
This was a new volume; I first created a thin provisioned disk, then
I tried to create a preallocated disk, but it hangs after 4MB. The only
issue I can find in the logs so far is the [File exists] errors with the
sharding.


The message "W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125365
->

/data/hdd2/brick1/.glusterfs/16/a1/16a18a01-4f77-4c37-923d-9f0bc59f5cc7failed 
[File exists]" repeated 2 times between [2018-10-31 10:46:33.810987]
and [2018-10-31 10:46:33.810988]
[2018-10-31 10:46:33.970949] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366
->

/data/hdd2/brick1/.glusterfs/90/85/9085ea11-4089-4d10-8848-fa2d518fd86dfailed 
[File exists]
[2018-10-31 10:46:33.970950] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366
->

/data/hdd2/brick1/.glusterfs/90/85/9085ea11-4089-4d10-8848-fa2d518fd86dfailed 
[File exists]
[2018-10-31 10:46:35.601064] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125369
->

/data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8failed 
[File exists]
[2018-10-31 10:46:35.601065] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125369
->

/data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8failed 
[File exists]
[2018-10-31 10:46:36.040564] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125370
->

/data/hdd2/brick1/.glusterfs/30/93/3093fdb6-e62c-48b8-90e7-d4d72036fb69failed 
[File exists]
[2018-10-31 10:46:36.040565] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125370
->

/data/hdd2/brick1/.glusterfs/30/93/3093fdb6-e62c-48b8-90e7-d4d72036fb69failed 
[File exists]
[2018-10-31 10:46:36.319247] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372
->

/data/hdd2/brick1/.glusterfs/c3/c2/c3c272f5-50af-4e82-94bb-b76eaa7a9a39failed 
[File exists]
[2018-10-31 10:46:36.319250] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372
->

/data/hdd2/brick1/.glusterfs/c3/c2/c3c272f5-50af-4e82-94bb-b76eaa7a9a39failed 
[File exists]
[2018-10-31 10:46:36.319309] E [MSGID: 113020]
[posix.c:1407:posix_mknod] 0-hdd2-posix: setting gfid on
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372
failed

   
-rw-rw. 2 root root 4194304 Oct 31 11:46
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366

-rw-rw. 2 root root 4194304 Oct 31 11:46
/data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8
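One quick way to check whether such a pair already points at the same
inode (paths taken from the listing above):

stat -c '%i %h %n' \
  /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366 \
  /data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8

Identical inode numbers and a link count of 2 mean the hard link already
exists.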

On 10/01/2018 12:36 PM, Jose V. Carrión wrote:
>
> Hi,
>
> I have a gluster 3.12.6-1 installation with 2 configured volumes.
>
> Several times a day, some bricks report the lines below:
>
> [2018-09-30 20:36:27.348015] W [MSGID: 113096]
> [posix-handle.c:770:posix_handle_hard] 0-volumedisk0-posix: link
> /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 ->
> /mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277failed
>  
> [File exists]
> [2018-09-30 20:36:27.383957] E [MSGID: 113020]
> [posix.c:3162:posix_create] 0-volumedisk0-posix: setting gfid on
> /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 failed
>
> I can access
> /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 and
> /mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277;
> both files are hard links.
>
> What is the meaning of the error lines?
>
> Thanks in advance.
>
> Cheers.
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users





Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270  i...@netbulae.eu  Staalsteden 4-3A
KvK 08198180
Fax: 053 20 30 271  www.netbulae.eu 7547 TA Enschede
BTW NL821234584B01



___
Gluster-users mailing list

Re: [Gluster-users] How to use system.affinity/distributed.migrate-data on distributed/replicated volume?

2018-10-31 Thread nl
Hi Vlad,

Because I never got affinity and distributed.migrate-data working using
xattrs, I finally invested time in nufa and found out (contrary to the
docs) that "gluster volume set gv0 cluster.nufa 1" works to enable nufa,
so I did that on the volume that already had data.

Then I started to copy files off the storage and copied them back on the
correct host, and so redistributed them the way I wanted.
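Roughly, the sequence was along these lines (volume name from above; the
file and mount paths below are just an example of the kind of move, taken
from my earlier mail quoted further down):

gluster volume set gv0 cluster.nufa 1    # enable nufa on the volume

# from the node that should hold the file, copy it off the fuse mount
# and back so the new copy lands on the local brick
cp /mnt/pve/glusterfs/201/imagesvm201.qcow2 /root/stash/
rm /mnt/pve/glusterfs/201/imagesvm201.qcow2
cp /root/stash/imagesvm201.qcow2 /mnt/pve/glusterfs/201/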

It would have been easier if setting a "custom" affinity and such had
worked, but I have now reached my goal.

Ingo

Am 31.10.18 um 04:57 schrieb Vlad Kopylov:
> nufa helps you write to the local brick; if replication is involved it will
> still copy it to other bricks (or is supposed to)
> what might be happening is that when the initial file was created, other
> nodes were down and it didn't replicate properly, and now heal is failing
> check your
> gluster vol heal Volname info
> 
> maybe you will find out where the second copy of the file is supposed to be -
> and just copy it to that brick
> 
> On Sun, Oct 28, 2018 at 6:07 PM Ingo Fischer  > wrote:
> 
> Hi All,
> 
> has noone an idea on system.affinity/distributed.migrate-data ?
> Or how to correctly enable nufa?
> 
> BTW: the used gluster version is 4.1.5
> 
> Thank you for your help on this!
> 
> Ingo
> 
> Am 24.10.18 um 12:54 schrieb Ingo Fischer:
> > Hi,
> >
> > I have setup a glusterfs volume gv0 as distributed/replicated:
> >
> > root@pm1:~# gluster volume info gv0
> >
> > Volume Name: gv0
> > Type: Distributed-Replicate
> > Volume ID: 64651501-6df2-4106-b330-fdb3e1fbcdf4
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 3 x 2 = 6
> > Transport-type: tcp
> > Bricks:
> > Brick1: 192.168.178.50:/gluster/brick1/gv0
> > Brick2: 192.168.178.76:/gluster/brick1/gv0
> > Brick3: 192.168.178.50:/gluster/brick2/gv0
> > Brick4: 192.168.178.81:/gluster/brick1/gv0
> > Brick5: 192.168.178.50:/gluster/brick3/gv0
> > Brick6: 192.168.178.82:/gluster/brick1/gv0
> > Options Reconfigured:
> > performance.client-io-threads: off
> > nfs.disable: on
> > transport.address-family: inet
> >
> >
> > root@pm1:~# gluster volume status
> > Status of volume: gv0
> > Gluster process                             TCP Port  RDMA Port 
> Online  Pid
> >
> 
> --
> > Brick 192.168.178.50:/gluster/brick1/gv0    49152     0          Y
> > 1665
> > Brick 192.168.178.76:/gluster/brick1/gv0    49152     0          Y
> > 26343
> > Brick 192.168.178.50:/gluster/brick2/gv0    49153     0          Y
> > 1666
> > Brick 192.168.178.81:/gluster/brick1/gv0    49152     0          Y
> > 1161
> > Brick 192.168.178.50:/gluster/brick3/gv0    49154     0          Y
> > 1679
> > Brick 192.168.178.82:/gluster/brick1/gv0    49152     0          Y
> > 1334
> > Self-heal Daemon on localhost               N/A       N/A        Y
> > 5022
> > Self-heal Daemon on 192.168.178.81          N/A       N/A        Y
> > 935
> > Self-heal Daemon on 192.168.178.82          N/A       N/A        Y
> > 1057
> > Self-heal Daemon on pm2.fritz.box           N/A       N/A        Y
> > 1651
> >
> >
> > I use the fs to store VM files, so not many, but big files.
> >
> > The distribution has now put 4 big files on one brick set and only
> > one file on another. This means that that brick set is "overcommitted"
> > as soon as all VMs use their maximum space. So I would like to manually
> > redistribute the files a bit better.
> >
> > After a lot of googling I found that the following should work:
> > setfattr -n 'system.affinity' -v $location $filepath
> > setfattr -n 'distribute.migrate-data' -v 'force' $filepath
> >
> > But I have problems with it because it either gives errors or does
> > nothing at all.
> >
> > The mounting looks like:
> > 192.168.178.50:gv0 on /mnt/pve/glusterfs type fuse.glusterfs
> >
> 
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> >
> >
> > Here is what I tried for the first xattr:
> >
> > root@pm1:~# setfattr -n 'system.affinity' -v 'gv0-client-5'
> > /mnt/pve/glusterfs/201/imagesvm201.qcow2
> > setfattr: /mnt/pve/glusterfs/201/imagesvm201.qcow2: Operation not
> supported
> >
> > So I found on Google to use trusted.affinity instead, and yes, this
> > works. I'm only not sure whether the location "gv0-client-5" is correct
> > to move the file to "Brick 5" from "gluster volume info gv0" ... or how
> > this location is built?
> > Commit Message from
> http://review.gluster.org/#/c/glusterfs/+/5233/ says
> >> The value is the internal client or AFR brick name where you want the
> > file to be.
> >

[Gluster-users] Self-healing not healing 27k files on GlusterFS 4.1.5 3 nodes replica

2018-10-31 Thread mabi
Hello,

I have a GlusterFS 4.1.5 cluster with 3 nodes (including 1 arbiter) and 
currently have a volume with around 27174 files which are not being healed. The 
"volume heal info" command shows the same 27k files under the first node and 
the second node but there is nothing under the 3rd node (arbiter).

I already tried running a "volume heal" but none of the files got healed.
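For reference, the commands in question look like this (the volume name is
inferred from the "0-myvol-private-client-0" prefix in the logs below):

gluster volume heal myvol-private info     # lists the pending entries per brick
gluster volume heal myvol-private          # triggers an index heal
gluster volume heal myvol-private full     # forces a full crawl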

In the glfsheal log file for that particular volume, the only errors I see
are a few entries like these:

[2018-10-31 10:06:41.524300] E [rpc-clnt.c:184:call_bail] 
0-myvol-private-client-0: bailing out frame type(GlusterFS 4.x v1) 
op(INODELK(29)) xid = 0x108b sent = 2018-10-31 09:36:41.314203. timeout = 1800 
for 127.0.1.1:49152

and then a few of these warnings:

[2018-10-31 10:08:12.161498] W [dict.c:671:dict_ref] 
(-->/usr/lib/x86_64-linux-gnu/glusterfs/4.1.5/xlator/cluster/replicate.so(+0x6734a)
 [0x7f2a6dff434a] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x5da84) 
[0x7f2a798e8a84] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_ref+0x58) 
[0x7f2a798a37f8] ) 0-dict: dict is NULL [Invalid argument]

the glustershd.log file shows the following:

[2018-10-31 10:10:52.502453] E [rpc-clnt.c:184:call_bail] 
0-myvol-private-client-0: bailing out frame type(GlusterFS 4.x v1) 
op(INODELK(29)) xid = 0xaa398 sent = 2018-10-31 09:40:50.927816. timeout = 1800 
for 127.0.1.1:49152
[2018-10-31 10:10:52.502502] E [MSGID: 114031] 
[client-rpc-fops_v2.c:1306:client4_0_inodelk_cbk] 0-myvol-private-client-0: 
remote operation failed [Transport endpoint is not connected]

any idea what could be wrong here?

Regards,
Mabi

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] quota: error returned while attempting to connect to host:(null), port:0

2018-10-31 Thread mabi
I also noticed in the quotad.log file a lot of the following error messages:

The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 
'trusted.glusterfs.quota.size' is not sent on wire [Invalid argument]" repeated 
107 times between [2018-10-31 08:00:27.718645] and [2018-10-31 08:02:04.476875]
The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 
'volume-uuid' is not sent on wire [Invalid argument]" repeated 107 times 
between [2018-10-31 08:00:27.718696] and [2018-10-31 08:02:04.476876]
[2018-10-31 08:02:14.629667] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 
0-dict: key 'trusted.glusterfs.quota.size' is not sent on wire [Invalid 
argument]
[2018-10-31 08:02:14.629746] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 
0-dict: key 'volume-uuid' is not sent on wire [Invalid argument]

Maybe this is related...


‐‐‐ Original Message ‐‐‐
On Tuesday, October 30, 2018 6:24 PM, mabi  wrote:

> Hello,
>
> Since I upgraded my 3-node (with arbiter) GlusterFS from 3.12.14 to 4.1.5, I
> see quite a lot of the following error message in the brick log file for one
> of my volumes where I have quota enabled:
>
> [2018-10-21 05:03:25.158311] W [rpc-clnt.c:1753:rpc_clnt_submit] 
> 0-myvol-private-quota: error returned while attempting to connect to 
> host:(null), port:0
>
> Is this a bug? Should I file a bug report? Or does anyone know what might be
> wrong here with my system?
>
> Best regards,
> Mabi


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-10-31 Thread Jeevan Patnaik
Hi Vlad,

Can gluster 4.1.5 also be used for production? There's no documentation for
gluster 4.

Regards,
Jeevan.

On Wed, Oct 31, 2018, 9:37 AM Vlad Kopylov  wrote:

> 3.12.14 working fine in production for file access
> you can find vol and mount settings in mailing list archive
>
> On Tue, Oct 30, 2018 at 11:05 AM Jeevan Patnaik 
> wrote:
>
>> Hi All,
>>
>> I see gluster 3 has reached end of life and gluster 5 has just been
>> introduced.
>>
>> Is gluster 4.1.5 stable enough for production deployment? I see that by
>> default the gluster docs point to v3 only and there are no gluster docs
>> for 4 or 5. Why so? And I'm mainly looking for a stable gluster tiering
>> feature and Kernel NFS support. I faced a few issues with tiering in 3.14 and
>> so am wondering whether I should switch to 4.1.5, as it will be a production
>> deployment.
>>
>> Thank you.
>>
>> Regards,
>> Jeevan.
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users