Just to note that the logs below are from the dd run with bs=512, which was failing.
Attached are the full logs from the mount and the brick.

Alex

On Tue, Jun 6, 2017 at 3:18 PM, Abi Askushi <rightkickt...@gmail.com> wrote:

> Hi Krutika,
>
> My comments inline.
>
> Also attached the strace of:
> strace -y -ff -o /root/512-trace-on-root.log dd if=/dev/zero
> of=/mnt/test2.img oflag=direct bs=512 count=1
>
> and of:
> strace -y -ff -o /root/4096-trace-on-root.log dd if=/dev/zero
> of=/mnt/test2.img oflag=direct bs=4096 count=16
>
> I have mounted gluster volume at /mnt.
> The dd with bs=4096 is successful.
>
> The gluster mount log gives only the following:
> [2017-06-06 12:04:54.102576] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-0: remote operation failed [Invalid argument]
> [2017-06-06 12:04:54.102591] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-1: remote operation failed [Invalid argument]
> [2017-06-06 12:04:54.103355] W [fuse-bridge.c:2312:fuse_writev_cbk] 0-glusterfs-fuse: 205: WRITE => -1 gfid=075ab3a5-0274-4f07-a075-2748c3b4d394 fd=0x7faf1d08706c (Transport endpoint is not connected)
>
> The gluster brick log gives:
> [2017-06-06 12:07:03.793080] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]
> [2017-06-06 12:07:03.793172] E [MSGID: 115067] [server-rpc-fops.c:1346:server_writev_cbk] 0-engine-server: 291: WRITEV 0 (075ab3a5-0274-4f07-a075-2748c3b4d394) ==> (Invalid argument) [Invalid argument]
>
>
>
> On Tue, Jun 6, 2017 at 12:50 PM, Krutika Dhananjay <kdhan...@redhat.com>
> wrote:
>
>> OK.
>>
>> So for the 'Transport endpoint is not connected' issue, could you share
>> the mount and brick logs?
>>
>> Hmmm.. 'Invalid argument' error even on the root partition. What if you
>> change bs to 4096 and run?
>>
> If I use bs=4096 the dd is successful both on /root and on the gluster-mounted
> volume.
>
>>
>> The logs I showed in my earlier mail show that gluster is merely
>> returning the error it got from the disk file system where the
>> brick is hosted. But you're right that the offset 127488 is not
>> 4K-aligned.
>>
>> If the dd on /root worked for you with bs=4096, could you try the same
>> directly on the gluster mount point, on a dummy file, and capture the
>> strace output of dd?
>> You can perhaps reuse your existing gluster volume by mounting it at
>> another location and doing the dd there.
>> Here's what you need to execute:
>>
>> strace -ff -T -p <pid-of-mount-process> -o <path-to-the-file-where-you-want-the-output-saved>
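>>
>> In case it helps, one way to find the pid of the fuse mount process
>> (assuming a single gluster fuse mount on the host) is something like:
>>
>> pgrep -f 'glusterfs.*volfile-id'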
>>
>> FWIW, here's something I found in man(2) open:
>>
>> *Under Linux 2.4, transfer sizes, and the alignment of the user buffer and the file offset must all be multiples of the logical block size of the filesystem. Since Linux 2.6.0, alignment to the logical block size of the underlying storage (typically 512 bytes) suffices. The logical block size can be determined using the ioctl(2) BLKSSZGET operation or from the shell using the command: blockdev --getss*
>>
> Please note also that the physical disks have a 4K sector size (native).
> Thus the OS reports a 4096/4096 logical/physical sector size:
> [root@v0 ~]# blockdev --getss /dev/sda
> 4096
> [root@v0 ~]# blockdev --getpbsz /dev/sda
> 4096
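>
> For what it's worth, the same man-page behaviour should be reproducible
> directly on the brick filesystem (align-test.img below is just a scratch
> file): with oflag=direct, only transfers in multiples of the 4096-byte
> logical sector should go through on these disks:
>
> dd if=/dev/zero of=/gluster/engine/brick/align-test.img oflag=direct bs=512 count=1    # should fail: Invalid argument
> dd if=/dev/zero of=/gluster/engine/brick/align-test.img oflag=direct bs=4096 count=1   # should succeed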
>
>>
>>
>> -Krutika
>>
>>
>> On Tue, Jun 6, 2017 at 1:18 AM, Abi Askushi <rightkickt...@gmail.com>
>> wrote:
>>
>>> Also, when testing with dd I get the following:
>>>
>>> *Testing on the gluster mount: *
>>> dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/test2.img oflag=direct bs=512 count=1
>>> dd: error writing ‘/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/test2.img’: *Transport endpoint is not connected*
>>> 1+0 records in
>>> 0+0 records out
>>> 0 bytes (0 B) copied, 0.00336755 s, 0.0 kB/s
>>>
>>> *Testing on the /root directory (XFS): *
>>> dd if=/dev/zero of=/test2.img oflag=direct bs=512 count=1
>>> dd: error writing ‘/test2.img’: *Invalid argument*
>>> 1+0 records in
>>> 0+0 records out
>>> 0 bytes (0 B) copied, 0.000321239 s, 0.0 kB/s
>>>
>>> It seems that gluster is trying to do the same write and failing in the same way.
>>>
>>>
>>>
>>> On Mon, Jun 5, 2017 at 10:10 PM, Abi Askushi <rightkickt...@gmail.com>
>>> wrote:
>>>
>>>> The question that arises is what is needed to make gluster aware of the
>>>> 4K physical sectors presented to it (the logical sector size is also 4K). The
>>>> offset (127488) in the log is not 4K-aligned.
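>>>>
>>>> For reference, the misalignment is easy to confirm from a shell:
>>>>
>>>> echo $((127488 % 512)) $((127488 % 4096))
>>>> 0 512
>>>>
>>>> i.e. the offset is a multiple of 512 but sits 512 bytes past a 4K boundary.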
>>>>
>>>> Alex
>>>>
>>>> On Mon, Jun 5, 2017 at 2:47 PM, Abi Askushi <rightkickt...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Krutika,
>>>>>
>>>>> I am saying that I am facing this issue with 4K drives. I never
>>>>> encountered this issue with 512-byte sector drives.
>>>>>
>>>>> Alex
>>>>>
>>>>> On Jun 5, 2017 14:26, "Krutika Dhananjay" <kdhan...@redhat.com> wrote:
>>>>>
>>>>>> This seems like a case of O_DIRECT reads and writes gone wrong,
>>>>>> judging by the 'Invalid argument' errors.
>>>>>>
>>>>>> The two operations that have failed on gluster bricks are:
>>>>>>
>>>>>> [2017-06-05 09:40:39.428979] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]
>>>>>> [2017-06-05 09:41:00.865760] E [MSGID: 113040] [posix.c:3178:posix_readv] 0-engine-posix: read failed on gfid=8c94f658-ac3c-4e3a-b368-8c038513a914, fd=0x7f408584c06c, offset=127488 size=512, buf=0x7f4083c0b000 [Invalid argument]
>>>>>>
>>>>>> But then, both the write and the read have a 512-byte-aligned offset,
>>>>>> size, and buf address (which is correct).
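>>>>>>
>>>>>> Of course, a 512-byte-aligned request only suffices if the underlying
>>>>>> filesystem accepts 512-byte sectors; on a native 4K disk it will not.
>>>>>> Assuming the brick is an XFS mount, the sector size it was formatted
>>>>>> with can be checked with, e.g.:
>>>>>>
>>>>>> xfs_info /path/to/brick-mountpoint | grep sectsz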
>>>>>>
>>>>>> Are you saying you don't see this issue with 4K block-size?
>>>>>>
>>>>>> -Krutika
>>>>>>
>>>>>> On Mon, Jun 5, 2017 at 3:21 PM, Abi Askushi <rightkickt...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Sahina,
>>>>>>>
>>>>>>> Attached are the logs. Let me know if anything else is needed.
>>>>>>>
>>>>>>> I have 5 disks (with 4K physical sectors) in RAID5. The RAID currently
>>>>>>> has a 64K stripe size.
>>>>>>> I have prepared the storage as below:
>>>>>>>
>>>>>>> pvcreate --dataalignment 256K /dev/sda4
>>>>>>> vgcreate --physicalextentsize 256K gluster /dev/sda4
>>>>>>>
>>>>>>> lvcreate -n engine --size 120G gluster
>>>>>>> mkfs.xfs -f -i size=512 /dev/gluster/engine
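>>>>>>>
>>>>>>> (Side note: with 5 disks in RAID5 there are 4 data disks, so the full
>>>>>>> stripe is 4 x 64K = 256K, which is what the dataalignment above matches.
>>>>>>> If it helps, the same geometry can also be handed to XFS explicitly, e.g.:
>>>>>>>
>>>>>>> mkfs.xfs -f -i size=512 -d su=64k,sw=4 /dev/gluster/engine
>>>>>>>
>>>>>>> where sw=4 assumes 4 data disks; adjust to the controller's reported values.)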
>>>>>>>
>>>>>>> Thanx,
>>>>>>> Alex
>>>>>>>
>>>>>>> On Mon, Jun 5, 2017 at 12:14 PM, Sahina Bose <sab...@redhat.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Can we have the gluster mount logs and brick logs to check if it's
>>>>>>>> the same issue?
>>>>>>>>
>>>>>>>> On Sun, Jun 4, 2017 at 11:21 PM, Abi Askushi <
>>>>>>>> rightkickt...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I clean-installed everything and ran into the same issue.
>>>>>>>>> I then ran gdeploy and encountered the same issue when deploying the
>>>>>>>>> engine.
>>>>>>>>> It seems that gluster (?) doesn't like 4K sector drives. I am not
>>>>>>>>> sure if it has to do with alignment. The weird thing is that the gluster
>>>>>>>>> volumes are all OK, replicating normally, and no split brain is
>>>>>>>>> reported.
>>>>>>>>>
>>>>>>>>> The solution to the mentioned bug (1386443
>>>>>>>>> <https://bugzilla.redhat.com/show_bug.cgi?id=1386443>) was to
>>>>>>>>> format with a 512-byte sector size, which in my case is not an option:
>>>>>>>>>
>>>>>>>>> mkfs.xfs -f -i size=512 -s size=512 /dev/gluster/engine
>>>>>>>>> illegal sector size 512; hw sector is 4096
>>>>>>>>>
>>>>>>>>> Is there any workaround to address this?
>>>>>>>>>
>>>>>>>>> Thanx,
>>>>>>>>> Alex
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Jun 4, 2017 at 5:48 PM, Abi Askushi <
>>>>>>>>> rightkickt...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Maor,
>>>>>>>>>>
>>>>>>>>>> My disks have a 4K block size, and from this bug it seems that a gluster
>>>>>>>>>> replica needs a 512B block size.
>>>>>>>>>> Is there a way to make gluster work with 4K drives?
>>>>>>>>>>
>>>>>>>>>> Thank you!
>>>>>>>>>>
>>>>>>>>>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk <mlipc...@redhat.com
>>>>>>>>>> > wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Alex,
>>>>>>>>>>>
>>>>>>>>>>> I saw a bug that might be related to the issue you encountered at
>>>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1386443
>>>>>>>>>>>
>>>>>>>>>>> Sahina, maybe you have some advice? Do you think that BZ1386443 is
>>>>>>>>>>> related?
>>>>>>>>>>>
>>>>>>>>>>> Regards,
>>>>>>>>>>> Maor
>>>>>>>>>>>
>>>>>>>>>>> On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi <
>>>>>>>>>>> rightkickt...@gmail.com> wrote:
>>>>>>>>>>> > Hi All,
>>>>>>>>>>> >
>>>>>>>>>>> > I have successfully installed oVirt (version 4.1) with 3 nodes on
>>>>>>>>>>> top of glusterfs several times.
>>>>>>>>>>> >
>>>>>>>>>>> > This time, when trying to configure the same setup, I am facing the
>>>>>>>>>>> > following issue, which doesn't seem to go away. During installation I
>>>>>>>>>>> > get the error:
>>>>>>>>>>> >
>>>>>>>>>>> > Failed to execute stage 'Misc configuration': Cannot acquire host id: (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
>>>>>>>>>>> >
>>>>>>>>>>> > The only difference in this setup is that instead of standard
>>>>>>>>>>> > partitioning I have GPT partitioning, and the disks have a 4K block
>>>>>>>>>>> > size instead of 512.
>>>>>>>>>>> >
>>>>>>>>>>> > The /var/log/sanlock.log has the following lines:
>>>>>>>>>>> >
>>>>>>>>>>> > 2017-06-03 19:21:15+0200 23450 [943]: s9 lockspace ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:250:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engin-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/ids:0
>>>>>>>>>>> > 2017-06-03 19:21:36+0200 23471 [944]: s9:r5 resource ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047:SDM:/rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmptjkIDI/ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047/dom_md/leases:1048576 for 2,9,23040
>>>>>>>>>>> > 2017-06-03 19:21:36+0200 23471 [943]: s10 lockspace a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922:250:/rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids:0
>>>>>>>>>>> > 2017-06-03 19:21:36+0200 23471 [23522]: a5a6b0e7 aio collect RD 0x7f59b00008c0:0x7f59b00008d0:0x7f59b0101000 result -22:0 match res
>>>>>>>>>>> > 2017-06-03 19:21:36+0200 23471 [23522]: read_sectors delta_leader offset 127488 rv -22 /rhev/data-center/mnt/glusterSD/10.100.100.1:_engine/a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922/dom_md/ids
>>>>>>>>>>> > 2017-06-03 19:21:37+0200 23472 [930]: s9 host 250 1 23450 88c2244c-a782-40ed-9560-6cfa4d46f853.v0.neptune
>>>>>>>>>>> > 2017-06-03 19:21:37+0200 23472 [943]: s10 add_lockspace fail result -22
>>>>>>>>>>> >
>>>>>>>>>>> > And /var/log/vdsm/vdsm.log says:
>>>>>>>>>>> >
>>>>>>>>>>> > 2017-06-03 19:19:38,176+0200 WARN  (jsonrpc/3) [storage.StorageServer.MountConnection] Using user specified backup-volfile-servers option (storageServer:253)
>>>>>>>>>>> > 2017-06-03 19:21:12,379+0200 WARN  (periodic/1) [throttled] MOM not available. (throttledlog:105)
>>>>>>>>>>> > 2017-06-03 19:21:12,380+0200 WARN  (periodic/1) [throttled] MOM not available, KSM stats will be missing. (throttledlog:105)
>>>>>>>>>>> > 2017-06-03 19:21:14,714+0200 WARN  (jsonrpc/1) [storage.StorageServer.MountConnection] Using user specified backup-volfile-servers option (storageServer:253)
>>>>>>>>>>> > 2017-06-03 19:21:15,515+0200 ERROR (jsonrpc/4) [storage.initSANLock] Cannot initialize SANLock for domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922 (clusterlock:238)
>>>>>>>>>>> > Traceback (most recent call last):
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 234, in initSANLock
>>>>>>>>>>> >     sanlock.init_lockspace(sdUUID, idsPath)
>>>>>>>>>>> > SanlockException: (107, 'Sanlock lockspace init failure', 'Transport endpoint is not connected')
>>>>>>>>>>> > 2017-06-03 19:21:15,515+0200 WARN  (jsonrpc/4) [storage.StorageDomainManifest] lease did not initialize successfully (sd:557)
>>>>>>>>>>> > Traceback (most recent call last):
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sd.py", line 552, in initDomainLock
>>>>>>>>>>> >     self._domainLock.initLock(self.getDomainLease())
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 271, in initLock
>>>>>>>>>>> >     initSANLock(self._sdUUID, self._idsPath, lease)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 239, in initSANLock
>>>>>>>>>>> >     raise se.ClusterLockInitError()
>>>>>>>>>>> > ClusterLockInitError: Could not initialize cluster lock: ()
>>>>>>>>>>> > 2017-06-03 19:21:37,867+0200 ERROR (jsonrpc/2) [storage.StoragePool] Create pool hosted_datacenter canceled  (sp:655)
>>>>>>>>>>> > Traceback (most recent call last):
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 652, in create
>>>>>>>>>>> >     self.attachSD(sdUUID)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>>>>>>>>>>> >     return method(self, *args, **kwargs)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 971, in attachSD
>>>>>>>>>>> >     dom.acquireHostId(self.id)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sd.py", line 790, in acquireHostId
>>>>>>>>>>> >     self._manifest.acquireHostId(hostId, async)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sd.py", line 449, in acquireHostId
>>>>>>>>>>> >     self._domainLock.acquireHostId(hostId, async)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 297, in acquireHostId
>>>>>>>>>>> >     raise se.AcquireHostIdFailure(self._sdUUID, e)
>>>>>>>>>>> > AcquireHostIdFailure: Cannot acquire host id: (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
>>>>>>>>>>> > 2017-06-03 19:21:37,870+0200 ERROR (jsonrpc/2) [storage.StoragePool] Domain ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047 detach from MSD ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047 Ver 1 failed. (sp:528)
>>>>>>>>>>> > Traceback (most recent call last):
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 525, in __cleanupDomains
>>>>>>>>>>> >     self.detachSD(sdUUID)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>>>>>>>>>>> >     return method(self, *args, **kwargs)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 1046, in detachSD
>>>>>>>>>>> >     raise se.CannotDetachMasterStorageDomain(sdUUID)
>>>>>>>>>>> > CannotDetachMasterStorageDomain: Illegal action: (u'ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047',)
>>>>>>>>>>> > 2017-06-03 19:21:37,872+0200 ERROR (jsonrpc/2) [storage.StoragePool] Domain a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922 detach from MSD ba6bd862-c2b8-46e7-b2c8-91e4a5bb2047 Ver 1 failed. (sp:528)
>>>>>>>>>>> > Traceback (most recent call last):
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 525, in __cleanupDomains
>>>>>>>>>>> >     self.detachSD(sdUUID)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>>>>>>>>>>> >     return method(self, *args, **kwargs)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 1043, in detachSD
>>>>>>>>>>> >     self.validateAttachedDomain(dom)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>>>>>>>>>>> >     return method(self, *args, **kwargs)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 542, in validateAttachedDomain
>>>>>>>>>>> >     self.validatePoolSD(dom.sdUUID)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>>>>>>>>>>> >     return method(self, *args, **kwargs)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 535, in validatePoolSD
>>>>>>>>>>> >     raise se.StorageDomainNotMemberOfPool(self.spUUID, sdUUID)
>>>>>>>>>>> > StorageDomainNotMemberOfPool: Domain is not member in pool: u'pool=a1e7e9dd-0cf4-41ae-ba13-36297ed66309, domain=a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922'
>>>>>>>>>>> > 2017-06-03 19:21:40,063+0200 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='a2476a33-26f8-4ebd-876d-02fe5d13ef78') Unexpected error (task:870)
>>>>>>>>>>> > Traceback (most recent call last):
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/task.py", line 877, in _run
>>>>>>>>>>> >     return fn(*args, **kargs)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
>>>>>>>>>>> >     res = f(*args, **kwargs)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/hsm.py", line 959, in createStoragePool
>>>>>>>>>>> >     leaseParams)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 652, in create
>>>>>>>>>>> >     self.attachSD(sdUUID)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>>>>>>>>>>> >     return method(self, *args, **kwargs)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sp.py", line 971, in attachSD
>>>>>>>>>>> >     dom.acquireHostId(self.id)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sd.py", line 790, in acquireHostId
>>>>>>>>>>> >     self._manifest.acquireHostId(hostId, async)
>>>>>>>>>>> >   File "/usr/share/vdsm/storage/sd.py", line 449, in acquireHostId
>>>>>>>>>>> >     self._domainLock.acquireHostId(hostId, async)
>>>>>>>>>>> >   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 297, in acquireHostId
>>>>>>>>>>> >     raise se.AcquireHostIdFailure(self._sdUUID, e)
>>>>>>>>>>> > AcquireHostIdFailure: Cannot acquire host id: (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
>>>>>>>>>>> > 2017-06-03 19:21:40,067+0200 ERROR (jsonrpc/2) [storage.Dispatcher] {'status': {'message': "Cannot acquire host id: (u'a5a6b0e7-fc3f-4838-8e26-c8b4d5e5e922', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))", 'code': 661}} (dispatcher:77)
>>>>>>>>>>> >
>>>>>>>>>>> > The gluster volume prepared for engine storage is online and no
>>>>>>>>>>> > split brain is reported. I don't understand what needs to be done to
>>>>>>>>>>> > overcome this. Any ideas will be appreciated.
>>>>>>>>>>> >
>>>>>>>>>>> > Thank you,
>>>>>>>>>>> > Alex
>>>>>>>>>>> >
>>>>>>>>>>> >
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>>
>>
>
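
Brick log: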
[2017-06-06 11:30:33.691882] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.12 (args: /usr/sbin/glusterfsd -s gluster0 --volfile-id engine.gluster0.gluster-engine-brick -p /var/lib/glusterd/vols/engine/run/gluster0-gluster-engine-brick.pid -S /var/run/gluster/9779b79e531e2de93c00c2b4e6cf92ae.socket --brick-name /gluster/engine/brick -l /var/log/glusterfs/bricks/gluster-engine-brick.log --xlator-option *-posix.glusterd-uuid=e4d940bc-c2ee-449a-8904-690e176a638b --brick-port 49152 --xlator-option engine-server.listen-port=49152)
[2017-06-06 11:30:33.707133] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-06 11:30:33.713148] I [MSGID: 101173] [graph.c:269:gf_add_cmdline_options] 0-engine-server: adding option 'listen-port' for volume 'engine-server' with value '49152'
[2017-06-06 11:30:33.713198] I [MSGID: 101173] [graph.c:269:gf_add_cmdline_options] 0-engine-posix: adding option 'glusterd-uuid' for volume 'engine-posix' with value 'e4d940bc-c2ee-449a-8904-690e176a638b'
[2017-06-06 11:30:33.713602] I [MSGID: 115034] [server.c:398:_check_for_auth_option] 0-engine-decompounder: skip format check for non-addr auth option auth.login./gluster/engine/brick.allow
[2017-06-06 11:30:33.713610] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2017-06-06 11:30:33.713621] I [MSGID: 115034] [server.c:398:_check_for_auth_option] 0-engine-decompounder: skip format check for non-addr auth option auth.login.fdf75175-8b60-4a1c-944a-b4365ef9297c.password
[2017-06-06 11:30:33.714737] I [rpcsvc.c:2243:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
[2017-06-06 11:30:33.714848] W [MSGID: 101002] [options.c:954:xl_opt_validate] 0-engine-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
[2017-06-06 11:30:33.719418] I [MSGID: 121050] [ctr-helper.c:259:extract_ctr_options] 0-gfdbdatastore: CTR Xlator is disabled.
[2017-06-06 11:30:33.719461] W [MSGID: 101105] [gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-engine-changetimerecorder: Failed to retrieve sql-db-pagesize from params.Assigning default value: 4096
[2017-06-06 11:30:33.719477] W [MSGID: 101105] [gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-engine-changetimerecorder: Failed to retrieve sql-db-journalmode from params.Assigning default value: wal
[2017-06-06 11:30:33.719489] W [MSGID: 101105] [gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-engine-changetimerecorder: Failed to retrieve sql-db-sync from params.Assigning default value: off
[2017-06-06 11:30:33.719501] W [MSGID: 101105] [gfdb_sqlite3.h:234:gfdb_set_sql_params] 0-engine-changetimerecorder: Failed to retrieve sql-db-autovacuum from params.Assigning default value: none
[2017-06-06 11:30:33.732482] I [trash.c:2408:init] 0-engine-trash: no option specified for 'eliminate', using NULL
[2017-06-06 11:30:33.733260] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-engine-server: option 'rpc-auth.auth-glusterfs' is not recognized
[2017-06-06 11:30:33.733307] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-engine-server: option 'rpc-auth.auth-unix' is not recognized
[2017-06-06 11:30:33.733340] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-engine-server: option 'rpc-auth.auth-null' is not recognized
[2017-06-06 11:30:33.733401] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-engine-server: option 'auth-path' is not recognized
[2017-06-06 11:30:33.733436] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-engine-quota: option 'timeout' is not recognized
[2017-06-06 11:30:33.733530] W [MSGID: 101174] [graph.c:360:_log_if_unknown_option] 0-engine-trash: option 'brick-path' is not recognized
Final graph:
+------------------------------------------------------------------------------+
  1: volume engine-posix
  2:     type storage/posix
  3:     option glusterd-uuid e4d940bc-c2ee-449a-8904-690e176a638b
  4:     option directory /gluster/engine/brick
  5:     option volume-id 2171cc32-2b60-4bcf-a70f-ef737f479692
  6:     option brick-uid 36
  7:     option brick-gid 36
  8: end-volume
  9:  
 10: volume engine-trash
 11:     type features/trash
 12:     option trash-dir .trashcan
 13:     option brick-path /gluster/engine/brick
 14:     option trash-internal-op off
 15:     subvolumes engine-posix
 16: end-volume
 17:  
 18: volume engine-changetimerecorder
 19:     type features/changetimerecorder
 20:     option db-type sqlite3
 21:     option hot-brick off
 22:     option db-name brick.db
 23:     option db-path /gluster/engine/brick/.glusterfs/
 24:     option record-exit off
 25:     option ctr_link_consistency off
 26:     option ctr_lookupheal_link_timeout 300
 27:     option ctr_lookupheal_inode_timeout 300
 28:     option record-entry on
 29:     option ctr-enabled off
 30:     option record-counters off
 31:     option ctr-record-metadata-heat off
 32:     option sql-db-cachesize 1000
 33:     option sql-db-wal-autocheckpoint 1000
 34:     subvolumes engine-trash
 35: end-volume
 36:  
 37: volume engine-changelog
 38:     type features/changelog
 39:     option changelog-brick /gluster/engine/brick
 40:     option changelog-dir /gluster/engine/brick/.glusterfs/changelogs
 41:     option changelog-barrier-timeout 120
 42:     subvolumes engine-changetimerecorder
 43: end-volume
 44:  
 45: volume engine-bitrot-stub
 46:     type features/bitrot-stub
 47:     option export /gluster/engine/brick
 48:     subvolumes engine-changelog
 49: end-volume
 50:  
 51: volume engine-access-control
 52:     type features/access-control
 53:     subvolumes engine-bitrot-stub
 54: end-volume
 55:  
 56: volume engine-locks
 57:     type features/locks
 58:     subvolumes engine-access-control
 59: end-volume
 60:  
 61: volume engine-worm
 62:     type features/worm
 63:     option worm off
 64:     option worm-file-level off
 65:     subvolumes engine-locks
 66: end-volume
 67:  
 68: volume engine-read-only
 69:     type features/read-only
 70:     option read-only off
 71:     subvolumes engine-worm
 72: end-volume
 73:  
 74: volume engine-leases
 75:     type features/leases
 76:     option leases off
 77:     subvolumes engine-read-only
 78: end-volume
 79:  
 80: volume engine-upcall
 81:     type features/upcall
 82:     option cache-invalidation off
 83:     subvolumes engine-leases
 84: end-volume
 85:  
 86: volume engine-io-threads
 87:     type performance/io-threads
 88:     option low-prio-threads 32
 89:     subvolumes engine-upcall
 90: end-volume
 91:  
 92: volume engine-marker
 93:     type features/marker
 94:     option volume-uuid 2171cc32-2b60-4bcf-a70f-ef737f479692
 95:     option timestamp-file /var/lib/glusterd/vols/engine/marker.tstamp
 96:     option quota-version 0
 97:     option xtime off
 98:     option gsync-force-xtime off
 99:     option quota off
100:     option inode-quota off
101:     subvolumes engine-io-threads
102: end-volume
103:  
104: volume engine-barrier
105:     type features/barrier
106:     option barrier disable
107:     option barrier-timeout 120
108:     subvolumes engine-marker
109: end-volume
110:  
111: volume engine-index
112:     type features/index
113:     option index-base /gluster/engine/brick/.glusterfs/indices
114:     option xattrop-dirty-watchlist trusted.afr.dirty
115:     option xattrop-pending-watchlist trusted.afr.engine-
116:     subvolumes engine-barrier
117: end-volume
118:  
119: volume engine-quota
120:     type features/quota
121:     option volume-uuid engine
122:     option server-quota off
123:     option timeout 0
124:     option deem-statfs off
125:     subvolumes engine-index
126: end-volume
127:  
128: volume /gluster/engine/brick
129:     type debug/io-stats
130:     option log-level INFO
131:     option latency-measurement off
132:     option count-fop-hits off
133:     subvolumes engine-quota
134: end-volume
135:  
136: volume engine-decompounder
137:     type performance/decompounder
138:     subvolumes /gluster/engine/brick
139: end-volume
140:  
141: volume engine-server
142:     type protocol/server
143:     option transport.socket.listen-port 49152
144:     option rpc-auth.auth-glusterfs on
145:     option rpc-auth.auth-unix on
146:     option rpc-auth.auth-null on
147:     option rpc-auth-allow-insecure on
148:     option transport-type tcp
149:     option transport.address-family inet
150:     option auth.login./gluster/engine/brick.allow fdf75175-8b60-4a1c-944a-b4365ef9297c
151:     option auth.login.fdf75175-8b60-4a1c-944a-b4365ef9297c.password 08f20f45-f0de-4dd7-90b0-c5a710bac2a2
152:     option auth-path /gluster/engine/brick
153:     option auth.addr./gluster/engine/brick.allow *
154:     option transport.tcp-user-timeout 30
155:     subvolumes engine-decompounder
156: end-volume
157:  
+------------------------------------------------------------------------------+
[2017-06-06 11:30:37.798095] I [login.c:76:gf_auth] 0-auth/login: allowed user names: fdf75175-8b60-4a1c-944a-b4365ef9297c
[2017-06-06 11:30:37.798137] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-engine-server: accepted client from v0.vi-11877-2017/06/06-11:30:33:762848-engine-client-0-0-0 (version: 3.8.12)
[2017-06-06 11:30:37.955399] I [login.c:76:gf_auth] 0-auth/login: allowed user names: fdf75175-8b60-4a1c-944a-b4365ef9297c
[2017-06-06 11:30:37.955431] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-engine-server: accepted client from v1.iv-13136-2017/06/06-11:30:33:918085-engine-client-0-0-0 (version: 3.8.12)
[2017-06-06 11:30:37.968785] I [login.c:76:gf_auth] 0-auth/login: allowed user names: fdf75175-8b60-4a1c-944a-b4365ef9297c
[2017-06-06 11:30:37.968822] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-engine-server: accepted client from v2.iv-12970-2017/06/06-11:30:33:927502-engine-client-0-0-0 (version: 3.8.12)
[2017-06-06 11:33:05.593833] I [login.c:76:gf_auth] 0-auth/login: allowed user names: fdf75175-8b60-4a1c-944a-b4365ef9297c
[2017-06-06 11:33:05.593862] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-engine-server: accepted client from v0.vi-12038-2017/06/06-11:33:05:522313-engine-client-0-0-0 (version: 3.8.12)
[2017-06-06 11:34:47.845771] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]
[2017-06-06 11:34:47.845841] E [MSGID: 115067] [server-rpc-fops.c:1346:server_writev_cbk] 0-engine-server: 70: WRITEV 0 (075ab3a5-0274-4f07-a075-2748c3b4d394) ==> (Invalid argument) [Invalid argument]
[2017-06-06 11:40:57.380895] I [login.c:76:gf_auth] 0-auth/login: allowed user names: fdf75175-8b60-4a1c-944a-b4365ef9297c
[2017-06-06 11:40:57.381056] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-engine-server: accepted client from v0.vi-12676-2017/06/06-11:40:57:342872-engine-client-0-0-0 (version: 3.8.12)
[2017-06-06 11:40:57.438659] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-engine-server: disconnecting connection from v0.vi-12676-2017/06/06-11:40:57:342872-engine-client-0-0-0
[2017-06-06 11:40:57.438707] I [MSGID: 101055] [client_t.c:415:gf_client_unref] 0-engine-server: Shutting down connection v0.vi-12676-2017/06/06-11:40:57:342872-engine-client-0-0-0
[2017-06-06 11:40:57.676640] I [login.c:76:gf_auth] 0-auth/login: allowed user names: fdf75175-8b60-4a1c-944a-b4365ef9297c
[2017-06-06 11:40:57.676670] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-engine-server: accepted client from v0.vi-12776-2017/06/06-11:40:57:639824-engine-client-0-0-0 (version: 3.8.12)
[2017-06-06 11:40:57.933701] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-engine-server: disconnecting connection from v0.vi-12776-2017/06/06-11:40:57:639824-engine-client-0-0-0
[2017-06-06 11:40:57.933887] I [MSGID: 101055] [client_t.c:415:gf_client_unref] 0-engine-server: Shutting down connection v0.vi-12776-2017/06/06-11:40:57:639824-engine-client-0-0-0
[2017-06-06 11:40:58.522572] I [login.c:76:gf_auth] 0-auth/login: allowed user names: fdf75175-8b60-4a1c-944a-b4365ef9297c
[2017-06-06 11:40:58.522612] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-engine-server: accepted client from v0.vi-12920-2017/06/06-11:40:58:486434-engine-client-0-0-0 (version: 3.8.12)
[2017-06-06 11:40:58.572491] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-engine-server: disconnecting connection from v0.vi-12920-2017/06/06-11:40:58:486434-engine-client-0-0-0
[2017-06-06 11:40:58.572539] I [MSGID: 101055] [client_t.c:415:gf_client_unref] 0-engine-server: Shutting down connection v0.vi-12920-2017/06/06-11:40:58:486434-engine-client-0-0-0
[2017-06-06 11:44:49.542013] I [login.c:76:gf_auth] 0-auth/login: allowed user names: fdf75175-8b60-4a1c-944a-b4365ef9297c
[2017-06-06 11:44:49.542157] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-engine-server: accepted client from v0.vi-26955-2017/06/06-11:44:49:506605-engine-client-0-0-0 (version: 3.8.12)
[2017-06-06 11:44:50.098076] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]
[2017-06-06 11:44:50.098128] E [MSGID: 115067] [server-rpc-fops.c:1346:server_writev_cbk] 0-engine-server: 311: WRITEV 0 (a0734f10-db9e-4659-be75-4f5d54594ae7) ==> (Invalid argument) [Invalid argument]
[2017-06-06 11:45:11.435694] E [MSGID: 113040] [posix.c:3178:posix_readv] 0-engine-posix: read failed on gfid=a0734f10-db9e-4659-be75-4f5d54594ae7, fd=0x7f9e6fb1106c, offset=127488 size=512, buf=0x7f9e6ded0000 [Invalid argument]
[2017-06-06 11:45:11.435872] E [MSGID: 115068] [server-rpc-fops.c:1395:server_readv_cbk] 0-engine-server: 338: READV -2 (a0734f10-db9e-4659-be75-4f5d54594ae7) ==> (Invalid argument) [Invalid argument]
[2017-06-06 11:46:16.060652] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]
[2017-06-06 11:46:16.060682] E [MSGID: 115067] [server-rpc-fops.c:1346:server_writev_cbk] 0-engine-server: 156: WRITEV 0 (075ab3a5-0274-4f07-a075-2748c3b4d394) ==> (Invalid argument) [Invalid argument]
[2017-06-06 11:48:36.858637] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-engine-server: disconnecting connection from v0.vi-26955-2017/06/06-11:44:49:506605-engine-client-0-0-0
[2017-06-06 11:48:36.858694] I [MSGID: 101055] [client_t.c:415:gf_client_unref] 0-engine-server: Shutting down connection v0.vi-26955-2017/06/06-11:44:49:506605-engine-client-0-0-0
[2017-06-06 12:03:00.285720] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]
[2017-06-06 12:03:00.285932] E [MSGID: 115067] [server-rpc-fops.c:1346:server_writev_cbk] 0-engine-server: 237: WRITEV 0 (075ab3a5-0274-4f07-a075-2748c3b4d394) ==> (Invalid argument) [Invalid argument]
[2017-06-06 12:04:54.102278] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]
[2017-06-06 12:04:54.102500] E [MSGID: 115067] [server-rpc-fops.c:1346:server_writev_cbk] 0-engine-server: 277: WRITEV 0 (075ab3a5-0274-4f07-a075-2748c3b4d394) ==> (Invalid argument) [Invalid argument]
[2017-06-06 12:07:03.793080] E [MSGID: 113072] [posix.c:3453:posix_writev] 0-engine-posix: write failed: offset 0, [Invalid argument]
[2017-06-06 12:07:03.793172] E [MSGID: 115067] [server-rpc-fops.c:1346:server_writev_cbk] 0-engine-server: 291: WRITEV 0 (075ab3a5-0274-4f07-a075-2748c3b4d394) ==> (Invalid argument) [Invalid argument]
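
Mount log: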
[2017-06-06 11:33:05.528021] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.12 (args: /usr/sbin/glusterfs --volfile-server=10.100.100.1 --volfile-id=/engine /mnt)
[2017-06-06 11:33:05.569878] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-06 11:33:05.577820] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2017-06-06 11:33:05.579284] I [MSGID: 114020] [client.c:2356:notify] 0-engine-client-0: parent translators are ready, attempting connect on transport
[2017-06-06 11:33:05.584527] I [MSGID: 114020] [client.c:2356:notify] 0-engine-client-1: parent translators are ready, attempting connect on transport
[2017-06-06 11:33:05.584805] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-engine-client-0: changing port to 49152 (from 0)
[2017-06-06 11:33:05.588493] I [MSGID: 114020] [client.c:2356:notify] 0-engine-client-2: parent translators are ready, attempting connect on transport
[2017-06-06 11:33:05.593669] I [MSGID: 114057] [client-handshake.c:1440:select_server_supported_programs] 0-engine-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-06-06 11:33:05.594023] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-engine-client-1: changing port to 49152 (from 0)
[2017-06-06 11:33:05.594098] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-engine-client-0: Connected to engine-client-0, attached to remote volume '/gluster/engine/brick'.
[2017-06-06 11:33:05.594114] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-engine-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2017-06-06 11:33:05.594182] I [MSGID: 108005] [afr-common.c:4387:afr_notify] 0-engine-replicate-0: Subvolume 'engine-client-0' came back up; going online.
Final graph:
+------------------------------------------------------------------------------+
  1: volume engine-client-0
  2:     type protocol/client
  3:     option clnt-lk-version 1
  4:     option volfile-checksum 0
  5:     option volfile-key /engine
  6:     option client-version 3.8.12
  7:     option process-uuid v0.vi-12038-2017/06/06-11:33:05:522313-engine-client-0-0-0
  8:     option fops-version 1298437
  9:     option ping-timeout 30
 10:     option remote-host gluster0
 11:     option remote-subvolume /gluster/engine/brick
 12:     option transport-type socket
 13:     option transport.address-family inet
 14:     option username fdf75175-8b60-4a1c-944a-b4365ef9297c
 15:     option password 08f20f45-f0de-4dd7-90b0-c5a710bac2a2
 16:     option filter-O_DIRECT off
 17:     option send-gids true
 18: end-volume
 19:  
 20: volume engine-client-1
 21:     type protocol/client
 22:     option ping-timeout 30
 23:     option remote-host gluster1
 24:     option remote-subvolume /gluster/engine/brick
 25:     option transport-type socket
 26:     option transport.address-family inet
 27:     option username fdf75175-8b60-4a1c-944a-b4365ef9297c
 28:     option password 08f20f45-f0de-4dd7-90b0-c5a710bac2a2
 29:     option filter-O_DIRECT off
 30:     option send-gids true
 31: end-volume
 32:  
 33: volume engine-client-2
 34:     type protocol/client
 35:     option ping-timeout 30
 36:     option remote-host gluster2
 37:     option remote-subvolume /gluster/engine/brick
 38:     option transport-type socket
 39:     option transport.address-family inet
 40:     option username fdf75175-8b60-4a1c-944a-b4365ef9297c
 41:     option password 08f20f45-f0de-4dd7-90b0-c5a710bac2a2
 42:     option filter-O_DIRECT off
 43:     option send-gids true
 44: end-volume
 45:  
 46: volume engine-replicate-0
 47:     type cluster/replicate
 48:     option arbiter-count 1
 49:     option data-self-heal-algorithm full
 50:     option eager-lock enable
 51:     option quorum-type auto
 52:     option shd-max-threads 8
 53:     option shd-wait-qlength 10000
 54:     option locking-scheme granular
 55:     option granular-entry-heal enable
 56:     subvolumes engine-client-0 engine-client-1 engine-client-2
 57: end-volume
 58:  
 59: volume engine-dht
 60:     type cluster/distribute
 61:     option lock-migration off
 62:     subvolumes engine-replicate-0
 63: end-volume
 64:  
 65: volume engine-shard
 66:     type features/shard
 67:     subvolumes engine-dht
 68: end-volume
 69:  
 70: volume engine-write-behind
 71:     type performance/write-behind
 72:     option strict-O_DIRECT on
 73:     subvolumes engine-shard
 74: end-volume
 75:  
 76: volume engine-readdir-ahead
 77:     type performance/readdir-ahead
 78:     subvolumes engine-write-behind
 79: end-volume
 80:  
 81: volume engine-open-behind
 82:     type performance/open-behind
 83:     subvolumes engine-readdir-ahead
 84: end-volume
 85:  
 86: volume engine
 87:     type debug/io-stats
 88:     option log-level INFO
 89:     option latency-measurement off
 90:     option count-fop-hits off
 91:     subvolumes engine-open-behind
 92: end-volume
 93:  
 94: volume meta-autoload
 95:     type meta
 96:     subvolumes engine
 97: end-volume
 98:  
+------------------------------------------------------------------------------+
[2017-06-06 11:33:05.597131] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-engine-client-0: Server lk version = 1
[2017-06-06 11:33:05.597575] I [rpc-clnt.c:1965:rpc_clnt_reconfig] 0-engine-client-2: changing port to 49152 (from 0)
[2017-06-06 11:33:05.601024] I [MSGID: 114057] [client-handshake.c:1440:select_server_supported_programs] 0-engine-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-06-06 11:33:05.601601] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-engine-client-1: Connected to engine-client-1, attached to remote volume '/gluster/engine/brick'.
[2017-06-06 11:33:05.601626] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-engine-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2017-06-06 11:33:05.601879] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-engine-client-1: Server lk version = 1
[2017-06-06 11:33:05.604357] I [MSGID: 114057] [client-handshake.c:1440:select_server_supported_programs] 0-engine-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-06-06 11:33:05.604968] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-engine-client-2: Connected to engine-client-2, attached to remote volume '/gluster/engine/brick'.
[2017-06-06 11:33:05.604990] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-engine-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2017-06-06 11:33:05.609327] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-engine-client-2: Server lk version = 1
[2017-06-06 11:33:05.609390] I [fuse-bridge.c:4147:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
[2017-06-06 11:33:05.609413] I [fuse-bridge.c:4832:fuse_graph_sync] 0-fuse: switched to graph 0
[2017-06-06 11:33:05.610516] I [MSGID: 108031] [afr-common.c:2157:afr_local_discovery_cbk] 0-engine-replicate-0: selecting local read_child engine-client-0
[2017-06-06 11:33:05.611317] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-engine-dht: Found anomalies in / (gfid = 00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0
[2017-06-06 11:33:08.826937] I [MSGID: 109063] [dht-layout.c:713:dht_layout_normalize] 0-engine-dht: Found anomalies in /.trashcan (gfid = 00000000-0000-0000-0000-000000000005). Holes=1 overlaps=0
[2017-06-06 11:34:47.845929] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-0: remote operation failed [Invalid argument]
[2017-06-06 11:34:47.846174] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-1: remote operation failed [Invalid argument]
[2017-06-06 11:34:47.846744] W [fuse-bridge.c:2312:fuse_writev_cbk] 0-glusterfs-fuse: 16: WRITE => -1 gfid=075ab3a5-0274-4f07-a075-2748c3b4d394 fd=0x7faf1d08706c (Transport endpoint is not connected)
[2017-06-06 11:46:16.060784] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-0: remote operation failed [Invalid argument]
[2017-06-06 11:46:16.061082] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-1: remote operation failed [Invalid argument]
[2017-06-06 11:46:16.061590] W [fuse-bridge.c:2312:fuse_writev_cbk] 0-glusterfs-fuse: 135: WRITE => -1 gfid=075ab3a5-0274-4f07-a075-2748c3b4d394 fd=0x7faf1d08706c (Transport endpoint is not connected)
[2017-06-06 12:03:00.286068] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-0: remote operation failed [Invalid argument]
[2017-06-06 12:03:00.286334] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-1: remote operation failed [Invalid argument]
[2017-06-06 12:03:00.286951] W [fuse-bridge.c:2312:fuse_writev_cbk] 0-glusterfs-fuse: 182: WRITE => -1 gfid=075ab3a5-0274-4f07-a075-2748c3b4d394 fd=0x7faf1d08706c (Transport endpoint is not connected)
[2017-06-06 12:04:54.102576] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-0: remote operation failed [Invalid argument]
[2017-06-06 12:04:54.102591] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-1: remote operation failed [Invalid argument]
[2017-06-06 12:04:54.103355] W [fuse-bridge.c:2312:fuse_writev_cbk] 0-glusterfs-fuse: 205: WRITE => -1 gfid=075ab3a5-0274-4f07-a075-2748c3b4d394 fd=0x7faf1d08706c (Transport endpoint is not connected)
[2017-06-06 12:07:03.793313] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-0: remote operation failed [Invalid argument]
[2017-06-06 12:07:03.793392] W [MSGID: 114031] [client-rpc-fops.c:854:client3_3_writev_cbk] 0-engine-client-1: remote operation failed [Invalid argument]
[2017-06-06 12:07:03.793921] W [fuse-bridge.c:2312:fuse_writev_cbk] 0-glusterfs-fuse: 217: WRITE => -1 gfid=075ab3a5-0274-4f07-a075-2748c3b4d394 fd=0x7faf1d08706c (Transport endpoint is not connected)
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
