On Thu, May 16, 2019 at 10:02 PM Strahil <hunter86...@yahoo.com> wrote:

> This is my previous e-mail:
>
> On May 16, 2019 15:23, Strahil Nikolov <hunter86...@yahoo.com> wrote:
>
> It seems that the issue is within the 'dd' command as it stays waiting for
> input:
>
> [root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock  of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync      ^C0+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s
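>
> With no 'if=' and nothing on stdin, dd simply blocks reading its input.
> Feeding it input should make the same command complete (a sketch;
> if=/dev/zero and the file name are only illustrative):
>
> [root@ovirt1 mnt]# /usr/bin/dd if=/dev/zero iflag=fullblock of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync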
>
> Changing the dd command works and shows that gluster itself is fine (note
> the '0+1 records in' below: without iflag=fullblock, dd wrote a single
> partial block read from the pipe, which is why only 131072 bytes were
> copied):
>
> [root@ovirt1 mnt]# cat /dev/urandom |  /usr/bin/dd  of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync  0+1 records in
> 0+1 records out
> 131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s
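>
> (To push the full bs=256512 bytes through the pipe, the original
> iflag=fullblock is needed so dd accumulates a complete block before
> writing; a sketch of the combined command:)
>
> [root@ovirt1 mnt]# cat /dev/urandom | /usr/bin/dd iflag=fullblock of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync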
>
> Best Regards,
>
> Strahil Nikolov
>
> ----- Forwarded message -----
>
> *From:* Strahil Nikolov <hunter86...@yahoo.com>
>
> *To:* Users <us...@ovirt.org>
>
> *Sent:* Thursday, May 16, 2019, 5:56:44 AM GMT-4
>
> *Subject:* ovirt 4.3.3.7 cannot create a gluster storage domain
>
> Hey guys,
>
> I recently updated (yesterday) my platform to the latest available version
> (v4.3.3.7) and upgraded to gluster v6.1. The setup is a hyperconverged
> 3-node cluster with ovirt1/gluster1 & ovirt2/gluster2 as replica nodes
> (the glusterX names are for gluster communication), while ovirt3 is the
> arbiter.
>
> Today I tried to add new storage domains, but they fail with the
> following:
>
> 2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH
> createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock',
> u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
> 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1',
> 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]'
> err="/usr/bin/dd: error writing
> '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
> Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied,
> 0.0138582 s, 0.0 kB/s\n" from=::ffff:192.168.1.2,43864, flow_id=4a54578a,
> task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
>
>
This may be another issue. This command works only for storage with a
512-byte sector size.

Hyperconverged systems may use VDO, and it must be configured in
compatibility mode to support a 512-byte sector size.
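
For a standalone volume this is chosen at create time (a sketch using the vdo
manager CLI; the device and volume name are only illustrative, and the
hyperconverged installer may drive this differently):

# vdo create --name=vdo1 --device=/dev/sdb --emulate512=enabled
# blockdev --getss /dev/mapper/vdo1
512

Without 512-byte emulation VDO exposes 4096-byte logical sectors, so the
O_DIRECT write above fails with EINVAL: bs=256512 is a multiple of 512 but
not of 4096.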

I'm not sure how this is configured in the hyperconverged deployment, but
Sahina should know.

Nir

> 2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task]
> (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
>     return fn(*args, **kargs)
>   File "<string>", line 2, in createStorageDomain
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
>     ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614,
> in createStorageDomain
>     storageType, domVersion, block_size, alignment)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py
> <http://nfssd.py/>", line 106, in create
>     block_size)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py
> <http://filesd.py/>", line 466, in _prepareMetadata
>     cls.format_external_leases(sdUUID, xleases_path)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255,
> in format_external_leases
>     xlease.format_index(lockspace, backend)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 681, in format_index
>     index.dump(file)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 843, in dump
>     file.pwrite(INDEX_BASE, self._buf)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 1076, in pwr
>
>
> It seems that the 'dd' check is having trouble with the new gluster volume.
> The output is from RC1, but as you can see, Darrell's situation may be
> the same.
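>
> A quick probe on the mounted domain shows which alignment the storage
> accepts (illustrative; run from the domain mountpoint):
>
> [root@ovirt1 mnt]# dd if=/dev/zero of=probe bs=512 count=1 oflag=direct
> [root@ovirt1 mnt]# dd if=/dev/zero of=probe bs=4096 count=1 oflag=direct
>
> If the 512-byte write fails with 'Invalid argument' while the 4096-byte
> one succeeds, the underlying storage is 4K-native.
>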
> On May 16, 2019 21:41, Nir Soffer <nsof...@redhat.com> wrote:
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic <bu...@onholyground.com>
> wrote:
>
> I tried adding a new storage domain on my hyperconverged test cluster
> running oVirt 4.3.3.7 and gluster 6.1. I was able to create the new
> gluster volume fine, but it’s not able to add the gluster storage domain
> (either as a managed gluster volume or by directly entering values). The
> created gluster volume mounts and looks fine from the CLI. Errors in the
> VDSM log:
>
> ...
>
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=::ffff:10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
> This is the code doing the check:
>
>  98 def validateFileSystemFeatures(sdUUID, mountDir):
>  99     try:
> 100         # Don't unlink this file, we don't have the cluster lock yet as it
> 101         # requires direct IO which is what we are trying to test for. This
> 102         # means that unlinking the file might cause a race. Since we don't
> 103         # care what the content of the file is, just that we managed to
> 104         # open it O_DIRECT.
> 105         testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
> 106         oop.getProcessPool(sdUUID).directTouch(testFilePath)
> 107     except OSError as e:
> 108         if e.errno == errno.EINVAL:
> 109             log = logging.getLogger("storage.fileSD")
> 110             log.error("Underlying file system doesn't support"
> 111                       "direct IO")
>
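> The same check can be reproduced from the shell on a mounted domain (a rough
> equivalent of directTouch; count=0 makes dd open and create the file with
> O_DIRECT without writing anything):
>
> [root@ovirt1 mnt]# dd if=/dev/zero of=__DIRECT_IO_TEST__ bs=4096 count=0 oflag=direct
>
> An 'Invalid argument' failure on open here corresponds to the EINVAL branch
> above.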
>