This is my previous e-mail:

On May 16, 2019 15:23, Strahil Nikolov <> wrote:

It seems that the issue is within the 'dd' command, as it just stays waiting until interrupted:

[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync
^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s
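My reading of the hang (an assumption, since the command was interrupted before producing output): dd is given no if= source, so it reads from stdin, and iflag=fullblock makes it block until a full 256512-byte block arrives. Giving it an input source lets the same command complete; I drop oflag=direct here only because ordinary test filesystems may reject it:

```shell
touch file    # conv=nocreat refuses to run unless the file already exists
dd if=/dev/zero iflag=fullblock of=file oflag=seek_bytes \
    seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync
stat -c %s file    # expect 1048576 + 256512 = 1305088 bytes
```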

Changing the dd command makes it work and shows that gluster itself is functional:

[root@ovirt1 mnt]# cat /dev/urandom | /usr/bin/dd of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync
0+1 records in
0+1 records out
131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s
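The "0+1 records" is worth noting: without iflag=fullblock, dd writes whatever a single read() from the pipe returns, which is why only 131072 bytes (128 KiB, a typical pipe transfer) were copied rather than a full 256512-byte block. The same short-read effect can be reproduced locally (exact sizes may vary by kernel; oflag=direct omitted so it runs on any filesystem):

```shell
# One read from a pipe rarely fills a 256512-byte block, so dd reports
# "0+1 records in" (zero full blocks, one partial block).
head -c 1000000 /dev/zero | dd of=partial bs=256512 count=1
stat -c %s partial    # at most 256512 bytes; one partial read from the pipe
```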

Best Regards,

Strahil Nikolov

----- Forwarded message -----

From: Strahil Nikolov <>

To: Users <>

Sent: Thursday, May 16, 2019, 5:56:44 AM GMT-4

Subject: ovirt cannot create a gluster storage domain

Hey guys,

I recently (yesterday) updated my platform to the latest available version (v4.3.3.7) and upgraded to gluster v6.1. The setup is a hyperconverged 3-node cluster with ovirt1/gluster1 & ovirt2/gluster2 as the replica nodes (the glusterX names are for gluster communication), while ovirt3 is the arbiter.

Today I tried to add new storage domains, but they fail with the following:

2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', 
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n" from=::ffff:,43864, flow_id=4a54578a, 
task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/", line 882, in 
    return fn(*args, **kargs)
  File "<string>", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/", line 2614, in 
    storageType, domVersion, block_size, alignment)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/", line 106, in 
  File "/usr/lib/python2.7/site-packages/vdsm/storage/", line 466, in 
    cls.format_external_leases(sdUUID, xleases_path)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/", line 1255, in 
    xlease.format_index(lockspace, backend)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/", line 681, in 
  File "/usr/lib/python2.7/site-packages/vdsm/storage/", line 843, in 
    file.pwrite(INDEX_BASE, self._buf)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/", line 1076, in 

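One plausible explanation for the "Invalid argument" (an assumption on my side, not something the log proves): O_DIRECT requires the transfer length, file offset, and buffer address to be aligned to the storage's logical block size. The bs=256512 that vdsm passes is a multiple of 512 but not of 4096, so a brick stack that enforces 4096-byte alignment would fail exactly like this:

```python
# seek=1048576 is aligned either way, but bs=256512 is only 512-aligned.
bs, seek = 256512, 1048576
print(bs % 512, seek % 512)      # 0 0    -> fine on 512-byte logical blocks
print(bs % 4096, seek % 4096)    # 2560 0 -> bs violates 4096-byte alignment
```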
It seems that the 'dd' used for the check is having trouble writing to the new gluster volume.
The output above is from RC1, but as you can see, Darrell's situation may be the same.

On May 16, 2019 21:41, Nir Soffer <> wrote:
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic <> wrote:
>> I tried adding a new storage domain on my hyper converged test cluster 
>> running Ovirt and gluster 6.1. I was able to create the new gluster 
>> volume fine, but it’s not able to add the gluster storage domain (as either 
>> a managed gluster volume or directly entering values). The created gluster 
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
> ... 
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying 
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
>> createStorageDomain error=Storage Domain target is unsupported: () 
>> from=::ffff:,44732, flow_id=31d993dd, 
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
> The direct I/O check has failed.
> This is the code doing the check:
>  98 def validateFileSystemFeatures(sdUUID, mountDir):
>  99     try:
> 100         # Don't unlink this file, we don't have the cluster lock yet as it
> 101         # requires direct IO which is what we are trying to test for. This
> 102         # means that unlinking the file might cause a race. Since we don't
> 103         # care what the content of the file is, just that we managed to
> 104         # open it O_DIRECT.
> 105         testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
> 106         oop.getProcessPool(sdUUID).directTouch(testFilePath)              
> 107     except OSError as e:
> 108         if e.errno == errno.EINVAL:
> 109             log = logging.getLogger("storage.fileSD")
> 110             log.error("Underlying file system doesn't support"
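To make the quoted check concrete, here is a rough standalone equivalent. This is a sketch under my assumptions: vdsm actually routes the call through its out-of-process I/O pool via directTouch, which I approximate with a plain os.open here.

```python
import errno
import os

def direct_io_supported(mount_dir):
    """Try to create __DIRECT_IO_TEST__ with O_DIRECT, as vdsm's check does.

    Returns False when the filesystem rejects O_DIRECT (EINVAL) and True
    when the open succeeds. The file is deliberately not unlinked, mirroring
    the comment in the quoted code. Sketch only; not vdsm's actual code path.
    """
    test_path = os.path.join(mount_dir, "__DIRECT_IO_TEST__")
    try:
        fd = os.open(test_path, os.O_CREAT | os.O_WRONLY | os.O_DIRECT, 0o644)
    except OSError as e:
        if e.errno == errno.EINVAL:
            return False   # filesystem rejected the O_DIRECT flag
        raise
    os.close(fd)
    return True
```

On a tmpfs mount, for example, this returns False because tmpfs rejects O_DIRECT, which is the same failure mode the log reports for the gluster mount.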