On Mon, Aug 31, 2020 at 8:17 PM <tho...@hoberg.net> wrote:
>
> After this 
> (https://devconfcz2020a.sched.com/event/YOtG/ovirt-4k-teaching-an-old-dog-new-tricks)

Vdsm learned a few tricks but is still old :-)

> I sure do not expect this (log below):
>
> Actually I am trying to evaluate just how portable oVirt storage is, and in this 
> case I had prepared a USB3 HDD with VDO,
> which I could literally move between farms to transport VMs.
> Logical disks are typically large for simplicity within the VMs, with QCOW2 and 
> VDO assumed to compensate for this 'lack of planning', while the allocated 
> storage easily fits the HDD.
>
> Once I got beyond the initial issues, I came across this somewhat unexpected 
> issue: VDO storage uses 4k blocks all around,

VDO has a 512-byte emulation mode; this is how gluster used VDO before 4.3.8.
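
If you want to check which block size a VDO device currently exposes, something
like this reads it from sysfs (a minimal sketch, not vdsm code; the device name
is only an example). The vdo manager can also create a volume in emulation mode
(the --emulate512 option, if I recall correctly):

# Minimal sketch: report the logical block size a device-mapper device exposes.
# "/dev/mapper/vdo1" is only an example name, adjust it to your setup.
import os

def logical_block_size(dev="/dev/mapper/vdo1"):
    # /dev/mapper/* entries are symlinks to /dev/dm-N; sysfs uses the dm-N name.
    dm_name = os.path.basename(os.path.realpath(dev))
    with open("/sys/block/{}/queue/logical_block_size".format(dm_name)) as f:
        return int(f.read().strip())

# Prints 512 when VDO runs in 512-byte emulation mode, 4096 otherwise.
print(logical_block_size())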

> but evidently when you mount an export domain (or I guess any domain) as 
> POSIX, 512-byte blocks are assumed somewhere and 4k blocks are rejected.

Yes, NFS and POSIX (internally NFS) storage domains do not support 4k.

With NFS we cannot detect the storage domain block size, since NFS does not use
direct I/O on the server side, so we kept the old configuration, 512.
Theoretically we could change the code to default to 4k in this case, but it
was never tested.
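
The probe is basically: try a 512-byte write with O_DIRECT; 4k-native storage
rejects it with EINVAL. A minimal sketch of the idea (not the actual
vdsm/ioprocess code; the probe path is only an example):

# Illustrative sketch of block size probing with direct I/O.
# On NFS the server buffers the writes, so the probe cannot see the real block
# size of the underlying storage - which is why we keep the 512 default there.
import errno
import mmap
import os

def probe_block_size(probe_path):
    # mmap memory is page aligned, satisfying O_DIRECT alignment requirements.
    buf = mmap.mmap(-1, 512)
    buf.write(b"\0" * 512)
    fd = os.open(probe_path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    try:
        os.pwrite(fd, buf, 0)
        return 512
    except OSError as e:
        if e.errno == errno.EINVAL:
            # A 512-byte direct write was rejected: the storage needs 4k.
            return 4096
        raise
    finally:
        os.close(fd)
        os.unlink(probe_path)

print(probe_block_size("/rhev/data-center/mnt/_dev_mapper_vdo1/__probe__"))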

POSIX was never tested with 4k, and since internally it is NFS, it has the same
limits. It is likely to support 4k if we create a specific storage class for it,
like local storage.

Why do you need VDO for the export domain? qcow2 keeps only the used data.
Don't you have enough space on the disk to keep all VMs in qcow2 format?
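
You can see how much space a qcow2 image really takes by comparing its virtual
size with its on-disk size (a minimal sketch using qemu-img; the image path is
only a placeholder):

# Minimal sketch: a qcow2 image stores only the used data, so its on-disk size
# is usually much smaller than its virtual size. The path is a placeholder.
import json
import subprocess

def qcow2_sizes(path):
    out = subprocess.run(
        ["qemu-img", "info", "--output=json", path],
        check=True, stdout=subprocess.PIPE, universal_newlines=True,
    ).stdout
    info = json.loads(out)
    return info["virtual-size"], info["actual-size"]

virtual, actual = qcow2_sizes("disk.qcow2")
print("virtual %.1f GiB, on disk %.1f GiB" % (virtual / 2**30, actual / 2**30))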

> I'd say that is a bug in VDSM, right?

No, this is an unsupported configuration.

The export domain uses safelease, which does not support 4k storage, and since
the export domain is deprecated and safelease is unmaintained, it will never
support 4k.

If you need 4k support for a POSIX or NFS *data* storage domain, this is
something that can be considered for a future version; please file an RFE
for it.

Nir

> Or is there anything in the mount options to fix this?
>
> 2020-08-31 18:44:40,424+0200 INFO  (periodic/0) [vdsm.api] START 
> repoStats(domains=()) from=internal, 
> task_id=7a293dec-85b3-4b82-92b7-4e7d03b40343 (api:48)
> 2020-08-31 18:44:40,425+0200 INFO  (periodic/0) [vdsm.api] FINISH repoStats 
> return={'9992dc21-edf2-4951-9020-7c78f1220e02': {'code': 0, 'lastCheck': 
> '1.6', 'delay': '0.00283651', 'valid': True, 'version': 5, 'acquired': True, 
> 'actual': True}, '25d95783-44df-4fda-b642-55fe09162149': {'code': 0, 
> 'lastCheck': '2.2', 'delay': '0.00256944', 'valid': True, 'version': 5, 
> 'acquired': True, 'actual': True}, '148d9e9e-d7b8-4220-9ee8-057da96e608c': 
> {'code': 0, 'lastCheck': '2.2', 'delay': '0.00050217', 'valid': True, 
> 'version': 5, 'acquired': True, 'actual': True}} from=internal, 
> task_id=7a293dec-85b3-4b82-92b7-4e7d03b40343 (api:54)
> 2020-08-31 18:44:41,121+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC 
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:41,412+0200 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC 
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:41,414+0200 INFO  (jsonrpc/2) [vdsm.api] START 
> repoStats(domains=['9992dc21-edf2-4951-9020-7c78f1220e02']) from=::1,44782, 
> task_id=657c820e-e2eb-4b1d-b2fb-1772b3af6f32 (api:48)
> 2020-08-31 18:44:41,414+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH repoStats 
> return={'9992dc21-edf2-4951-9020-7c78f1220e02': {'code': 0, 'lastCheck': 
> '2.6', 'delay': '0.00283651', 'valid': True, 'version': 5, 'acquired': True, 
> 'actual': True}} from=::1,44782, task_id=657c820e-e2eb-4b1d-b2fb-1772b3af6f32 
> (api:54)
> 2020-08-31 18:44:41,414+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC 
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:41,965+0200 INFO  (jsonrpc/3) [IOProcessClient] (Global) 
> Starting client (__init__:308)
> 2020-08-31 18:44:41,984+0200 INFO  (ioprocess/87332) [IOProcess] (Global) 
> Starting ioprocess (__init__:434)
> 2020-08-31 18:44:42,006+0200 INFO  (jsonrpc/3) [storage.StorageDomainCache] 
> Removing domain fe9fb0db-2743-457a-80f0-9a4edc509e9d from storage domain 
> cache (sdc:211)
> 2020-08-31 18:44:42,006+0200 INFO  (jsonrpc/3) [storage.StorageDomainCache] 
> Invalidating storage domain cache (sdc:74)
> 2020-08-31 18:44:42,006+0200 INFO  (jsonrpc/3) [vdsm.api] FINISH 
> connectStorageServer return={'statuslist': [{'id': 
> '00000000-0000-0000-0000-000000000000', 'status': 0}]} 
> from=::ffff:192.168.0.87,40378, flow_id=05fe72ef-c8ac-4e03-8453-5171e5fc5f8b, 
> task_id=b92d4144-a9db-4423-ba6b-35934d4f9200 (api:54)
> 2020-08-31 18:44:42,008+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC 
> call StoragePool.connectStorageServer succeeded in 3.11 seconds (__init__:312)
> 2020-08-31 18:44:42,063+0200 INFO  (jsonrpc/5) [vdsm.api] START 
> getStorageDomainsList(spUUID='00000000-0000-0000-0000-000000000000', 
> domainClass=3, storageType='', remotePath='/dev/mapper/vdo1', options=None) 
> from=::ffff:192.168.0.87,40378, flow_id=eff548f9-b663-45dd-b8a5-5854f9b5dde8, 
> task_id=98ca66a5-edee-40d4-9253-6a46409241cc (api:48)
> 2020-08-31 18:44:42,063+0200 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
> Refreshing storage domain cache (resize=True) (sdc:80)
> 2020-08-31 18:44:42,063+0200 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
> devices (iscsi:442)
> 2020-08-31 18:44:42,101+0200 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC 
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:42,128+0200 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
> devices: 0.06 seconds (utils:390)
> 2020-08-31 18:44:42,129+0200 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
> devices (hba:60)
> 2020-08-31 18:44:42,182+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC 
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:42,184+0200 INFO  (jsonrpc/4) [vdsm.api] START 
> repoStats(domains=['9992dc21-edf2-4951-9020-7c78f1220e02']) from=::1,44782, 
> task_id=7b3f1eb8-f7e3-4467-93a1-27ef9721c90c (api:48)
> 2020-08-31 18:44:42,184+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH repoStats 
> return={'9992dc21-edf2-4951-9020-7c78f1220e02': {'code': 0, 'lastCheck': 
> '3.4', 'delay': '0.00283651', 'valid': True, 'version': 5, 'acquired': True, 
> 'actual': True}} from=::1,44782, task_id=7b3f1eb8-f7e3-4467-93a1-27ef9721c90c 
> (api:54)
> 2020-08-31 18:44:42,184+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC 
> call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:312)
> 2020-08-31 18:44:42,277+0200 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
> devices: 0.15 seconds (utils:390)
> 2020-08-31 18:44:42,309+0200 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
> multipath devices (multipath:104)
> 2020-08-31 18:44:42,312+0200 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
> multipath devices: 0.00 seconds (utils:390)
> 2020-08-31 18:44:42,312+0200 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
> Refreshing storage domain cache: 0.25 seconds (utils:390)
> 2020-08-31 18:44:42,418+0200 INFO  (tmap-569/0) [IOProcessClient] 
> (_dev_mapper_vdo1) Starting client (__init__:308)
> 2020-08-31 18:44:42,445+0200 INFO  (tmap-569/1) [IOProcessClient] 
> (glusterSD/nucvirt.mtk.hoberg.net:_data) Starting client (__init__:308)
> 2020-08-31 18:44:42,449+0200 INFO  (ioprocess/87496) [IOProcess] 
> (_dev_mapper_vdo1) Starting ioprocess (__init__:434)
> 2020-08-31 18:44:42,457+0200 INFO  (tmap-569/2) [IOProcessClient] 
> (glusterSD/nucvirt.mtk.hoberg.net:_vmstore) Starting client (__init__:308)
> 2020-08-31 18:44:42,465+0200 INFO  (ioprocess/87508) [IOProcess] 
> (glusterSD/nucvirt.mtk.hoberg.net:_data) Starting ioprocess (__init__:434)
> 2020-08-31 18:44:42,478+0200 INFO  (tmap-569/3) [IOProcessClient] 
> (glusterSD/nucvirt.mtk.hoberg.net:_engine) Starting client (__init__:308)
> 2020-08-31 18:44:42,486+0200 INFO  (ioprocess/87516) [IOProcess] 
> (glusterSD/nucvirt.mtk.hoberg.net:_vmstore) Starting ioprocess (__init__:434)
> 2020-08-31 18:44:42,514+0200 INFO  (ioprocess/87533) [IOProcess] 
> (glusterSD/nucvirt.mtk.hoberg.net:_engine) Starting ioprocess (__init__:434)
> 2020-08-31 18:44:42,522+0200 INFO  (jsonrpc/5) [IOProcessClient] 
> (fe9fb0db-2743-457a-80f0-9a4edc509e9d) Starting client (__init__:308)
> 2020-08-31 18:44:42,553+0200 INFO  (ioprocess/87552) [IOProcess] 
> (fe9fb0db-2743-457a-80f0-9a4edc509e9d) Starting ioprocess (__init__:434)
> 2020-08-31 18:44:42,582+0200 ERROR (jsonrpc/5) [storage.HSM] Unexpected error 
> (hsm:2843)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2829, in 
> getStorageDomainsList
>     dom = sdCache.produce(sdUUID=sdUUID)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 115, in 
> produce
>     domain.getRealDomain()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 51, in 
> getRealDomain
>     return self._cache._realProduce(self._sdUUID)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 139, in 
> _realProduce
>     domain = self._findDomain(sdUUID)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 156, in 
> _findDomain
>     return findMethod(sdUUID)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/nfsSD.py", line 146, in 
> findDomain
>     return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/fileSD.py", line 390, 
> in __init__
>     manifest.block_size, storage_block_size)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 939, in 
> _validate_storage_block_size
>     raise se.StorageDomainBlockSizeMismatch(block_size, storage_block_size)
> vdsm.storage.exception.StorageDomainBlockSizeMismatch: Block size does not 
> match storage block size: 'block_size=512, storage_block_size=4096'
> 2020-08-31 18:44:42,582+0200 INFO  (jsonrpc/5) [vdsm.api] FINISH 
> getStorageDomainsList return={'domlist': []} from=::ffff:192.168.0.87,40378, 
> flow_id=eff548f9-b663-45dd-b8a5-5854f9b5dde8, 
> task_id=98ca66a5-edee-40d4-9253-6a46409241cc (api:54)
> 2020-08-31 18:44:42,582+0200 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC 
> call Host.getStorageDomains succeeded in 0.52 seconds (__init__:312)
> 2020-08-31 18:44:42,801+0200 INFO  (jsonrpc/7) [vdsm.api] START 
> disconnectStorageServer(domType=6, 
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password': 
> '********', 'vfs_type': 'xfs', 'port': '', 'mnt_options': 
> 'rw,relatime,seclabel,attr2,inode64,noquota,_netdev,x-systemd.device-timeout=0',
>  'iqn': '', 'connection': '/dev/mapper/vdo1', 'ipv6_enabled': 'false', 'id': 
> 'b4a73d6a-344c-446b-a33d-7b665e78578d', 'user': '', 'tpgt': '1'}], 
> options=None) from=::ffff:192.168.0.87,40378, 
> flow_id=6678917a-b29c-416c-aa2d-5abc1d5351a3, 
> task_id=3d0cf36e-427d-4000-8857-a8a2af641b10 (api:48)
> 2020-08-31 18:44:42,801+0200 INFO  (jsonrpc/7) [storage.Mount] unmounting 
> /rhev/data-center/mnt/_dev_mapper_vdo1 (mount:215)
> 2020-08-31 18:44:42,806+0200 INFO  (jsonrpc/0) [vdsm.api] START 
> disconnectStorageServer(domType=6, 
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password': 
> '********', 'vfs_type': 'xfs', 'port': '', 'mnt_options': 
> 'rw,relatime,seclabel,attr2,inode64,noquota,_netdev,x-systemd.device-timeout=0',
>  'iqn': '', 'connection': '/dev/mapper/vdo1', 'ipv6_enabled': 'false', 'id': 
> 'b4a73d6a-344c-446b-a33d-7b665e78578d', 'user': '', 'tpgt': '1'}], 
> options=None) from=::ffff:192.168.0.87,40378, 
> flow_id=bb449405-5b78-4b65-a352-e2e94a269df9, 
> task_id=09783e0a-933c-448d-b0ca-1b5d351eb042 (api:48)
> 2020-08-31 18:44:42,806+0200 INFO  (jsonrpc/0) [storage.Mount] unmounting 
> /rhev/data-center/mnt/_dev_mapper_vdo1 (mount:215)
> 2020-08-31 18:44:42,821+0200 ERROR (jsonrpc/0) [storage.HSM] Could not 
> disconnect from storageServer (hsm:2528)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2524, in 
> disconnectStorageServer
>     conObj.disconnect()
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 
> 201, in disconnect
>     self._mount.umount(True, True)
>   File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 217, in 
> umount
>     umount(self.fs_file, force=force, lazy=lazy, freeloop=freeloop)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, 
> in __call__
>     return callMethod()
>   File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, 
> in <lambda>
>     **kwargs)
>   File "<string>", line 2, in umount
>   File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in 
> _callmethod
>     raise convert_to_error(kind, result)
> vdsm.storage.mount.MountError: Command ['/usr/bin/umount', '-f', '-l', 
> '/rhev/data-center/mnt/_dev_mapper_vdo1'] failed with rc=32 out=b'' 
> err=b'umount: /rhev/data-center/mnt/_dev_mapper_vdo1: not mounted.\n'
> 2020-08-31 18:44:42,821+0200 INFO  (jsonrpc/0) [storage.StorageDomainCache] 
> Refreshing storage domain cache (resize=False) (sdc:80)