I can write a file inside the mount point as the *vdsm* user:

    [ec2-user@ip-172-31-21-171 ~]$ sudo -u vdsm dd if=/dev/zero \
        of=/rhev/data-center/mnt/172.31.81.195:_home_ec2-user_export/test_storage_file

    [ec2-user@ip-172-31-21-171 ~]$ ll /rhev/data-center/mnt/172.31.81.195:_home_ec2-user_export
    total 23190428
    drwxr-xr-x. 6 vdsm kvm          64 Feb  9 19:00 38421e83-a4cd-4e74-bad9-e454187219c7
    -rw-r--r--. 1 vdsm kvm 23746968064 Mar 15 13:54 test_storage_file
    [ec2-user@ip-172-31-21-171 ~]$
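
Before re-running the deployment I'll remove that test file (same path as
in the listing above):

    sudo -u vdsm rm /rhev/data-center/mnt/172.31.81.195:_home_ec2-user_export/test_storage_file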


So I'll clean up the deployment and run it again.
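
Concretely, something like this (a sketch, assuming the standard cleanup
utility shipped with the hosted-engine packages):

    ovirt-hosted-engine-cleanup
    hosted-engine --deploy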

In the meantime, let me know if you have any other ideas.

Eugène NG

On Tue, Mar 15, 2022 at 14:33, Eugène Ngontang <sympav...@gmail.com> wrote:

> I unmounted my home *export* folder, but I still get the same error:
>
>
>>     [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
>>     [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
>>     [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
>>     [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
>>     [ INFO  ] Stage: Clean up
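>
> To dig further I'm also checking the setup logs (assuming the default
> log location used by the hosted-engine deployment):
>
>     grep -i 'storage domain' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log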
>
>
> Here are my current mount points; the ISO directory is missing:
>
>
>>     [root@ip-172-31-21-171 ec2-user]# mount -l
>>     sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
>>     proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
>>     devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=197765916k,nr_inodes=49441479,mode=755)
>>     securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
>>     tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
>>     devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
>>     tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
>>     tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
>>     cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
>>     pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)
>>     bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
>>     cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
>>     cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
>>     cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
>>     cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
>>     cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
>>     cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
>>     cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
>>     cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
>>     cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
>>     cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
>>     cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,rdma)
>>     none on /sys/kernel/tracing type tracefs (rw,relatime,seclabel)
>>     configfs on /sys/kernel/config type configfs (rw,relatime)
>>     /dev/nvme0n1p2 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
>>     selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
>>     systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=84184)
>>     mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
>>     debugfs on /sys/kernel/debug type debugfs (rw,relatime,seclabel)
>>     hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel,pagesize=2M)
>>     sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>>     tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=39560148k,mode=700,uid=1000,gid=1000)
>>     hugetlbfs on /dev/hugepages1G type hugetlbfs (rw,relatime,seclabel,pagesize=1024M)
>>     172.31.81.195:/home/ec2-user/export on /rhev/data-center/mnt/172.31.81.195:_home_ec2-user_export type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=172.31.21.171,local_lock=none,addr=172.31.81.195)
>>     tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=39560148k,mode=700)
>>     [root@ip-172-31-21-171 ec2-user]#
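>
> To double-check what the server actually exports (assuming nfs-utils is
> installed on the host):
>
>     showmount -e 172.31.81.195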
>
>
> Eugène NG
>
> On Tue, Mar 15, 2022 at 13:59, Eugène Ngontang <sympav...@gmail.com> wrote:
>
>> I can see the NFS share is mounted twice. Do you think I should remove
>> this and avoid manually mounting the network storage filesystem as I did?
>>
>>>     172.31.81.195:/ on /home/ec2-user/export type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.31.21.171,local_lock=none,addr=172.31.81.195)
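>>
>> If that is the problem, I can drop the manual mount first (matching the
>> mount point above):
>>
>>     umount /home/ec2-user/export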
>>
>>
>> Regards,
>> Eugène NG
>>
>> On Tue, Mar 15, 2022 at 13:55, Eugène Ngontang <sympav...@gmail.com> wrote:
>>
>>> This screenshot shows the output of the `mount -l` command.
>>>
>>> On Tue, Mar 15, 2022 at 13:52, Eugène Ngontang <sympav...@gmail.com> wrote:
>>>
>>>> No, @Strahil Nikolov <hunter86...@yahoo.com>, it's not, because I ran
>>>> the mount command myself from my home directory before running the
>>>> hosted-engine deployment:
>>>>
>>>>     mount 172.31.81.195:/ ./export
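>>>>
>>>> For anyone following along, an NFS export usable by oVirt typically
>>>> looks like this on the server side (a sketch; the exact export options
>>>> here are an assumption, but the vdsm:kvm ownership, uid/gid 36, is
>>>> what oVirt expects):
>>>>
>>>>     # on 172.31.81.195
>>>>     chown 36:36 /home/ec2-user/export
>>>>     echo '/home/ec2-user/export *(rw,sync,no_subtree_check,anonuid=36,anongid=36)' >> /etc/exports
>>>>     exportfs -ra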
>>>>
>>>>
>>>> Regards,
>>>> Eugène NG
>>>>
>>>> On Tue, Mar 15, 2022 at 11:39, Strahil Nikolov <hunter86...@yahoo.com> wrote:
>>>>
>>>>> ~ is your home folder, and SELinux can also get in the way on that path.
>>>>>
>>>>> Have you checked whether the storage is already mounted under
>>>>> /rhev/.... ?
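>>>>>
>>>>> For example (a quick check):
>>>>>
>>>>>     findmnt | grep rhev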
>>>>>
>>>>> Best Regards,
>>>>> Strahil Nikolov
>>>>>


-- 
LesCDN <http://lescdn.com>
engont...@lescdn.com
------------------------------------------------------------
*Men need a leader, and a leader needs men!*
*Clothes do not make the monk, but when people see you, they judge you!*
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VPY55QTDCXYIHK2EGGTRLC5MKWYLKACU/
