Hi,

What version of CentOS are you using for the underlying OS in the
undercloud, and how did you deploy it?
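
Also, the "backing xfs filesystem is formatted without d_type support"
error from podman below suggests the root filesystem of your VM was
created with ftype=0, which the overlay storage driver cannot use. As a
quick check (assuming /var/lib/containers lives on the root filesystem,
which your df output indicates), something like this should confirm it:

  $ xfs_info / | grep ftype            # ftype=1 is required for overlayfs
  $ sudo podman info | grep -i graph   # shows which storage driver podman uses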

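If ftype turns out to be 0, it cannot be changed in place; the filesystem
has to be recreated. mkfs.xfs on current CentOS 7 creates XFS with ftype=1
by default, so rebuilding the VM from a recent CentOS 7 image is usually
the simplest fix. As a stopgap you could also try switching podman to the
(much slower) vfs storage driver - a minimal sketch for
/etc/containers/storage.conf:

  [storage]
  driver = "vfs"

That sidesteps the overlayfs requirement entirely, but performance will
suffer, so I would only use it to unblock testing.
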
Best regards,

Alfredo


On Thu, Jun 20, 2019 at 10:47 AM Pradeep Antil <[email protected]>
wrote:

> Hi Folks,
>
> I am trying to install the undercloud on a CentOS 7 VirtualBox VM. Two
> NICs are assigned to the VM: the first is used for the provisioning
> network (192.168.56.0/24) and the second for internet access (NAT - 10.0.3.15).
>
> When I try to install the undercloud using the command below, I get the
> following error.
>
> Any ideas or suggestions to resolve it?
>
> [stack@undercloud ~]$ openstack undercloud install
>
> TASK [Restart Keepalived container]
> *******************************************************************************************
> fatal: [undercloud]: FAILED! => {"changed": true, "cmd": "podman restart
> keepalived", "delta": "0:00:00.090226", "end": "2019-06-20
> 03:50:58.583423", "msg": "non-zero return code", "rc": 125, "start":
> "2019-06-20 03:50:58.493197", "stderr": "Error: error creating libpod
> runtime: kernel does not support overlay fs: overlay: the backing xfs
> filesystem is formatted without d_type support, which leads to incorrect
> behavior. Reformat the filesystem with ftype=1 to enable d_type support.
> Running without d_type is not supported.: driver not supported",
> "stderr_lines": ["Error: error creating libpod runtime: kernel does not
> support overlay fs: overlay: the backing xfs filesystem is formatted
> without d_type support, which leads to incorrect behavior. Reformat the
> filesystem with ftype=1 to enable d_type support. Running without d_type is
> not supported.: driver not supported"], "stdout": "", "stdout_lines": []}
> ...ignoring
>
> [stack@undercloud ~]$ df -Th
> Filesystem              Type      Size  Used Avail Use% Mounted on
> /dev/mapper/centos-root xfs        31G  2.3G   29G   8% /
> devtmpfs                devtmpfs  3.9G     0  3.9G   0% /dev
> tmpfs                   tmpfs     3.9G     0  3.9G   0% /dev/shm
> tmpfs                   tmpfs     3.9G  8.7M  3.9G   1% /run
> tmpfs                   tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
> /dev/sda1               xfs       497M  141M  356M  29% /boot
> tmpfs                   tmpfs     782M     0  782M   0% /run/user/0
> tmpfs                   tmpfs     500M   36M  465M   8% /var/log/heat-launcher
> [stack@undercloud ~]$ free -h
>               total        used        free      shared  buff/cache   available
> Mem:           7.6G        232M        5.7G         44M        1.7G        7.0G
> Swap:          3.5G          0B        3.5G
> [stack@undercloud ~]$
>
> [stack@undercloud ~]$ cat ~/undercloud.conf
> # Config generated by undercloud wizard
> # Use these values in undercloud.conf
> [DEFAULT]
> undercloud_hostname = undercloud.localdomain
> local_interface = enp0s3
> local_mtu = 1500
> local_ip = 192.168.56.2/24
> undercloud_public_host = 192.168.56.3
> undercloud_admin_host = 192.168.56.4
> undercloud_service_certificate =
> generate_service_certificate = True
> scheduler_max_attempts = 10
>
> # Deprecated names for compatibility with older releases
> discovery_iprange = 192.168.56.16,192.168.56.20
> undercloud_public_vip = 192.168.56.3
> undercloud_admin_vip = 192.168.56.4
> network_cidr = 192.168.56.0/24
> dhcp_start = 192.168.56.6
> dhcp_end = 192.168.56.15
> inspection_iprange = 192.168.56.16,192.168.56.20
> network_gateway = 192.168.56.1
> #masquerade_network = 192.168.56.0/24
> # End of deprecated names
>
> [ctlplane-subnet]
> cidr = 192.168.56.0/24
> gateway = 192.168.56.1
> dhcp_start = 192.168.56.6
> dhcp_end = 192.168.56.15
> inspection_iprange = 192.168.56.16,192.168.56.20
> #masquerade = true
>
> [stack@undercloud ~]$
>
> --
> Best Regards
> Pradeep Kumar
_______________________________________________
users mailing list
[email protected]
http://lists.rdoproject.org/mailman/listinfo/users

To unsubscribe: [email protected]
