Thanks for the reply.

sda is a virtual disk because the server has a hardware array controller.
So I have to split sda into two smaller disks.
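
If the controller turns out to be an HPE Smart Array (only a guess; other
vendors ship similar tools such as storcli), the split could look roughly
like this with ssacli, with slot and drive IDs below made up for the
example:

    # inspect the current logical drive layout
    ssacli ctrl slot=0 show config
    # after backing up, delete the old logical drive and create two
    # smaller ones (sizes are in MB); this destroys the existing install
    ssacli ctrl slot=0 ld 1 delete
    ssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1 size=70000
    ssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1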

On 08/10/2018 17:24, Jayme wrote:
You should be using shared external storage or GlusterFS; if Gluster, you should have other drives in the server to provision as Gluster bricks during the hyperconverged deployment.
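
For example, brick preparation on a spare disk usually looks roughly like
this (a minimal sketch; /dev/sdb and the names gluster_vg, gluster_lv and
/gluster_bricks/engine are assumptions, not taken from this thread):

    # put the spare disk under LVM and carve out a brick volume
    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    lvcreate -l 100%FREE -n gluster_lv gluster_vg
    # oVirt's hyperconverged setup uses XFS for gluster bricks
    mkfs.xfs /dev/gluster_vg/gluster_lv
    mkdir -p /gluster_bricks/engine
    mount /dev/gluster_vg/gluster_lv /gluster_bricks/engine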

On Mon, Oct 8, 2018, 8:07 AM Stefano Danzi <s.da...@hawai.it> wrote:

    Hi! It's the first time that I use oVirt Node.

    I installed Node on my hosts and left auto partitioning.
    Host storage now is:

    [root@ovirtn01 ~]# lsblk
    NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sda                                                             8:0    0 136,7G  0 disk
    ├─sda1                                                          8:1    0     1G  0 part  /boot
    └─sda2                                                          8:2    0 135,7G  0 part
      ├─onn_ovirtn01-pool00_tmeta                                 253:0    0     1G  0 lvm
      │ └─onn_ovirtn01-pool00-tpool                               253:2    0  94,8G  0 lvm
      │   ├─onn_ovirtn01-ovirt--node--ng--4.2.6.2--0.20181003.0+1 253:3    0  67,8G  0 lvm   /
      │   ├─onn_ovirtn01-pool00                                   253:6    0  94,8G  0 lvm
      │   ├─onn_ovirtn01-var_log_audit                            253:7    0     2G  0 lvm   /var/log/audit
      │   ├─onn_ovirtn01-var_log                                  253:8    0     8G  0 lvm   /var/log
      │   ├─onn_ovirtn01-var                                      253:9    0    15G  0 lvm   /var
      │   ├─onn_ovirtn01-tmp                                      253:10   0     1G  0 lvm   /tmp
      │   ├─onn_ovirtn01-home                                     253:11   0     1G  0 lvm   /home
      │   ├─onn_ovirtn01-root                                     253:12   0  67,8G  0 lvm
      │   └─onn_ovirtn01-var_crash                                253:13   0    10G  0 lvm   /var/crash
      ├─onn_ovirtn01-pool00_tdata                                 253:1    0  94,8G  0 lvm
      │ └─onn_ovirtn01-pool00-tpool                               253:2    0  94,8G  0 lvm
      │   ├─onn_ovirtn01-ovirt--node--ng--4.2.6.2--0.20181003.0+1 253:3    0  67,8G  0 lvm   /
      │   ├─onn_ovirtn01-pool00                                   253:6    0  94,8G  0 lvm
      │   ├─onn_ovirtn01-var_log_audit                            253:7    0     2G  0 lvm   /var/log/audit
      │   ├─onn_ovirtn01-var_log                                  253:8    0     8G  0 lvm   /var/log
      │   ├─onn_ovirtn01-var                                      253:9    0    15G  0 lvm   /var
      │   ├─onn_ovirtn01-tmp                                      253:10   0     1G  0 lvm   /tmp
      │   ├─onn_ovirtn01-home                                     253:11   0     1G  0 lvm   /home
      │   ├─onn_ovirtn01-root                                     253:12   0  67,8G  0 lvm
      │   └─onn_ovirtn01-var_crash                                253:13   0    10G  0 lvm   /var/crash
      └─onn_ovirtn01-swap                                         253:4    0  13,7G  0 lvm   [SWAP]


    But I have no more space for the hosted engine... (VM/data space will
    be in another place.)
    Now I could:
    - manually resize volumes
    - reinstall with custom partitioning and leave 60 GB for the hosted
      engine volume.

    What's the better way?
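
    For the manual-resize route, I guess the first checks would be
    something like this (hosted_engine is just a placeholder name; as far
    as I know an LVM thin pool like pool00 cannot be shrunk, so this only
    helps if the volume group still has free extents):

        # how much unallocated space does the VG still have?
        vgs onn_ovirtn01
        lvs -a onn_ovirtn01
        # if roughly 60G are free, a plain LV for hosted engine storage:
        lvcreate -L 60G -n hosted_engine onn_ovirtn01
        mkfs.xfs /dev/onn_ovirtn01/hosted_engine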

    On 27/09/2018 13:46, Hesham Ahmed wrote:
    Unless you have a reason to use CentOS, I suggest you use oVirt Node;
    it is much better optimized out of the box for oVirt.

