I have set up a containerized CNS cluster with 3 GlusterFS nodes in an existing 
Origin 3.9 cluster. Each node installs fine and the cluster is created. I can 
list info about the cluster and volumes, I can create PVCs and resize them, and 
dynamic provisioning works great.

I’m having an issue when I try to add a device to the nodes to increase storage. 
Before we put this into production, we need to be able to add storage if 
needed. For testing, I added a third device manually to each node, which shows up 
as /dev/xvdd. Both lsblk and fdisk see the disk. I’ve tried using heketi-cli 
through the heketi storage pod, following the example here for adding new devices:

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/container-native_storage_for_openshift_container_platform/chap-documentation-red_hat_gluster_storage_container_native_with_openshift_platform-managing_clusters#idm139668535333216
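
Following that procedure, I rsh into the heketi-storage pod and verify the node 
IDs before attempting the add. Roughly this sequence (the pod name and admin key 
are placeholders):

$ oc rsh <heketi-storage pod>
sh-4.4# export HEKETI_CLI_SERVER=http://localhost:8080
sh-4.4# export HEKETI_CLI_USER=admin
sh-4.4# export HEKETI_CLI_KEY=<admin key>
sh-4.4# heketi-cli node list
sh-4.4# heketi-cli node info 2b32e62662ba340cc079a7a82ed7ca2e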

In the heketi pod, I run this command:

sh-4.4# heketi-cli device add --name=/dev/xvdd --node=2b32e62662ba340cc079a7a82ed7ca2e
Error: Invalid path or request

It fails every time. I’ve tried all 3 nodes, different variations on the device 
name, etc., and I have not found a way around this issue. I have also tried 
loading the topology by exporting it, editing it to add the device, and then 
loading it back, but I get an error each time as well, roughly as sketched below.
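
For the topology route, the attempt looks roughly like this (the file path is 
arbitrary, and I’m not sure the exported JSON is in exactly the format that 
topology load expects):

sh-4.4# heketi-cli topology info --json > /tmp/topology.json
   ...edit /tmp/topology.json to add /dev/xvdd to each node's "devices" list...
sh-4.4# heketi-cli topology load --json=/tmp/topology.json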

What is the best way to add a device (extra disk) to CNS? Am I even following the 
right procedure? We don’t want to size the cluster really large initially, but we 
want to grow as needed by adding disks. The other options, adding a node or adding 
an entirely new cluster, would increase costs, so we’d like to be able to just add 
a device.

Please let me know if any of you have a solution. Below is an lsblk example as 
well.


lsblk example

sh-4.2# lsblk
NAME                                                                               MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda                                                                               202:0    0  50G  0 disk
`-xvda1                                                                            202:1    0  50G  0 part /var/log/journal/f0b5f0caead6
xvdb                                                                               202:16   0  50G  0 disk
`-xvdb1                                                                            202:17   0  50G  0 part
  `-docker_vg-dockerlv                                                             253:0    0  50G  0 lvm  /run/secrets
xvdc                                                                               202:32   0  50G  0 disk
|-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b_tmeta    253:1    0  12M  0 lvm
| `-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b-tpool  253:3    0   2G  0 lvm
|   |-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b      253:4    0   2G  0 lvm
|   `-vg_59185224764b33da4462c5e6a634e709-brick_445729b6edabc584d1e19f35a9a4a02b   253:5    0   2G  0 lvm  /var/lib/heketi/mounts/vg_59185224764b33da4462c5e6a634e709/brick_445729b6edabc584d1e19f35a9a4a02b
|-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b_tdata    253:2    0   2G  0 lvm
| `-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b-tpool  253:3    0   2G  0 lvm
|   |-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b      253:4    0   2G  0 lvm
|   `-vg_59185224764b33da4462c5e6a634e709-brick_445729b6edabc584d1e19f35a9a4a02b   253:5    0   2G  0 lvm  /var/lib/heketi/mounts/vg_59185224764b33da4462c5e6a634e709/brick_445729b6edabc584d1e19f35a9a4a02b
|-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30_tmeta    253:6    0   8M  0 lvm
| `-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30-tpool  253:8    0   1G  0 lvm
|   |-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30      253:9    0   1G  0 lvm
|   `-vg_59185224764b33da4462c5e6a634e709-brick_563b7509e1f08c021b0d9fa0db859e30   253:10   0   1G  0 lvm  /var/lib/heketi/mounts/vg_59185224764b33da4462c5e6a634e709/brick_563b7509e1f08c021b0d9fa0db859e30
`-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30_tdata    253:7    0   1G  0 lvm
  `-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30-tpool  253:8    0   1G  0 lvm
    |-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30      253:9    0   1G  0 lvm
    `-vg_59185224764b33da4462c5e6a634e709-brick_563b7509e1f08c021b0d9fa0db859e30   253:10   0   1G  0 lvm  /var/lib/heketi/mounts/vg_59185224764b33da4462c5e6a634e709/brick_563b7509e1f08c021b0d9fa0db859e30
xvdd


Todd Walters
Unigroup


