I was able to resolve this simply by adding the server string. I didn't realize 
I needed it because the Red Hat Gluster Storage documentation linked below does 
not show connecting to the server.

sh-4.4# heketi-cli -s http://localhost:8080 --user admin --secret "$HEKETI_CLI_KEY" \
            device add --name=/dev/xvdd --node=2b32e62662ba340cc079a7a82ed7ca2e
Device added successfully

It was a simple fix, but I didn't see it documented anywhere.
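
For anyone who hits the same thing, here is a minimal sketch of how to avoid 
repeating the flags on every call. It assumes the heketi-cli in the pod honors 
the standard HEKETI_CLI_SERVER and HEKETI_CLI_USER environment variables; the 
values below are just what worked in my setup, and HEKETI_CLI_KEY was already 
set in the CNS heketi pod:

sh-4.4# export HEKETI_CLI_SERVER=http://localhost:8080    # the server string that was missing
sh-4.4# export HEKETI_CLI_USER=admin                      # admin user for the heketi API
sh-4.4# heketi-cli device add --name=/dev/xvdd --node=2b32e62662ba340cc079a7a82ed7ca2e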

Thanks,
Todd

--------------

    Date: Thu, 17 May 2018 13:00:04 +0000
    From: "Walters, Todd" <todd_walt...@unigroup.com>
    To: "users@lists.openshift.redhat.com"
    <users@lists.openshift.redhat.com>
    Subject: Adding Device to CNS 3.9

    I have set up a containerized CNS cluster with 3 glusterfs nodes in an existing 
Origin 3.9 cluster. Each node installs fine and the cluster is created. I can 
list info about the cluster and volumes, and I can create PVCs and resize them; 
dynamic provisioning works great.

    I'm having an issue when I try to add a device to the nodes to increase storage. 
Before we put this into production, we need to be able to add storage if needed. 
For testing, I added a 3rd device manually to each node, and it shows up as 
/dev/xvdd. Both lsblk and fdisk see the disk. I've tried using heketi-cli 
through the heketi storage pod, following the example here for adding new devices:

    https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/container-native_storage_for_openshift_container_platform/chap-documentation-red_hat_gluster_storage_container_native_with_openshift_platform-managing_clusters#idm139668535333216
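
    For reference, a minimal sketch of how I get to heketi-cli and look up the 
node IDs first. The project and pod names are placeholders for whatever oc get 
pods shows in your environment, and the commands include the -s server string 
that, as noted at the top of this message, turns out to be required:

    $ oc project glusterfs                      # example project name
    $ oc rsh heketi-storage-1-abcde             # example heketi pod name
    sh-4.4# heketi-cli -s http://localhost:8080 --user admin --secret "$HEKETI_CLI_KEY" node list
    sh-4.4# heketi-cli -s http://localhost:8080 --user admin --secret "$HEKETI_CLI_KEY" node info 2b32e62662ba340cc079a7a82ed7ca2e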

    In the heketi pod, I run this command:

    sh-4.4# heketi-cli device add --name=/dev/xvdd --node=2b32e62662ba340cc079a7a82ed7ca2e
    Error: Invalid path or request

    It fails every time. I've tried all 3 nodes, different variations on the 
device name, etc., and have not found a way around this issue. I have also tried 
to load the topology by exporting it, editing it to add the device, and then 
loading it back, but I get an error each time as well.
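
    A minimal sketch of that export/edit/load round trip, using the same server 
string and credentials as above; the file path is just an example, and as far as 
I can tell topology load expects the install-time topology file format rather 
than the raw topology info output, so the exported JSON needs trimming before it 
will load:

    sh-4.4# heketi-cli -s http://localhost:8080 --user admin --secret "$HEKETI_CLI_KEY" topology info --json > /tmp/topology.json
    # edit /tmp/topology.json to add /dev/xvdd under the node's device list, then:
    sh-4.4# heketi-cli -s http://localhost:8080 --user admin --secret "$HEKETI_CLI_KEY" topology load --json=/tmp/topology.json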

    What is the best way to add a device (an extra disk) to CNS? Am I even following 
the right procedure? We don't want to size the cluster really large initially, 
but we want to grow as needed by adding disks. The other options, adding a node 
or adding an entirely new cluster, would increase costs, so we'd like to be able 
to just add a device.

    Please let me know if any of you have the solution. Below is an lsblk 
example too.


    lsblk example

    sh-4.2# lsblk
    NAME                                                                               MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda                                                                               202:0    0  50G  0 disk
    `-xvda1                                                                            202:1    0  50G  0 part /var/log/journal/f0b5f0caead6
    xvdb                                                                               202:16   0  50G  0 disk
    `-xvdb1                                                                            202:17   0  50G  0 part
      `-docker_vg-dockerlv                                                             253:0    0  50G  0 lvm  /run/secrets
    xvdc                                                                               202:32   0  50G  0 disk
    |-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b_tmeta    253:1    0  12M  0 lvm
    | `-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b-tpool  253:3    0   2G  0 lvm
    |   |-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b      253:4    0   2G  0 lvm
    |   `-vg_59185224764b33da4462c5e6a634e709-brick_445729b6edabc584d1e19f35a9a4a02b   253:5    0   2G  0 lvm  /var/lib/heketi/mounts/vg_59185224764b33da4462c5e6a634e709/brick_445729b6edabc584d1e19f35a9a4a02b
    |-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b_tdata    253:2    0   2G  0 lvm
    | `-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b-tpool  253:3    0   2G  0 lvm
    |   |-vg_59185224764b33da4462c5e6a634e709-tp_445729b6edabc584d1e19f35a9a4a02b      253:4    0   2G  0 lvm
    |   `-vg_59185224764b33da4462c5e6a634e709-brick_445729b6edabc584d1e19f35a9a4a02b   253:5    0   2G  0 lvm  /var/lib/heketi/mounts/vg_59185224764b33da4462c5e6a634e709/brick_445729b6edabc584d1e19f35a9a4a02b
    |-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30_tmeta    253:6    0   8M  0 lvm
    | `-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30-tpool  253:8    0   1G  0 lvm
    |   |-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30      253:9    0   1G  0 lvm
    |   `-vg_59185224764b33da4462c5e6a634e709-brick_563b7509e1f08c021b0d9fa0db859e30   253:10   0   1G  0 lvm  /var/lib/heketi/mounts/vg_59185224764b33da4462c5e6a634e709/brick_563b7509e1f08c021b0d9fa0db859e30
    `-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30_tdata    253:7    0   1G  0 lvm
      `-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30-tpool  253:8    0   1G  0 lvm
        |-vg_59185224764b33da4462c5e6a634e709-tp_563b7509e1f08c021b0d9fa0db859e30      253:9    0   1G  0 lvm
        `-vg_59185224764b33da4462c5e6a634e709-brick_563b7509e1f08c021b0d9fa0db859e30   253:10   0   1G  0 lvm  /var/lib/heketi/mounts/vg_59185224764b33da4462c5e6a634e709/brick_563b7509e1f08c021b0d9fa0db859e30
    xvdd


    Todd Walters
    Unigroup







