Re: Provisioning persistence for metrics with GlusterFS

2018-05-21 Thread Rodrigo Bersa
Hi Dan!

> Now what I also don't understand is how the initial volume group for
> the registry got created with just 26GB of storage, if the default is
> 100GB. Is there a rule such as: "create a block-hosting volume of the
> default size=100GB or the max available"?
> The integrated registry's persistence is set to 5GB. This is, I believe, a
> default value, as I haven't set anything related to it in my inventory file
> when installing OpenShift Origin. How can I use the remaining storage in my
> VG with GlusterFS and OpenShift?
>

The registry-storage volume is not a block volume; it is a file volume, which
has no "minimum" size. So you can create many other small file volumes with
no problem. The only restriction is on creating block volumes, which need
at least a 100GB block-hosting volume.
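
For example, a quick sketch of the difference with heketi-cli (the sizes and
names here are only illustrative, and this assumes heketi-cli is pointed at
the registry's heketi instance):

  # File volume: no minimum size, carved directly from the VG's free space
  heketi-cli volume create --size=5 --name=small-file-vol

  # Block volume: heketi must first find or create a 100GB block-hosting
  # volume, so this fails while only ~26GB is free per node
  heketi-cli blockvolume create --size=10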

Best regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil

rbe...@redhat.com    M: +55-11-99557-5841

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by
*Great Place to Work*.

On Mon, May 21, 2018 at 7:49 AM, Dan Pungă wrote:

> Hello Rodrigo, I appreciate your answer!
>
> In the meantime I had reached out to heketi-cli support (chat) and
> I got the same reference. There's a config map generated by the installer
> for the heketi-registry pod that has the default size for block-hosting
> volumes set at 100GB.
> What I thought was that the "block hosting volume" would be the equivalent
> of a logical volume, and that it (heketi-cli) tries to create an LV of size
> 100GB inside the already created vg_bd61a1e6f317bb9decade964449c12e8 (which
> has 26GB).
>
> I've actually modified the encrypted JSON config and tried to restart the
> heketi-registry pod, which failed. So I ended up with some unmanaged
> GlusterFS storage, but since I'm on a test environment, it's fine.
> Otherwise, good to know for the future.
>
> Now what I also don't understand is how the initial volume group for
> the registry got created with just 26GB of storage, if the default is
> 100GB. Is there a rule such as: "create a block-hosting volume of the
> default size=100GB or the max available"?
> The integrated registry's persistence is set to 5GB. This is, I believe, a
> default value, as I haven't set anything related to it in my inventory file
> when installing OpenShift Origin. How can I use the remaining storage in my
> VG with GlusterFS and OpenShift?
>
> Thank you!
>
> On 19.05.2018 02:43, Rodrigo Bersa wrote:
>
> Hi Dan,
>
> The Gluster Block volumes work with the concept of a block-hosting volume,
> and these are created with 100GB by default.
>
> To clarify, the block volumes are provisioned on top of the block-hosting
> volumes.
>
> Let's say you need a 10GB block volume: heketi will create a 100GB
> block-hosting volume and then the 10GB block volume on top of it, as well
> as the next block volumes requested, until it reaches the 100GB. After
> that a new block-hosting volume will be created, and so on.
>
> So, if you have just 26GB available on each server, it's not enough to
> create the block-hosting volume. You may need to add more devices to your
> CNS cluster to grow your free space.
>
>
> Kind regards,
>
>
> Rodrigo Bersa
>
> Cloud Consultant, RHCVA, RHCE
>
> Red Hat Brasil
>
> rbe...@redhat.com    M: +55-11-99557-5841
>
> TRIED. TESTED. TRUSTED.
> Red Hat is recognized among the best companies to work for in Brazil by
> *Great Place to Work*.
>
On Wed, May 16, 2018 at 10:35 PM, Dan Pungă wrote:
>
>> Hello all!
>>
>> I have set up a cluster with 3 GlusterFS nodes for disk persistence just
>> as specified in the docs. I have configured the inventory file to install
>> the containerized version to be used by OpenShift's integrated registry.
>> This works fine.
>>
>> Now I wanted to install the metrics component and I followed the
>> procedure described here:
>> https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#install-example-infra
>>
>> I end up with the openshift-infra project set up, but with 3 pods failing
>> to start, and I think this has to do with the PVC for Cassandra, which
>> fails to be created.
>>
>> oc get pvc metrics-cassandra-1 -o yaml
>>
>> apiVersion: v1
>> kind: PersistentVolumeClaim
>> metadata:
>>   annotations:
>> control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"8ef584d1-5923-11e8-8730-0a580a830040","leaseDurationSeconds":15,"acquireTime":"2018-05-17T00:38:34Z","renewTime":"2018-05-17T00:55:33Z","leaderTransitions":0}'
>> kubectl.kubernetes.io/last-applied-configuration: |
>>   {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"gluster.org/glusterblock"},"labels":{"metrics-infra":"hawkular-cassandra
>> 

Re: Provisioning persistence for metrics with GlusterFS

2018-05-21 Thread Dan Pungă

Hello Rodrigo, I appreciate your answer!

In the meantime I had reached out to heketi-cli support (chat)
and I got the same reference. There's a config map generated by the
installer for the heketi-registry pod that has the default size for
block-hosting volumes set at 100GB.
What I thought was that the "block hosting volume" would be the
equivalent of a logical volume, and that it (heketi-cli) tries to create
an LV of size 100GB inside the already created
vg_bd61a1e6f317bb9decade964449c12e8 (which has 26GB).
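
For reference, this seems to map to the block-hosting options in heketi.json
(a sketch based on the upstream heketi docs; the exact key names and the
config map/secret layout may differ between installer versions):

  "glusterfs": {
    "auto_create_block_hosting_volume": true,
    "block_hosting_volume_size": 100
  }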


I've actually modified the encrypted JSON config and tried to restart
the heketi-registry pod, which failed. So I ended up with some unmanaged
GlusterFS storage, but since I'm on a test environment, it's fine.
Otherwise, good to know for the future.
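
In case it helps anyone else, this is roughly the procedure I would try
next time instead of editing the config in place; the secret and dc names
are guesses from my cluster and may well differ on yours:

  # decode the current heketi.json from the secret
  oc get secret heketi-registry-config-secret -n default \
    -o jsonpath='{.data.heketi\.json}' | base64 -d > heketi.json

  # ... edit heketi.json (e.g. block_hosting_volume_size) ...

  # replace the secret (re-add any other keys it held!) and redeploy heketi
  oc create secret generic heketi-registry-config-secret \
    --from-file=heketi.json --dry-run -o yaml | oc replace -n default -f -
  oc rollout latest dc/heketi-registry -n default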


Now what I also don't understand is how the initial volume group for
the registry got created with just 26GB of storage, if the default is
100GB. Is there a rule such as: "create a block-hosting volume of the
default size=100GB or the max available"?
The integrated registry's persistence is set to 5GB. This is, I believe,
a default value, as I haven't set anything related to it in my inventory
file when installing OpenShift Origin. How can I use the remaining
storage in my VG with GlusterFS and OpenShift?


Thank you!

On 19.05.2018 02:43, Rodrigo Bersa wrote:

Hi Dan,

The Gluster Block volumes work with the concept of a block-hosting
volume, and these are created with 100GB by default.

To clarify, the block volumes are provisioned on top of the
block-hosting volumes.

Let's say you need a 10GB block volume: heketi will create a 100GB
block-hosting volume and then the 10GB block volume on top of it, as
well as the next block volumes requested, until it reaches the 100GB.
After that a new block-hosting volume will be created, and so on.

So, if you have just 26GB available on each server, it's not enough to
create the block-hosting volume. You may need to add more devices to
your CNS cluster to grow your free space.



Kind regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil

rbe...@redhat.com    M: +55-11-99557-5841

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by
*Great Place to Work*.


On Wed, May 16, 2018 at 10:35 PM, Dan Pungă wrote:


Hello all!

I have set up a cluster with 3 GlusterFS nodes for disk persistence
just as specified in the docs. I have configured the inventory
file to install the containerized version to be used by
OpenShift's integrated registry. This works fine.

Now I wanted to install the metrics component and I followed the
procedure described here:
https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#install-example-infra

I end up with the openshift-infra project set up, but with 3 pods
failing to start, and I think this has to do with the PVC for
Cassandra, which fails to be created.

oc get pvc metrics-cassandra-1 -o yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"8ef584d1-5923-11e8-8730-0a580a830040","leaseDurationSeconds":15,"acquireTime":"2018-05-17T00:38:34Z","renewTime":"2018-05-17T00:55:33Z","leaderTransitions":0}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"gluster.org/glusterblock"},"labels":{"metrics-infra":"hawkular-cassandra"},"name":"metrics-cassandra-1","namespace":"openshift-infra"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"6Gi"}},"storageClassName":"glusterfs-registry-block"}}
    volume.beta.kubernetes.io/storage-provisioner: gluster.org/glusterblock
  creationTimestamp: 2018-05-17T00:38:34Z
  labels:
    metrics-infra: hawkular-cassandra
  name: metrics-cassandra-1
  namespace: openshift-infra
  resourceVersion: "1204482"
  selfLink: /api/v1/namespaces/openshift-infra/persistentvolumeclaims/metrics-cassandra-1
  uid: a18b8c20-596a-11e8-8a63-fa163ed601cb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  storageClassName: glusterfs-registry-block

Re: Provisioning persistence for metrics with GlusterFS

2018-05-18 Thread Rodrigo Bersa
Hi Dan,

The Gluster Block volumes work with the concept of a block-hosting volume,
and these are created with 100GB by default.

To clarify, the block volumes are provisioned on top of the block-hosting
volumes.

Let's say you need a 10GB block volume: heketi will create a 100GB
block-hosting volume and then the 10GB block volume on top of it, as well
as the next block volumes requested, until it reaches the 100GB. After that
a new block-hosting volume will be created, and so on.

So, if you have just 26GB available on each server, it's not enough to
create the block-hosting volume. You may need to add more devices to your
CNS cluster to grow your free space.
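
For example, something along these lines (the device path and node ID are
illustrative):

  # list the nodes and check the free space on each device
  heketi-cli node list
  heketi-cli node info <node-id>

  # register an additional raw device on each node
  heketi-cli device add --name=/dev/vdc --node=<node-id>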


Kind regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil

rbe...@redhat.com    M: +55-11-99557-5841

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by
*Great Place to Work*.

On Wed, May 16, 2018 at 10:35 PM, Dan Pungă wrote:

> Hello all!
>
> I have set up a cluster with 3 GlusterFS nodes for disk persistence just as
> specified in the docs. I have configured the inventory file to install the
> containerized version to be used by OpenShift's integrated registry. This
> works fine.
>
> Now I wanted to install the metrics component and I followed the procedure
> described here:
> https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#install-example-infra
>
> I end up with the openshift-infra project set up, but with 3 pods failing
> to start, and I think this has to do with the PVC for Cassandra, which
> fails to be created.
>
> oc get pvc metrics-cassandra-1 -o yaml
>
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   annotations:
> control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"8ef584d1-5923-11e8-8730-0a580a830040","leaseDurationSeconds":15,"acquireTime":"2018-05-17T00:38:34Z","renewTime":"2018-05-17T00:55:33Z","leaderTransitions":0}'
> kubectl.kubernetes.io/last-applied-configuration: |
>   {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"gluster.org/glusterblock"},"labels":{"metrics-infra":"hawkular-cassandra"},"name":"metrics-cassandra-1","namespace":"openshift-infra"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"6Gi"}},"storageClassName":"glusterfs-registry-block"}}
> volume.beta.kubernetes.io/storage-provisioner: gluster.org/glusterblock
>   creationTimestamp: 2018-05-17T00:38:34Z
>   labels:
> metrics-infra: hawkular-cassandra
>   name: metrics-cassandra-1
>   namespace: openshift-infra
>   resourceVersion: "1204482"
>   selfLink: /api/v1/namespaces/openshift-infra/persistentvolumeclaims/metrics-cassandra-1
>   uid: a18b8c20-596a-11e8-8a63-fa163ed601cb
> spec:
>   accessModes:
>   - ReadWriteOnce
>   resources:
> requests:
>   storage: 6Gi
>   storageClassName: glusterfs-registry-block
> status:
>   phase: Pending
>
> oc describe pvc metrics-cassandra-1 shows these warnings:
>
>  36m  23m  13  gluster.org/glusterblock  glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040  Warning  ProvisioningFailed  Failed to provision volume with StorageClass "glusterfs-registry-block": failed to create volume: [heketi] failed to create volume: Failed to allocate new block volume: No space
>  36m  21m  14  gluster.org/glusterblock  glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040  Normal  Provisioning  External provisioner is provisioning volume for claim "openshift-infra/metrics-cassandra-1"
>  21m  21m  1  gluster.org/glusterblock  glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040  Warning  ProvisioningFailed  Failed to provision volume with StorageClass "glusterfs-registry-block": failed to create volume: [heketi] failed to create volume: Post http://heketi-registry-default.apps.my.net/blockvolumes: dial tcp: lookup heketi-registry-default.apps.my.net on 192.168.150.16:53: no such host
>
> In the default project, if I check the logs for heketi-registry, I get a
> lot of
>
> [heketi] ERROR 2018/05/17 00:46:47 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:909: Create Block Volume Build Failed: No space
> [negroni] Started POST /blockvolumes
> [heketi] INFO 2018/05/17 00:49:02 Loaded simple allocator
> [heketi] INFO 2018/05/17 00:49:02 brick_num: 0
> [heketi] INFO 2018/05/17 00:49:02 brick_num: 0
> [heketi] INFO 2018/05/17 00:49:02 brick_num: 0
> [heketi] INFO 2018/05/17 00:49:02 brick_num: 0
> [heketi] INFO 2018/05/17 00:49:02 brick_num: 1
> [negroni] Completed 500 Internal Server Error in 7.091238ms
>
> For the other glusterFS-related 

Provisioning persistence for metrics with GlusterFS

2018-05-16 Thread Dan Pungă

Hello all!

I have set up a cluster with 3 GlusterFS nodes for disk persistence just
as specified in the docs. I have configured the inventory file to
install the containerized version to be used by OpenShift's integrated
registry. This works fine.


Now I wanted to install the metrics component and I followed the 
procedure described here: 
https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#install-example-infra
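
The metrics additions follow the example in those docs, roughly:

  openshift_metrics_install_metrics=true
  openshift_metrics_storage_kind=dynamic
  openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-registry-block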


I end up with the openshift-infra project set up, but with 3 pods failing
to start, and I think this has to do with the PVC for Cassandra, which
fails to be created.


oc get pvc metrics-cassandra-1 -o yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"8ef584d1-5923-11e8-8730-0a580a830040","leaseDurationSeconds":15,"acquireTime":"2018-05-17T00:38:34Z","renewTime":"2018-05-17T00:55:33Z","leaderTransitions":0}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"gluster.org/glusterblock"},"labels":{"metrics-infra":"hawkular-cassandra"},"name":"metrics-cassandra-1","namespace":"openshift-infra"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"6Gi"}},"storageClassName":"glusterfs-registry-block"}}
    volume.beta.kubernetes.io/storage-provisioner: gluster.org/glusterblock
  creationTimestamp: 2018-05-17T00:38:34Z
  labels:
    metrics-infra: hawkular-cassandra
  name: metrics-cassandra-1
  namespace: openshift-infra
  resourceVersion: "1204482"
  selfLink: /api/v1/namespaces/openshift-infra/persistentvolumeclaims/metrics-cassandra-1
  uid: a18b8c20-596a-11e8-8a63-fa163ed601cb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  storageClassName: glusterfs-registry-block
status:
  phase: Pending
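
For context, the glusterfs-registry-block StorageClass the installer created
should look roughly like this; the secret name here is a guess on my part:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: glusterfs-registry-block
  provisioner: gluster.org/glusterblock
  parameters:
    resturl: http://heketi-registry-default.apps.my.net
    restuser: admin
    restsecretname: heketi-registry-admin-secret-block
    restsecretnamespace: default
    hacount: "3"
    chapauthenabled: "true"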

oc describe pvc metrics-cassandra-1 shows these warnings:

 36m  23m  13  gluster.org/glusterblock  glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040  Warning  ProvisioningFailed  Failed to provision volume with StorageClass "glusterfs-registry-block": failed to create volume: [heketi] failed to create volume: Failed to allocate new block volume: No space
 36m  21m  14  gluster.org/glusterblock  glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040  Normal  Provisioning  External provisioner is provisioning volume for claim "openshift-infra/metrics-cassandra-1"
 21m  21m  1  gluster.org/glusterblock  glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040  Warning  ProvisioningFailed  Failed to provision volume with StorageClass "glusterfs-registry-block": failed to create volume: [heketi] failed to create volume: Post http://heketi-registry-default.apps.my.net/blockvolumes: dial tcp: lookup heketi-registry-default.apps.my.net on 192.168.150.16:53: no such host
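
The last warning looks like DNS resolution of the heketi route failing rather
than lack of space; something like this should show whether the route exists
and whether the node's resolver can see the wildcard zone (the route name is
an assumption):

  # does the route for heketi exist in the default project?
  oc get route heketi-registry -n default

  # can the node's resolver (192.168.150.16) resolve the route's hostname?
  nslookup heketi-registry-default.apps.my.net 192.168.150.16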


In the default project, if I check the logs for heketi-registry, I get a 
lot of


[heketi] ERROR 2018/05/17 00:46:47 /src/github.com/heketi/heketi/apps/glusterfs/operations.go:909: Create Block Volume Build Failed: No space

[negroni] Started POST /blockvolumes
[heketi] INFO 2018/05/17 00:49:02 Loaded simple allocator
[heketi] INFO 2018/05/17 00:49:02 brick_num: 0
[heketi] INFO 2018/05/17 00:49:02 brick_num: 0
[heketi] INFO 2018/05/17 00:49:02 brick_num: 0
[heketi] INFO 2018/05/17 00:49:02 brick_num: 0
[heketi] INFO 2018/05/17 00:49:02 brick_num: 1
[negroni] Completed 500 Internal Server Error in 7.091238ms
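
To confirm the "No space" from heketi's side, something like this should
print the Size/Used/Free figures per device (the admin key comes from the
heketi secret, and the URL is the heketi route):

  heketi-cli -s http://heketi-registry-default.apps.my.net \
    --user admin --secret <admin-key> topology info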

For the other GlusterFS-related pod, I see the same errors reported by
the PVC creation:


oc logs -f glusterblock-registry-provisioner-dc-1-tljbb -n default

I0516 22:38:49.136388   1 controller.go:1167] scheduleOperation[lock-provision-openshift-infra/metrics-cassandra-1[1191fb8d-5959-11e8-94c9-fa163e1cba7f]]
I0516 22:38:49.166658   1 leaderelection.go:156] attempting to acquire leader lease...
I0516 22:38:49.197051   1 leaderelection.go:178] successfully acquired lease to provision for pvc openshift-infra/metrics-cassandra-1
I0516 22:38:49.197122   1 controller.go:1167] scheduleOperation[provision-openshift-infra/metrics-cassandra-1[1191fb8d-5959-11e8-94c9-fa163e1cba7f]]
E0516 22:38:49.207257   1 glusterblock-provisioner.go:441] BLOCK VOLUME NAME I RECEIEVED:
E0516 22:38:49.207288   1 glusterblock-provisioner.go:449] BLOCK VOLUME CREATE REQUEST: &{Size:6 Clusters:[] Name: Hacount:3 Auth:true}
E0516 22:38:49.355122   1 glusterblock-provisioner.go:451] BLOCK VOLUME RESPONSE:
E0516 22:38:49.355204   1 glusterblock-provisioner.go:453] [heketi] failed to create volume: Failed to allocate new block volume: No space
E0516 22:38:49.355262   1 controller.go:895] Failed to provision volume for claim