falcon78921 opened a new issue #3137: Integrating Ceph Status into CloudStack 
Dashboard
URL: https://github.com/apache/cloudstack/issues/3137
 
 
   <!--
   Verify first that your issue/request is not already reported on GitHub.
   Also test whether the latest release and master branch are affected.
   Always add information AFTER these HTML comments; there is no need to delete
   the comments.
   -->
   
   ##### ISSUE TYPE
   <!-- Pick one below and delete the rest -->
    * Feature Idea
   
   ##### COMPONENT NAME
   <!--
   Categorize the issue, e.g. API, VR, VPN, UI, etc.
   -->
   ~~~
   CloudStack UI
   ~~~
   
   ##### CLOUDSTACK VERSION
   <!--
   New line separated list of affected versions, commit ID for issues on master 
branch.
   -->
   
   ~~~
   4.11.1+
   ~~~
   
   ##### CONFIGURATION
   <!--
   Information about the configuration if relevant, e.g. basic network, 
advanced networking, etc.  N/A otherwise
   -->
   
   I have an idea for a new CloudStack feature. As someone who relies heavily on Ceph in their environment, I thought it would be pretty cool if CloudStack fetched the health status of a Ceph cluster and displayed it on the ACS dashboard, under each instance of RBD Primary Storage. I wrote a Python script that opens an SSH connection to a Ceph node and runs the ``ceph health`` command, but I'm not too familiar with the under-the-hood workings of CloudStack, so I would love some advice on how to go about this. Some things I brainstormed:
   
   - Where would this script live in the CloudStack code structure (e.g. ``cloudstack/scripts/storage``)?
   - Where would the SSH credentials for the Ceph storage node be referenced? Could CloudStack set up passwordless SSH auth to the storage node? Is there another way to fetch ``ceph health`` without SSH (perhaps via an API)?
   - I'm guessing a new column would be added to the ``storage_pool`` table. You could call it ``ceph_health`` and store the health status in it for each row of RBD storage; for non-RBD storage types the value would be ``null``.
   - For scheduling, the management server could run the script periodically. The script would open a connection to the ``cloud`` database, query all storage pools of type RBD, authenticate to the specified RADOS monitor IP, run the ``ceph health`` command, report the result back to the management server, and store it in ``ceph_health`` for each pool.
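   The per-pool fetch step could be sketched roughly as below. The SSH user, the monitor hostname, and the assumption of key-based SSH auth are all hypothetical here, not existing CloudStack machinery; the parsing helper just pulls out the leading ``HEALTH_*`` token that would land in ``ceph_health``:

```python
import re
import subprocess


def parse_health(raw):
    """Extract the leading HEALTH_* token from `ceph health` output.

    `ceph health` prints e.g. "HEALTH_OK" or "HEALTH_WARN 1 osds down";
    anything unrecognized yields None.
    """
    match = re.match(r"(HEALTH_(?:OK|WARN|ERR))", raw.strip())
    return match.group(1) if match else None


def fetch_health_over_ssh(mon_host, user="cloudstack"):
    """Run `ceph health` on a monitor host over passwordless SSH.

    Assumes key-based SSH auth to the storage node is already in place
    (one of the open questions above).
    """
    out = subprocess.check_output(
        ["ssh", "{}@{}".format(user, mon_host), "ceph", "health"],
        universal_newlines=True)
    return parse_health(out)
```

   The management server's scheduler would call something like ``fetch_health_over_ssh`` for each RBD pool and write the returned token into the new column.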
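   On the question of avoiding SSH entirely: Ceph ships Python bindings (``python-rados``) that can send monitor commands directly, so the management server could query health itself, given a ``ceph.conf`` and keyring it can read (both assumptions in this sketch):

```python
import json


def health_from_report(report):
    """Pull the overall status out of a JSON health report.

    Luminous and later use the "status" key; older releases used
    "overall_status".
    """
    return report.get("status") or report.get("overall_status")


def fetch_health_via_rados(conffile="/etc/ceph/ceph.conf"):
    """Query the monitors directly instead of shelling out over SSH."""
    import rados  # python-rados, packaged with Ceph; imported lazily

    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        _ret, outbuf, _outs = cluster.mon_command(
            json.dumps({"prefix": "health", "format": "json"}), b"")
        return health_from_report(json.loads(outbuf))
    finally:
        cluster.shutdown()
```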
   
   The Ceph health status would be displayed beneath the storage state. For example:
   
   State:     **Up**
   
   Ceph Health:   **HEALTH_OK**
   
   If anyone needs further clarification, please let me know. I know there are many different Ceph dashboards available, including the one that ships with Ceph itself (starting with the Luminous release), but I think this feature would be useful and wouldn't take too much time to implement. If there are any errors in my idea or if I'm misinterpreting something, please let me know. Thanks! :)

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
