Hi there,

On 06/06/2016 09:56 PM, Patrick McGarry wrote:

> So we have gone from not having a Ceph Tech Talk this month…to having
> two! As a part of our regularly scheduled Ceph Tech Talk series, Lenz
> Grimmer from OpenATTIC will be talking about the architecture of their
> management/GUI solution, which will also include a live demo.

Thanks a lot for giving me a chance to talk about our project, Patrick,
much appreciated!

For those of you who want to learn more about openATTIC
(http://openattic.org) in advance or can't make it to the tech talk, I'd
like to give you a quick introduction and some pointers to our project.

openATTIC was started as a "traditional" storage management system
(CIFS/NFS, iSCSI/FC, DRBD, Btrfs, ZFS) around 5 years ago. It supports
managing multiple nodes and has built-in monitoring of the storage
resources (using Nagios/Icinga, with PNP4Nagios storing performance data
in RRDs). The openATTIC backend is based on Python/Django; with version
2.0, which is currently under development, we added a RESTful API and a
WebUI based on AngularJS and Bootstrap.
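
To give you a rough idea of what talking to that API looks like, here is a
minimal Python sketch. The host, endpoint and credentials below are made
up for illustration and are not taken from an actual openATTIC
installation:

    import requests

    # Hypothetical openATTIC host and API base URL, for illustration only
    BASE_URL = 'http://openattic.example.com/openattic/api'

    # The API speaks plain HTTP verbs and JSON; credentials are placeholders
    resp = requests.get(BASE_URL + '/volumes', auth=('openattic', 'secret'))
    resp.raise_for_status()

    # Assuming the endpoint returns JSON; the actual payload shape may differ
    for item in resp.json():
        print(item)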

We started adding Ceph support in early 2015, in response to users who
were facing data growth at a pace that traditional storage systems could
no longer keep up with. At first, we added the capability to map and share
RBDs as block volumes, and we provided a simple CRUSH map editor. We
started collaborating with SUSE on the Ceph features at the beginning of
the year and have made good progress on extending the functionality since
then.

At this stage, we use the librados and librbd Python bindings to
communicate with the Ceph cluster, but we're also keeping an eye on
ceph-mgr, which is currently under development.
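
For those who haven't used these bindings yet, here is a minimal sketch of
the kind of calls involved. This is just an illustration (not our actual
code) and assumes a readable /etc/ceph/ceph.conf plus an existing pool
named 'rbd':

    import rados
    import rbd

    # Connect to the cluster using the local configuration and keyring
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Cluster-wide usage statistics (kb, kb_used, kb_avail, num_objects)
        print(cluster.get_cluster_stats())

        # List the RBD images in the (assumed) 'rbd' pool via librbd
        ioctx = cluster.open_ioctx('rbd')
        try:
            print(rbd.RBD().list(ioctx))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()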

For additional remote node management and monitoring features, we intend
to use Salt and collectd. Currently, our focus is on building a dashboard
to monitor the cluster's performance and health (using the D3 JavaScript
library for the graphs), as well as on creating the WebUI views that
display the cluster's various objects such as pools, OSDs, etc.
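
To illustrate where such dashboard data can come from, cluster health and
status can be fetched as JSON through librados' mon command interface.
Again, this is only a sketch, not our actual implementation:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Ask the monitors for the cluster status in JSON form
        cmd = json.dumps({'prefix': 'status', 'format': 'json'})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        status = json.loads(outbuf)

        # Health and PG/usage summaries could feed the dashboard widgets
        print(status['health'])
        print(status['pgmap'])
    finally:
        cluster.shutdown()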

openATTIC development takes place in the open: the code is hosted in a
Mercurial repository on Bitbucket [1], and all issues (bugs and feature
specs) are tracked in a public Jira instance [2]. New code is submitted
via pull requests, and we require code reviews before it is merged.
We also have an extensive test suite that exercises both the REST API and
the WebUI.
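
To give a feel for what a test on the REST API level can look like, here
is a rough sketch. The endpoint path and credentials are assumptions for
the sake of the example and not copied from our suite:

    import requests

    API = 'http://localhost/openattic/api'  # assumed local test instance

    def test_pool_listing():
        """Smoke test: the (assumed) pool listing endpoint answers with JSON."""
        resp = requests.get(API + '/ceph/pools', auth=('openattic', 'secret'))
        assert resp.status_code == 200
        assert isinstance(resp.json(), (list, dict))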

The Ceph functionality is still under development [3], and the WebUI does
not yet make use of everything the API provides [4], but we'd like to
invite you to take a look at what we have so far and let us know whether
we're heading in the right direction.

Our intention is to provide a Ceph Management and Monitoring tool that
administrators *want* to use and that makes sense. So any feedback or
comments are welcome and appreciated [5].

Thanks!

Lenz

[1] https://bitbucket.org/openattic/openattic/
[2] https://tracker.openattic.org/
[3] https://wiki.openattic.org/display/OP/openATTIC+Ceph+Management+Roadmap+and+Implementation+Plan
[4] https://wiki.openattic.org/display/OP/openATTIC+Ceph+REST+API+overview
[5] http://openattic.org/get-involved.html


