Hi list!
Under
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-September/033670.html
I found a situation not unlike ours, but unfortunately either
the list archive fails me or the discussion ended without a
conclusion, so I dare to ask again :)
We currently have a setup of 4 servers with 12 OSDs each, with
journal and data combined on the same disks (no SSDs).
We develop a document management application that accepts user
uploads of all kinds of documents and processes them in several
ways. For any given document, we might create anywhere from tens
to several hundred dependent artifacts.
We are now preparing to move from Gluster to a Ceph-based
backend. The application uses the Apache JClouds library to
talk to the Rados Gateways running on all 4 of these machines,
load-balanced by haproxy.
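For reference, the way we build the connection looks roughly like
this (simplified sketch only; the endpoint and credentials are made
up, and we show the S3-compatible side of radosgw here):

    import org.jclouds.ContextBuilder;
    import org.jclouds.blobstore.BlobStore;
    import org.jclouds.blobstore.BlobStoreContext;

    // haproxy address in front of the four radosgw instances (made up)
    BlobStoreContext context = ContextBuilder.newBuilder("s3")
            .endpoint("http://rgw.example.internal")
            .credentials("ACCESS_KEY", "SECRET_KEY")
            .buildView(BlobStoreContext.class);
    BlobStore blobStore = context.getBlobStore();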
We currently intend to create one container per document and put
all the dependent and derived artifacts into that container as
objects. This gives us nice per-document compartmentalization and
also makes it easy to remove a document together with everything
connected to it.
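In jclouds terms the per-document handling boils down to something
like this (again only a sketch with invented names; blobStore is the
one from the snippet above, and the methods are shown without their
surrounding class):

    import org.jclouds.blobstore.BlobStore;
    import org.jclouds.blobstore.domain.Blob;

    // one container per document, one object per derived artifact;
    // documentId must of course be a valid container/bucket name
    void storeArtifact(BlobStore blobStore, String documentId,
                       String artifactName, byte[] data) {
        // returns false if the container already exists, which is fine for us
        blobStore.createContainerInLocation(null, documentId);
        Blob blob = blobStore.blobBuilder(artifactName)
                .payload(data)
                .build();
        blobStore.putBlob(documentId, blob);
    }

    // removing a document and everything derived from it
    void removeDocument(BlobStore blobStore, String documentId) {
        blobStore.clearContainer(documentId);  // drop the objects first
        blobStore.deleteContainer(documentId);
    }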
During the first test runs we ran into the default limit of
1000 containers per user. In the thread mentioned above that
limit was removed by setting max_buckets to 0. We did the same
and can now upload more than 1000 documents.
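In case it helps anyone finding this thread later, we lifted the
limit with something along these lines (uid made up):

    radosgw-admin user modify --uid=docstore --max-buckets=0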
I would just like to understand:
a) Is this design recommended, or are there reasons to approach
   the whole issue differently, potentially giving up the benefit
   of having all of a document's artifacts under one convenient
   handle?
b) Is there any absolute limit on the number of buckets that we
   will eventually run into? Remember, we are talking about tens
   of millions of containers over time.
c) Should we expect any performance issues with this design, and
   are there parameters we can tune to alleviate them?
Any feedback would be very much appreciated.
Regards,
Daniel
--
Daniel Schneller
Mobile Development Lead
CenterDevice GmbH | Merscheider Straße 1 | 42699 Solingen | Deutschland
tel: +49 1754155711
[email protected] | www.centerdevice.com