Similar to Dan's situation, we use the --cluster name concept for our
operations, primarily for "datamover" nodes which do incremental rbd
export/import between distinct clusters. This is coordinated entirely by
passing the --cluster option throughout.
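For context, the incremental flow described above is typically a
pipeline of rbd export-diff into import-diff, selecting each cluster
with --cluster. A rough sketch (pool, image, and snapshot names are
illustrative, not ours):

```shell
#!/bin/sh
# Incremental replication of one RBD image from clusterA to clusterB.
# --cluster <name> makes the client read /etc/ceph/<name>.conf.
POOL=rbd
IMAGE=vm-disk-1

# Export only the delta between two snapshots on the source cluster,
# stream it over a pipe, and apply it on the destination cluster.
rbd --cluster clusterA export-diff \
    --from-snap snap1 "${POOL}/${IMAGE}@snap2" - \
  | rbd --cluster clusterB import-diff - "${POOL}/${IMAGE}"
```

The destination image must already exist and contain snap1 for the
diff to apply cleanly.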

The way we set it up is that all clusters are actually named "ceph" on the
mons, osds, etc., but the clients themselves get /etc/ceph/clusterA.conf
and /etc/ceph/clusterB.conf so that we can differentiate. I would like to
see clients' ability to specify which conf file to read preserved.
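Concretely, --cluster on a client just changes which conf file (and
matching keyring) is looked up under /etc/ceph, so the two forms below
are equivalent (pool name is illustrative):

```shell
#!/bin/sh
# Selecting a cluster by name: the client resolves the name to
# /etc/ceph/clusterA.conf and /etc/ceph/clusterA.client.admin.keyring.
rbd --cluster clusterA ls mypool

# Equivalent: point at the conf file explicitly with -c.
rbd -c /etc/ceph/clusterA.conf ls mypool
```

This is the functionality we rely on and would want preserved even if
cluster naming goes away on the daemon side.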

As a note, though, we went the route of naming all clusters "ceph" to
work around difficulties with non-standard naming, so this issue does need
some attention.

On Fri, Jun 9, 2017 at 8:19 AM, Alfredo Deza <ad...@redhat.com> wrote:

> On Thu, Jun 8, 2017 at 3:54 PM, Sage Weil <sw...@redhat.com> wrote:
> > On Thu, 8 Jun 2017, Bassam Tabbara wrote:
> >> Thanks Sage.
> >>
> >> > At CDM yesterday we talked about removing the ability to name your
> ceph
> >> > clusters.
> >>
> >> Just to be clear, it would still be possible to run multiple ceph
> >> clusters on the same nodes, right?
> >
> > Yes, but you'd need to either (1) use containers (so that different
> > daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd
> > unit files to do... something.
>
> In the container case, I need to clarify that ceph-docker deployed
> with ceph-ansible is not capable of doing this, since
> the ad-hoc systemd units use the hostname as part of the identifier
> for the daemon, e.g:
>
>     systemctl enable ceph-mon@{{ ansible_hostname }}.service
>
>
> >
> > This is actually no different from Jewel. It's just that currently you
> can
> > run a single cluster on a host (without containers) but call it 'foo' and
> > knock yourself out by passing '--cluster foo' every time you invoke the
> > CLI.
> >
> > I'm guessing you're in the (1) case anyway and this doesn't affect you at
> > all :)
> >
> > sage
> > --
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> > the body of a message to majord...@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Respectfully,

Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Senior CyberInfrastructure Storage Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 102