Re: [ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

2019-02-19 Thread Brian Topping
> On Feb 19, 2019, at 3:30 PM, Vitaliy Filippov  wrote:
> 
> In our Russian-speaking Ceph chat we swear at the "ceph inside kuber" people all 
> the time, because they often do not understand what state their cluster is in 
> at all.

Agreed 100%. This is a really good way to lock yourself out of your data (and 
maybe lose it), especially if you’re new to Kubernetes and using Rook to manage 
Ceph. 

Some months ago, I was running on VMs under Citrix. Everything is stable on 
Kubernetes and Ceph now, but it's been a lot of work. I'd suggest starting with 
Kubernetes first, especially if you are going to do this on bare metal. I can 
give you some ideas about how to lay things out if you are running with limited 
hardware.

Brian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

2019-02-19 Thread Vitaliy Filippov
In our Russian-speaking Ceph chat we swear at the "ceph inside kuber" people all  
the time, because they often do not understand what state their cluster is in  
at all.


// Sorry to interject :))

--
With best regards,
  Vitaliy Filippov


Re: [ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

2019-02-19 Thread David Turner
Have you ever seen an example of a Ceph cluster being run and managed by
Rook?  It's a really cool idea, and it takes care of containerizing mons, rgw,
mds, etc., which I've been thinking about doing anyway.  Having those
containerized means that you can upgrade all of the mon services before
any of your other daemons are even aware of a new Ceph version, even if
they're running on the same server.  There are some recent upgrade bugs for
small clusters with mons and OSDs on the same node that would have been
mitigated by containerized Ceph versions.  As for putting OSDs in
containers: have you ever needed to run a custom-compiled version of Ceph
for a few OSDs to get past a bug that was causing you trouble?  With
OSDs in containers, you could do that without worrying about that version
of Ceph being used by any other OSDs.
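For reference, pinning the Ceph container image cluster-wide in Rook looks roughly like this. This is a sketch against the Rook v0.9 CephCluster CRD; the image tag, namespace, and paths are illustrative, not from this thread:

```shell
# Sketch: pin the whole Rook cluster to a specific Ceph image build.
# Rook v0.9 takes the image under spec.cephVersion; tag/namespace are
# illustrative. Requires a cluster with the Rook operator installed.
cat <<'EOF' | kubectl apply -f -
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.4    # swap in a custom build here
    allowUnsupported: false
  mon:
    count: 3
  dataDirHostPath: /var/lib/rook
EOF
```

Note that this image is applied cluster-wide; running a custom build on only a few OSDs would mean patching those OSDs' individual deployments, and the operator may revert manual changes.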

On top of all of that, I keep feeling like a dinosaur for not understanding
Kubernetes better and have been really excited since seeing Rook
orchestrating a Ceph cluster in K8s.  I spun up a few VMs to start testing
configuring a Kubernetes cluster.  The Rook Slack channel recommended using
kubeadm to set up K8s to manage Ceph.
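For anyone following along, the kubeadm path looks roughly like this. A sketch only, assuming one control-plane node and a pod-network add-on; the CIDR, add-on choice, and join parameters are illustrative:

```shell
# On the control-plane node (flags and CIDR are illustrative):
kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for your regular user:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on (Flannel shown as one option):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker node, run the join command that `kubeadm init` printed, e.g.:
# kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```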

On Mon, Feb 18, 2019 at 11:50 AM Marc Roos  wrote:

>
> Why not just keep it bare metal?  Especially with future Ceph
> upgrading/testing in mind.  I am running CentOS 7 with Luminous, and am
> running libvirt on the nodes as well.  If you configure them with a
> TLS/SSL connection, you can even nicely migrate a VM from one host/Ceph
> node to the other.
> The next thing I am testing is Mesos, using the Ceph nodes to run
> containers.  I am still testing this on some VMs, but it looks like you
> only have to install a few RPMs (maybe around 300 MB) and 2 extra
> services on the nodes to get this up and running as well.  (But keep in
> mind that the help on their mailing list is not as good as it is here ;))


Re: [ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

2019-02-18 Thread Marc Roos
 
Why not just keep it bare metal?  Especially with future Ceph 
upgrading/testing in mind.  I am running CentOS 7 with Luminous, and am 
running libvirt on the nodes as well.  If you configure them with a 
TLS/SSL connection, you can even nicely migrate a VM from one host/Ceph 
node to the other. 
The next thing I am testing is Mesos, using the Ceph nodes to run 
containers.  I am still testing this on some VMs, but it looks like you 
only have to install a few RPMs (maybe around 300 MB) and 2 extra 
services on the nodes to get this up and running as well.  (But keep in 
mind that the help on their mailing list is not as good as it is here ;))
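The libvirt live migration described above looks roughly like this once TLS certificates are deployed on both hosts. A sketch; the hostnames and VM name are illustrative:

```shell
# Sketch: live-migrate a VM between two libvirt hosts over TLS.
# Assumes CA, server, and client certs are already in place on both
# hosts (typically under /etc/pki/CA/ and /etc/pki/libvirt/).
# "host1", "host2", and "myvm" are illustrative names.
virsh -c qemu+tls://host1/system migrate --live --persistent \
    myvm qemu+tls://host2/system
```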



-Original Message-
From: David Turner [mailto:drakonst...@gmail.com] 
Sent: 18 February 2019 17:31
To: ceph-users
Subject: [ceph-users] Migrating a baremetal Ceph cluster into K8s + Rook

I'm getting some "new" (to me) hardware that I'm going to upgrade my 
home Ceph cluster with.  Currently it's running a Proxmox cluster 
(Debian), which precludes me from upgrading to Mimic.  I am thinking 
about taking the opportunity to convert most of my VMs into containers 
and migrate my cluster into a K8s + Rook configuration, now that Ceph is 
stable on Rook [1].

I haven't ever configured a K8s cluster and am planning to test this out 
on VMs before moving my live data to it.  Has anyone done a 
migration from a baremetal Ceph cluster into K8s + Rook?  Additionally, 
what is a good way for a K8s beginner to get into managing a K8s 
cluster?  I see various places recommend either CoreOS or kubeadm for 
starting up a new K8s cluster, but I don't know the pros/cons of either.

As far as migrating the Ceph services into Rook goes, I would assume the 
process would be pretty simple: add/create new mons, mds, etc. in 
Rook with the baremetal cluster's details, and once those are active and 
working, start decommissioning the services on baremetal.  For me, 
the OSD migration should be similar, since I don't have any multi-device 
OSDs; I only need to worry about migrating individual disks between 
nodes.


[1] 
https://blog.rook.io/rook-v0-9-new-storage-backends-in-town-ab952523ec53
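The per-disk OSD handover described above might look roughly like this on the bare-metal side. A sketch only: the OSD id is illustrative, and it assumes single-device OSDs deployed with ceph-volume that can be re-activated after the disk moves:

```shell
# Sketch: move one single-disk OSD to another node without triggering
# a full rebalance. The OSD id (12) is illustrative.
ceph osd set noout            # keep the cluster from starting recovery
systemctl stop ceph-osd@12    # stop the OSD on the old node

# ...physically move the disk to the new node, then on the new node:
ceph-volume lvm activate --all   # detect and start OSDs on attached disks
ceph osd unset noout             # let the cluster settle again
```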


