Re: [ceph-users] Migration of a Ceph cluster to a new datacenter and new IPs

2018-12-27 Thread Marcus Müller
> On Wed, Dec 19, 2018 at 8:55 PM Marcus Müller wrote: >> Hi all, >> we’re running a ceph hammer cluster with 3 mons and 24 osds (on the same 3 nodes) >> and need to migrate …

[ceph-users] Migration of a Ceph cluster to a new datacenter and new IPs

2018-12-19 Thread Marcus Müller
Hi all, we’re running a ceph hammer cluster with 3 mons and 24 osds (on the same 3 nodes) and need to migrate all servers to a new datacenter and change the IPs of the nodes. I found this tutorial: …
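
For reference, the "messy way" of changing monitor IPs described in the Ceph docs boils down to editing the monmap by hand. A rough sketch only; the mon names and new IPs below are placeholders, and all mons must be stopped before injecting:

  ceph mon getmap -o /tmp/monmap                 # export the current monmap
  monmaptool --print /tmp/monmap                 # check the existing mon entries
  monmaptool --rm mon1 --rm mon2 --rm mon3 /tmp/monmap
  monmaptool --add mon1 10.0.0.1:6789 --add mon2 10.0.0.2:6789 --add mon3 10.0.0.3:6789 /tmp/monmap
  ceph-mon -i mon1 --inject-monmap /tmp/monmap   # repeat on each mon host with its own id

Afterwards mon_host / mon_initial_members in ceph.conf needs to be updated on all nodes and clients as well.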

[ceph-users] Purge Ceph Node and reuse it for another cluster

2018-09-26 Thread Marcus Müller
Hi all, is it safe to purge a ceph osd / mon node as described here: http://docs.ceph.com/docs/giant/rados/deployment/ceph-deploy-purge/ and later use this node, with the same OS, again for another production ceph cluster?
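
For what it's worth, the ceph-deploy side of that page is roughly the following (the node name is a placeholder); whether the node can be reused cleanly also depends on wiping the data disks:

  ceph-deploy purge ceph5        # uninstall the ceph packages from the node
  ceph-deploy purgedata ceph5    # remove /var/lib/ceph and /etc/ceph on the node
  ceph-deploy forgetkeys         # drop the locally cached keyrings on the admin node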

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-11 Thread Marcus Müller
Yes, but all I want to know is whether my way of changing the tunables is right or not. > On 11.01.2017 at 13:11, Shinobu Kinjo <ski...@redhat.com> wrote: > Please refer to Jens's message. > Regards, >> On Wed, Jan 11, 2017 at 8:53 PM, Marcus Mü…
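
For context, changing tunables is normally either a one-liner that selects a whole profile, or a manual edit of the decompiled crushmap. A sketch of both (the profile name is only an example, and either way triggers data movement):

  ceph osd crush show-tunables          # inspect the current tunables
  ceph osd crush tunables hammer        # switch to a named profile

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt   # edit the "tunable ..." lines in the text file
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new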

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-11 Thread Marcus Müller
> You likely need to tweak your crushmap to handle this configuration better or, preferably, move to a more uniform configuration. > On Wed, Jan 11, 2017 at 5:38 PM, Marcus Müller <mueller.mar...@posteo.de> wrote: >> I have to thank you all. You give free support and …
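
A quick way to see how non-uniform the layout actually is (a generic sketch, nothing cluster-specific assumed):

  ceph osd tree                 # per-host buckets and per-osd crush weights under root default
  ceph osd dump | grep ^pool    # pool sizes and replication settings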

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-10 Thread Marcus Müller
… -Sam > On Mon, Jan 9, 2017 at 11:08 PM, Marcus Müller <mueller.mar...@posteo.de> wrote: >> Ok, I understand, but how can I debug why they are not running as they should? I thought everything was fine because ceph -s said they are up an…

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-10 Thread Marcus Müller
… is being padded out with an extra osd which happens to have the data to keep you up to the right number of replicas. Please refer back to Brad's post. -Sam >> On Mon, Jan 9, 2017 at 11:08 PM, Marcus Müller <mueller.mar...@posteo.de> wrote: >> Ok, …

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
… Here is an example:

  "up": [ 1, 0, 2 ],
  "acting": [ 1, 0, 2 ],

Regards, > On Tue, Jan 10, 2017 at 3:52 PM, Marcus Müller <mueller.mar...@posteo.de> …
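
For reference, those "up" and "acting" sets come straight out of the pg query; they can also be read with a one-liner (the pg id below is a placeholder):

  ceph pg map 0.25     # prints something like: osdmap eN pg 0.25 (0.25) -> up [1,0,2] acting [1,0,2]
  ceph pg 0.25 query   # full JSON, including the "up" and "acting" arrays shown above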

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
> On Tue, Jan 10, 2017 at 3:44 PM, Marcus Müller <mueller.mar...@posteo.de> wrote: >> All osds are currently up: >> health HEALTH_WARN >> 4 pgs stuck unclean >> recovery 4482/58798254 objects degraded (0.008…

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
All osds are currently up:

     health HEALTH_WARN
            4 pgs stuck unclean
            recovery 4482/58798254 objects degraded (0.008%)
            recovery 420522/58798254 objects misplaced (0.715%)
            noscrub,nodeep-scrub flag(s) set
     monmap e9: 5 mons at …

Re: [ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
… Christian Wuerdig <christian.wuer...@gmail.com>: > On Tue, Jan 10, 2017 at 8:23 AM, Marcus Müller <mueller.mar...@posteo.de> wrote: > Hi all, Recently I added a new node with new osds to my cluster, which, of course …

[ceph-users] PGs stuck active+remapped and osds lose data?!

2017-01-09 Thread Marcus Müller
Hi all, Recently I added a new node with new osds to my cluster, which, of course, resulted in backfilling. At the end, there are 4 pgs left in the state active+remapped and I don’t know what to do. Here is how my cluster currently looks: ceph -s health HEALTH_WARN 4 …
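
The usual starting point for pgs stuck in active+remapped is to list them and compare their up and acting sets (a sketch; the pg id below is a placeholder):

  ceph health detail           # names the stuck pgs
  ceph pg dump_stuck unclean   # stuck pgs with their up/acting osds
  ceph pg map 2.33             # up vs acting for a single pg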

[ceph-users] Failed to install ceph via ceph-deploy on Ubuntu 14.04 trusty

2017-01-02 Thread Marcus Müller
Hi all, I tried to install ceph on a new node with ceph-deploy 1.5.35 but it fails. Here is the output:

  # ceph-deploy install --release hammer ceph5
  [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
  [ceph_deploy.cli][INFO  ] Invoked (1.5.35): /usr/bin/ceph-deploy …
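
If ceph-deploy keeps failing on that node, a common fallback (a sketch; this assumes the standard hammer repo for trusty on download.ceph.com, adjust for a local mirror) is to add the repo and install the packages by hand on the target node:

  wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
  echo deb https://download.ceph.com/debian-hammer/ trusty main > /etc/apt/sources.list.d/ceph.list
  apt-get update && apt-get install -y ceph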

Re: [ceph-users] docs.ceph.com down?

2017-01-02 Thread Marcus Müller
… <https://github.com/ceph/ceph/tree/master/doc> > On Mon, Jan 2, 2017 at 7:45 PM, Andre Forigato <andre.forig...@rnp.br> wrote: > Hello Marcus, > Yes, it's down. :-( > André

[ceph-users] docs.ceph.com down?

2017-01-02 Thread Marcus Müller
Hi all, I have not been able to reach docs.ceph.com for some days. Is the site really down, or do I have a problem here? Regards, Marcus

[ceph-users] docs.ceph.com down?

2017-01-02 Thread Marcus Müller
Hi all, I have not been able to reach docs.ceph.com for some days. Is the site really down, or do I have a problem? Regards, Marcus

[ceph-users] Need help! Ceph backfill_toofull and recovery_wait+degraded

2016-11-01 Thread Marcus Müller
Hi all, I have a big problem and I really hope someone can help me! We have been running a ceph cluster for a year now. Version: 0.94.7 (Hammer). Here is some info; our osd map is:

  ID WEIGHT   TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
  -1 26.67998 root default …
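
backfill_toofull means some osds have crossed the backfill-full threshold, so recovery stalls until space is freed or the threshold is raised. A sketch of the usual first steps (the 0.90 value is only an example, and raising it is a temporary measure, not a fix for a nearly full cluster):

  ceph health detail      # shows which pgs are backfill_toofull and which osds are near full
  ceph df; ceph osd tree  # per-pool usage and per-osd weights
  ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'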