Re: [ceph-users] osd be marked down when recovering

2019-06-26 Thread zhanrzh...@teamsun.com.cn
Hello Paul, thanks for your help. The reason I ran this in my test/dev environment is to prepare for my production cluster. If I set nodown, what will happen while clients read/write to an OSD that was previously marked down? How can I avoid problems there, or is there any documentation I can refer to? Thanks!
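Not from the thread, but for context: while the nodown flag is set, a failed OSD is never marked down in the OSD map, so clients can block on requests to it, and the usual signal is slow/blocked requests rather than a down OSD. A minimal sketch of what to watch and how to clean up afterwards, assuming a test cluster:

    # with nodown set, a failed OSD stays "up" in the map, so watch for
    # slow/blocked requests instead of down OSDs
    ceph health detail
    ceph -s

    # clear the flag once the recovery experiment is finished so normal
    # failure handling resumes
    ceph osd unset nodown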

[ceph-users] ceph ansible deploy lvm advanced

2019-06-26 Thread Fabio Abreu
Hi everybody, I am starting a new lab environment with ceph-ansible, BlueStore, and the LVM advanced deployment. Which sizes are recommended for the data, journal/WAL, and DB LVs? Has anyone configured an LVM advanced deployment before? Regards, Fabio -- Best regards, Fabio Abreu Reis
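Not from the thread, but as an illustration of the layout being asked about: the Ceph docs commonly suggest a block.db of roughly 4% of the data device, and a WAL of a few GB. A hypothetical set of LVs for one OSD (volume group and LV names are made up) might be created like this:

    # NVMe/SSD volume group holding DB and WAL for the OSD on /dev/sdb
    vgcreate nvme0 /dev/nvme0n1
    lvcreate -n db-sdb  -L 60G nvme0   # block.db, ~4% of a 1.5 TB data disk
    lvcreate -n wal-sdb -L 2G  nvme0   # block.wal, a few GB is usually enough

These LVs are then referenced from the lvm_volumes list in group_vars/osds.yml (data/db/wal plus their _vg keys); check the exact key names against the ceph-ansible docs for your branch.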

Re: [ceph-users] Thoughts on rocksdb and erasurecode

2019-06-26 Thread Christian Wuerdig
Hm, according to https://tracker.ceph.com/issues/24025 snappy compression should be available out of the box at least since luminous. What ceph version are you running? On Wed, 26 Jun 2019 at 21:51, Rafał Wądołowski wrote: > We changed these settings. Our config now is: > >
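For reference (not from the thread), the running versions and the effective RocksDB options can be checked with something like:

    ceph versions                                            # per-daemon versions
    ceph daemon osd.0 config get bluestore_rocksdb_options   # run on the OSD's host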

[ceph-users] Tech Talk tomorrow: Intro to Ceph

2019-06-26 Thread Sage Weil
Hi everyone, Tomorrow's Ceph Tech Talk will be an updated "Intro to Ceph" talk by Sage Weil. This will be based on a newly refreshed set of slides and provide a high-level introduction to the overall Ceph architecture, RGW, RBD, and CephFS. Our plan is to follow up later this summer with

Re: [ceph-users] ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

2019-06-26 Thread Hayashida, Mami
Please disregard the earlier message. I found the culprit: `osd_crush_update_on_start` was set to false. Mami Hayashida, Research Computing Associate, Univ. of Kentucky ITS Research Computing Infrastructure. On Wed, Jun 26, 2019 at 11:37 AM Hayashida, Mami wrote: > I am trying to build a
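For anyone hitting the same symptom, a rough sketch of how to check the option and place an OSD by hand (IDs, weight and host name are illustrative); with osd_crush_update_on_start disabled, OSDs never register themselves in the CRUSH map on startup:

    # check the effective value on an OSD host
    ceph daemon osd.0 config get osd_crush_update_on_start

    # re-enable it in ceph.conf ([osd] osd crush update on start = true),
    # or place the OSD manually, e.g. a 1.8 TiB device under host osd0:
    ceph osd crush create-or-move osd.0 1.8 host=osd0 root=default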

[ceph-users] ceph-deploy osd create adds osds but weight is 0 and not adding hosts to CRUSH map

2019-06-26 Thread Hayashida, Mami
I am trying to build a Ceph cluster using ceph-deploy. To add OSDs, I used the following commands (which I had successfully used before to build another cluster):
    ceph-deploy osd create --block-db=ssd0/db0 --data=/dev/sdh osd0
    ceph-deploy osd create --block-db=ssd0/db1 --data=/dev/sdi osd0

Re: [ceph-users] Changing the release cadence

2019-06-26 Thread Lars Marowsky-Bree
On 2019-06-26T14:45:31, Sage Weil wrote: Hi Sage, I think that makes sense. I'd have preferred the Oct/Nov target, but that'd have made Octopus quite short. Unsure whether freezing in December with a release in March is too long though. But given how much people scramble, setting that as a

Re: [ceph-users] OSDs taking a long time to boot due to 'clear_temp_objects', even with fresh PGs

2019-06-26 Thread Gregory Farnum
Awesome. I made a ticket and pinged the Bluestore guys about it: http://tracker.ceph.com/issues/40557 On Tue, Jun 25, 2019 at 1:52 AM Thomas Byrne - UKRI STFC wrote: > > I hadn't tried manual compaction, but it did the trick. The db shrunk down to > 50MB and the OSD booted instantly. Thanks! >
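For reference, the manual compaction mentioned here can be done online or with the OSD stopped; roughly (OSD id and path are placeholders):

    # online, via the admin socket on the OSD's host
    ceph daemon osd.12 compact

    # offline, with the OSD stopped
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact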

Re: [ceph-users] Changing the release cadence

2019-06-26 Thread Bob Farrell
March seems sensible to me for the reasons you stated. If a release gets delayed, I'd prefer it to be on the spring side of Christmas (again for the reasons already mentioned). That aside, I'm now very impatient to install Octopus on my 8-node cluster. : ) On Wed, 26 Jun 2019 at 15:46, Sage Weil

Re: [ceph-users] Changing the release cadence

2019-06-26 Thread Sage Weil
Hi everyone, We talked a bit about this during the CLT meeting this morning. How about the following proposal: - Target release date of Mar 1 each year. - Target freeze in Dec. That will allow us to use the holidays to do a lot of testing when the lab infrastructure tends to be somewhat

[ceph-users] RocksDB with SSD journal 3/30/300 rule

2019-06-26 Thread Robert Ruge
G'Day everyone. I'm about to try my first OSDs with a split data drive and journal on an SSD, using some Intel S3500 600GB SSDs I have spare from a previous project. Now I would like to make sure that the 300GB journal fits, but my question is whether that 300GB means 300 * 1000^3 bytes (GB) or 300 * 1024^3 bytes (GiB)?
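Not an authoritative answer, just the arithmetic behind the question, plus an illustrative LV (the VG name is made up); note that LVM's -L uses binary units:

    echo $((300 * 1000**3))   # 300000000000 bytes  (300 GB, decimal)
    echo $((300 * 1024**3))   # 322122547200 bytes  (300 GiB, binary)

    # a 300G LV in LVM is 300 GiB, i.e. ~322 GB decimal
    lvcreate -n db-sdh -L 300G ssd0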

Re: [ceph-users] Changing the release cadence

2019-06-26 Thread Sage Weil
On Wed, 26 Jun 2019, Alfonso Martinez Hidalgo wrote: > I think March is a good idea. Spring had a slight edge over fall in the twitter poll (for whatever that's worth). I see the appeal for fall when it comes to down time for retailers, but as a practical matter for Octopus specifically, a

Re: [ceph-users] Changing the release cadence

2019-06-26 Thread Sage Weil
On Tue, 25 Jun 2019, Alfredo Deza wrote: > On Mon, Jun 17, 2019 at 4:09 PM David Turner wrote: > > > > This was a little long to respond with on Twitter, so I thought I'd share > > my thoughts here. I love the idea of a 12 month cadence. I like October > > because admins aren't upgrading

Re: [ceph-users] pgs incomplete

2019-06-26 Thread Paul Emmerich
Have you tried ceph osd force-create-pg ? If that doesn't work: run ceph-objectstore-tool against the OSD (while it's not running) and use it to force-mark the PG as complete. (I don't know the exact command off the top of my head.) Caution: these are obviously really dangerous commands. Paul -- Paul
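For completeness, the commands being referred to look roughly like this (PG id, OSD id and path are placeholders, and the same warning applies: these can throw away data):

    # recreate the PG as empty (accepts data loss for that PG); recent
    # releases also require --yes-i-really-mean-it
    ceph osd force-create-pg 2.5

    # or, with the OSD stopped, force-mark the PG complete on that OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
        --pgid 2.5 --op mark-complete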

Re: [ceph-users] osd be marked down when recovering

2019-06-26 Thread Paul Emmerich
Looks like it's overloaded and runs into a timeout. For a test/dev environment: try to set the nodown flag for this experiment if you just want to ignore these timeouts completely. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH
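For reference, a minimal sketch of that suggestion (the flag is cluster-wide, so it only makes sense on a test cluster):

    ceph osd set nodown     # monitors ignore 'osd down' reports
    # ... run the recovery experiment ...
    ceph osd unset nodown   # restore normal failure handling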

Re: [ceph-users] ceph balancer - Some osds belong to multiple subtrees

2019-06-26 Thread Paul Emmerich
Device classes are implemented with magic invisible crush trees; you've got two completely independent trees internally: one for crush rules mapping to HDDs, one for legacy crush rules not specifying a device class. The balancer *should* be aware of this and ignore it, but I'm not sure about the
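Not from the thread, but the shadow trees and the rules that do (or don't) reference a device class can be inspected with something like (the rule name is an example):

    ceph osd crush tree --show-shadow          # per-device-class shadow hierarchy
    ceph osd crush rule ls
    ceph osd crush rule dump replicated_rule   # class-aware rules take an item like "default~hdd"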

[ceph-users] osd be marked down when recovering

2019-06-26 Thread zhanrzh...@teamsun.com.cn
Hi all, I started a Ceph cluster on my machine in development mode to estimate the recovery time after increasing pgp_num. All daemons run on one machine. CPU: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, memory: 377GB, OS: CentOS Linux release 7.6.1810, ceph
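A rough sketch of the experiment being described (pool name and PG count are placeholders):

    ceph osd pool set rbd pg_num 256
    ceph osd pool set rbd pgp_num 256
    ceph -s    # watch backfill/recovery progress
    ceph -w    # or follow the cluster log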

[ceph-users] show-prediction-config - no valid command found?

2019-06-26 Thread Nigel Williams
Have I missed a step? Diskprediction module is not working for me.
    root@cnx-11:/var/log/ceph# ceph device show-prediction-config
    no valid command found; 10 closest matches:
    root@cnx-11:/var/log/ceph# ceph mgr module ls
    {
        "enabled_modules": [
            "dashboard",
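In case it helps others: show-prediction-config only appears once a diskprediction module is enabled; a hedged sketch of the usual steps (module and mode names as in Nautilus):

    ceph mgr module enable diskprediction_local    # or diskprediction_cloud
    ceph config set global device_failure_prediction_mode local
    ceph device show-prediction-config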

Re: [ceph-users] Thoughts on rocksdb and erasurecode

2019-06-26 Thread Rafał Wądołowski
We changed these settings. Our config now is: bluestore_rocksdb_options =
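The option string itself was cut off above; purely as an illustration (values approximate the stock defaults with compression switched to snappy, not the poster's actual settings, and the option replaces the whole default string), such a ceph.conf line might look like:

    [osd]
    bluestore_rocksdb_options = compression=kSnappyCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,recycle_log_file_num=4,write_buffer_size=268435456,writable_file_max_buffer_size=0,compaction_readahead_size=2097152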

[ceph-users] ceph balancer - Some osds belong to multiple subtrees

2019-06-26 Thread Wolfgang Lendl
Hi, tried to enable the ceph balancer on a 12.2.12 cluster and got this: mgr[balancer] Some osds belong to multiple subtrees: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,