Re: [ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Eugen Block
I created http://tracker.ceph.com/issues/38310 for this. Regards, Eugen Quoting Konstantin Shalygin: On 2/14/19 2:21 PM, Eugen Block wrote: Already did, but now with highlighting ;-) http://docs.ceph.com/docs/luminous/rados/operations/health-checks/?highlight=osd_deep_mon_scrub_interval

Re: [ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Konstantin Shalygin
On 2/14/19 2:21 PM, Eugen Block wrote: Already did, but now with highlighting ;-) http://docs.ceph.com/docs/luminous/rados/operations/health-checks/?highlight=osd_deep_mon_scrub_interval http://docs.ceph.com/docs/mimic/rados/operations/health-checks/?highlight=osd_deep_mon_scrub_interval I

Re: [ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Eugen Block
Already did, but now with highlighting ;-) http://docs.ceph.com/docs/luminous/rados/operations/health-checks/?highlight=osd_deep_mon_scrub_interval http://docs.ceph.com/docs/mimic/rados/operations/health-checks/?highlight=osd_deep_mon_scrub_interval Quoting Konstantin Shalygin: On 2/14/19

Re: [ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Konstantin Shalygin
On 2/14/19 2:16 PM, Eugen Block wrote: Exactly, it's also not available in a Mimic test-cluster. But it's mentioned in the docs for L and M (I didn't check the docs for other releases), that's what I was wondering about. Can you provide a URL to this page? k

Re: [ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Eugen Block
My Ceph Luminous doesn't know anything about this option: # ceph daemon osd.7 config help osd_deep_mon_scrub_interval { "error": "Setting not found: 'osd_deep_mon_scrub_interval'" } Exactly, it's also not available in a Mimic test-cluster. But it's mentioned in the docs for L and M (I

Re: [ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Konstantin Shalygin
On 2/13/19 9:17 PM, Eugen Block wrote: Do you have any comment on osd_deep_mon_scrub_interval? My Ceph Luminous doesn't know anything about this option: ``` # ceph daemon osd.7 config help osd_deep_mon_scrub_interval { "error": "Setting not found: 'osd_deep_mon_scrub_interval'" } ``` k
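
For reference, a quick way to check which scrub-related options a daemon of a given release actually recognizes (osd.7 is the daemon from the message above; substitute your own daemon names):

```
# List every scrub-related option this OSD build knows about
ceph daemon osd.7 config show | grep scrub

# The monitor-store scrub settings live on the mons, not the OSDs
ceph daemon mon.$(hostname -s) config show | grep scrub
```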

[ceph-users] HDD OSD 100% busy reading OMAP keys RGW

2019-02-13 Thread Wido den Hollander
Hi, On a cluster running RGW only I'm running into BlueStore 12.2.11 OSDs being 100% busy sometimes. This cluster has 85k stale indexes (stale-instances list) and I've been slowly trying to remove them. I noticed that regularly OSDs read their HDD heavily and that device then becomes 100% busy.
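
A hedged sketch of the kind of commands used to chase this down (the OSD id is a placeholder; the stale-instances subcommand is the one referenced in the message, available from 12.2.11):

```
# Find the saturated device
iostat -x 5

# See what the busy OSD is actually doing; omap-heavy requests usually show up here
ceph daemon osd.<id> dump_ops_in_flight
ceph daemon osd.<id> dump_historic_ops

# List the stale bucket index instances being cleaned up
radosgw-admin reshard stale-instances list
```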

Re: [ceph-users] Bluestore HDD Cluster Advice

2019-02-13 Thread Wido den Hollander
On 2/14/19 4:40 AM, John Petrini wrote: > Okay that makes more sense, I didn't realize the WAL functioned in a > similar manner to filestore journals (though now that I've had another > read of Sage's blog post, New in Luminous: BlueStore, I notice he does > cover this). Is this to say that

Re: [ceph-users] Bluestore HDD Cluster Advice

2019-02-13 Thread John Petrini
Okay that makes more sense, I didn't realize the WAL functioned in a similar manner to filestore journals (though now that I've had another read of Sage's blog post, New in Luminous: BlueStore, I notice he does cover this). Is this to say that writes are acknowledged as soon as they hit the WAL?
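
For context on the question above: in BlueStore only small writes are deferred, i.e. acknowledged once committed to the RocksDB WAL and flushed to the data device later, while larger writes go straight to the block device. The threshold can be read per OSD (osd.0 is a placeholder):

```
# Writes at or below this size (bytes) are deferred via the WAL on rotational media
ceph daemon osd.0 config get bluestore_prefer_deferred_size_hdd
# ...and on solid-state media
ceph daemon osd.0 config get bluestore_prefer_deferred_size_ssd
```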

Re: [ceph-users] Fwd: NAS solution for CephFS

2019-02-13 Thread Marvin Zhang
On Thu, Feb 14, 2019 at 8:09 AM Jeff Layton wrote: > > > Hi, > > As http://docs.ceph.com/docs/master/cephfs/nfs/ says, it's OK to > > config active/passive NFS-Ganesha to use CephFs. My question is if we > > can use active/active nfs-ganesha for CephFS. > > (Apologies if you get two copies of

Re: [ceph-users] Fwd: NAS solution for CephFS

2019-02-13 Thread Jeff Layton
> Hi, > As http://docs.ceph.com/docs/master/cephfs/nfs/ says, it's OK to > config active/passive NFS-Ganesha to use CephFs. My question is if we > can use active/active nfs-ganesha for CephFS. (Apologies if you get two copies of this. I sent an earlier one from the wrong account and it got stuck
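
For readers following the thread, a minimal FSAL_CEPH export sketch for ganesha.conf (export id, pseudo path and cephx user are illustrative; active/active additionally needs the clustered recovery backend discussed in this thread, whose configuration differs between Ganesha versions):

```
EXPORT {
    Export_ID = 100;
    Path = "/";                 # CephFS path to export
    Pseudo = "/cephfs";         # NFSv4 pseudo path seen by clients
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;            # serve CephFS through libcephfs
        User_Id = "ganesha";    # cephx user (client.ganesha)
    }
}
```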

Re: [ceph-users] [Ceph-community] Deploy and destroy monitors

2019-02-13 Thread David Turner
Ceph-users is the proper ML to post questions like this. On Thu, Dec 20, 2018 at 2:30 PM Joao Eduardo Luis wrote: > On 12/20/2018 04:55 PM, João Aguiar wrote: > > I am having an issue with "ceph-deploy mon" > > > > I started by creating a cluster with one monitor with "ceph-deploy > new"…

Re: [ceph-users] [Ceph-community] Ceph SSE-KMS integration to use Safenet as Key Manager service

2019-02-13 Thread David Turner
Ceph-users is the correct ML to post questions like this. On Wed, Jan 2, 2019 at 5:40 PM Rishabh S wrote: > Dear Members, > > Please let me know if you have any link with examples/detailed steps of > Ceph-Safenet(KMS) integration. > > Thanks & Regards, > Rishabh > >

Re: [ceph-users] [Ceph-community] Error during playbook deployment: TASK [ceph-mon : test if rbd exists]

2019-02-13 Thread David Turner
Ceph-users ML is the proper mailing list for questions like this. On Sat, Jan 26, 2019 at 12:31 PM Meysam Kamali wrote: > Hi Ceph Community, > > I am using ansible 2.2 and ceph branch stable-2.2, on centos7, to deploy > the playbook. But the deployment get hangs in this step "TASK [ceph-mon : >

Re: [ceph-users] [Ceph-community] Need help related to ceph client authentication

2019-02-13 Thread David Turner
The Ceph-users ML is the correct list to ask questions like this. Did you figure out the problems/questions you had? On Tue, Dec 4, 2018 at 11:39 PM Rishabh S wrote: > Hi Gaurav, > > Thank You. > > Yes, I am using boto, though I was looking for suggestions on how my ceph > client should get

Re: [ceph-users] Bluestore HDD Cluster Advice

2019-02-13 Thread Vitaliy Filippov
Hello, We'll soon be building out four new luminous clusters with Bluestore. Our current clusters are running filestore so we're not very familiar with Bluestore yet and I'd like to have an idea of what to expect. Here are the OSD hardware specs (5x per cluster): 2x 3.0GHz 18c/36t 22x 1.8TB 10K

Re: [ceph-users] all vms can not start up when boot all the ceph hosts.

2019-02-13 Thread David Turner
This might not be a Ceph issue at all depending on if you're using any sort of caching. If you have caching on your disk controllers at all, then the write might have happened to the cache but never made it to the OSD disks which would show up as problems on the VM RBDs. Make sure you have
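
If librbd caching is also in play, the usual guard against exactly this failure mode is the writethrough-until-flush behaviour; a hedged ceph.conf sketch for the client side:

```
[client]
rbd cache = true
# Stay in writethrough mode until the guest issues its first flush, so an
# acknowledged write cannot be lost if the VM or host dies before flushing
rbd cache writethrough until flush = true
```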

Re: [ceph-users] RBD image format v1 EOL ...

2019-02-13 Thread Gregory Farnum
On Wed, Feb 13, 2019 at 10:37 AM Jason Dillaman wrote: > > For the future Ceph Octopus release, I would like to remove all > remaining support for RBD image format v1 images barring any > substantial pushback. > > The image format for new images has been defaulted to the v2 image > format since

Re: [ceph-users] how to mount one of the cephfs namespace using ceph-fuse?

2019-02-13 Thread David Turner
Note that this format in fstab does require a certain version of util-linux because of the funky format of the line. Pretty much it maps all command line options at the beginning of the line separated with commas. On Wed, Feb 13, 2019 at 2:10 PM David Turner wrote: > I believe the fstab line

Re: [ceph-users] how to mount one of the cephfs namespace using ceph-fuse?

2019-02-13 Thread David Turner
I believe the fstab line for ceph-fuse in this case would look something like [1] this. We use a line very similar to that to mount cephfs at a specific client_mountpoint that the specific cephx user only has access to. [1] id=acapp3,client_mds_namespace=fs1 /tmp/ceph fuse.ceph
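
A plausible completion of that fstab line (the id= and client_mds_namespace= parts are from the message; the mount options after fuse.ceph are an assumption):

```
# /etc/fstab -- ceph-fuse mount of filesystem "fs1" as cephx user "acapp3"
id=acapp3,client_mds_namespace=fs1  /tmp/ceph  fuse.ceph  defaults,_netdev  0  0
```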

Re: [ceph-users] jewel10.2.11 EC pool out a osd, its PGs remap to the osds in the same host

2019-02-13 Thread Gregory Farnum
Your CRUSH rule for EC pools is forcing that behavior with the line "step chooseleaf indep 1 type ctnr". If you want different behavior, you'll need a different CRUSH rule. On Tue, Feb 12, 2019 at 5:18 PM hnuzhoulin2 wrote: > Hi, cephers > > > I am building a ceph EC cluster.when a disk is
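
For comparison, a rack-aware erasure rule sketch; the bucket type names must match your own CRUSH map (which here uses a custom type "ctnr"), Jewel writes "ruleset" where Luminous and later write "id", and "choose indep 0 type rack" needs at least k+m racks:

```
rule rgw_ec_rack_aware {
    ruleset 1
    type erasure
    min_size 3
    max_size 6
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default
    # spread across racks first, then pick one leaf inside each rack
    step choose indep 0 type rack
    step chooseleaf indep 1 type ctnr
    step emit
}
```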

[ceph-users] RBD image format v1 EOL ...

2019-02-13 Thread Jason Dillaman
For the future Ceph Octopus release, I would like to remove all remaining support for RBD image format v1 images barring any substantial pushback. The image format for new images has been defaulted to the v2 image format since Infernalis, the v1 format was officially deprecated in Jewel, and
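
A hedged way to check whether any v1 images are still around, and one simple migration path (the pool name "rbd" and image names are placeholders; rbd cp copies data only, so snapshots and clones need separate handling):

```
# Flag any remaining format 1 images in a pool
for img in $(rbd ls rbd); do
    rbd info rbd/"$img" | grep -q 'format: 1' && echo "rbd/$img is still v1"
done

# Copy the data into a new image, which is created with the current default format (v2)
rbd cp rbd/oldimage rbd/oldimage-v2
```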

Re: [ceph-users] compacting omap doubles its size

2019-02-13 Thread David Turner
Sorry for the late response on this, but life has been really busy over the holidays. We compact our omaps offline with the ceph-kvstore-tool. Here [1] is a copy of the script that we use for our clusters. You might need to modify things a bit for your environment. I don't remember which
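
The linked script boils down to something like the following (OSD id, omap backend and paths are assumptions to adapt; run it only while the OSD is stopped):

```
systemctl stop ceph-osd@12
# FileStore keeps its omap under current/omap; use "rocksdb" instead of "leveldb"
# if filestore_omap_backend was switched on this cluster
ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-12/current/omap compact
systemctl start ceph-osd@12
```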

Re: [ceph-users] v12.2.11 Luminous released

2019-02-13 Thread Neha Ojha
On Wed, Feb 13, 2019 at 12:49 AM Siegfried Höllrigl < siegfried.hoellr...@xidras.com> wrote: > Hi ! > > We have now successfully upgraded (from 12.2.10) to 12.2.11. > Seems to be quite stable. (Using RBD, CephFS and RadosGW) > Great! > > Most of our OSDs are still on Filestore. > > Should we set
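
For reference, the flag in question is set cluster-wide, and per the release notes it cannot be unset again afterwards:

```
# Only after every daemon in the cluster is running 12.2.11 or later
ceph osd set pglog_hardlimit
```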

Re: [ceph-users] Bluestore HDD Cluster Advice

2019-02-13 Thread John Petrini
Anyone have any insight to offer here? Also I'm now curious to hear about experiences with 512e vs 4kn drives.

Re: [ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Eugen Block
Thank you, Konstantin, I'll give that a try. Do you have any comment on osd_deep_mon_scrub_interval? Eugen Quoting Konstantin Shalygin: The expectation was to prevent the automatic deep-scrubs but they are started anyway You can disable deep-scrubs per pool via `ceph osd pool set

Re: [ceph-users] systemd/rbdmap.service

2019-02-13 Thread Jason Dillaman
The "Wants=" clause is just a way to say that there is a soft-dependency between unit files. In this case, it's saying that if you "systemctl enable rbdmap.service", it will also ensure that "network-online.target" and "remote-fs-pre.target" are enabled. The "Before=" and "After=" clauses just

Re: [ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Konstantin Shalygin
The expectation was to prevent the automatic deep-scrubs but they are started anyway You can disable deep-scrubs per pool via `ceph osd pool set <pool> nodeep-scrub 1`. k
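
Spelled out (the pool name is a placeholder):

```
# Per pool
ceph osd pool set <pool> nodeep-scrub 1   # disable automatic deep-scrubs
ceph osd pool set <pool> nodeep-scrub 0   # re-enable

# Or cluster-wide, until the flag is cleared again
ceph osd set nodeep-scrub
ceph osd unset nodeep-scrub
```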

[ceph-users] How to control automatic deep-scrubs

2019-02-13 Thread Eugen Block
Hi cephers, I'm struggling a little with the deep-scrubs. I know this has been discussed multiple times (e.g. in [1]) and we also use a known crontab script in a Luminous cluster (12.2.10) to start the deep-scrubbing manually (a quarter of all PGs 4 times a week). The script works just
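
For readers who don't have the script at hand, the manual approach amounts to issuing deep-scrubs per PG from cron; a minimal sketch (the pool name and the header-skipping awk are assumptions about the plain-text output):

```
# Deep-scrub one PG by hand
ceph pg deep-scrub 1.2f

# Feed a pool's PG ids into whatever scheduling the cron script applies
ceph pg ls-by-pool <poolname> | awk 'NR > 1 {print $1}'
```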

Re: [ceph-users] systemd/rbdmap.service

2019-02-13 Thread Clausen , Jörn
Thanks, I wasn't aware of that mount option. Whether this is more intuitive to me (i.e. the lesser violation of the Principle of Least Surprise from my point of view) is a whole different matter... On 13.02.2019 at 11:07, Marc Roos wrote: Maybe _netdev? /dev/rbd/rbd/influxdb /var/lib/influxdb

Re: [ceph-users] systemd/rbdmap.service

2019-02-13 Thread Marc Roos
Maybe _netdev? /dev/rbd/rbd/influxdb /var/lib/influxdb xfs _netdev 0 0 To be honest I can remember having something similar long time ago, but just tested it on centos7, and have no problems with this. -Original Message- From: Clausen, Jörn [mailto:jclau...@geomar.de]

[ceph-users] systemd/rbdmap.service

2019-02-13 Thread Clausen , Jörn
Hi! I am new to Ceph, Linux, systemd and all that stuff. I have set up a test/toy Ceph installation using ceph-ansible, and now try to understand RBD. My RBD client has a correct /etc/ceph/rbdmap, i.e. /dev/rbd0 is created during system boot automatically. But adding an entry to /etc/fstab
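
For completeness, the two pieces that usually have to line up (the image name is taken from this thread, the cephx user and keyring path are placeholders):

```
# /etc/ceph/rbdmap -- consumed by rbdmap.service at boot
rbd/influxdb    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- _netdev defers the mount until the network (and the mapped device) is up
/dev/rbd/rbd/influxdb  /var/lib/influxdb  xfs  defaults,_netdev  0  0
```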

Re: [ceph-users] v12.2.11 Luminous released

2019-02-13 Thread Siegfried Höllrigl
Hi ! We have now successfully upgraded (from 12.2.10) to 12.2.11. Seems to be quite stable. (Using RBD, CephFS and RadosGW) Most of our OSDs are still on Filestore. Should we set the "pglog_hardlimit" (as it mus not be unset anymore) ? What exactly will this limit ? Are there any risks ?

[ceph-users] jewel10.2.11 EC pool out a osd, its PGs remap to the osds in the same host

2019-02-13 Thread hnuzhoulin2
Hi, cephers. I am building a ceph EC cluster. When a disk fails, I out it, but all of its PGs remap to OSDs in the same host, whereas I think they should remap to other hosts in the same rack. Test process is: ceph osd pool create .rgw.buckets.data 8192 8192 erasure ISA-4-2
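
The profile behind that pool-create command presumably looks something like the sketch below; to get rack-level failure domains the profile (or the rule generated from it) has to say so explicitly. Jewel parameter names are shown; Luminous and later renamed ruleset-failure-domain to crush-failure-domain:

```
ceph osd erasure-code-profile set ISA-4-2 \
    plugin=isa k=4 m=2 ruleset-failure-domain=rack
ceph osd pool create .rgw.buckets.data 8192 8192 erasure ISA-4-2
```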

Re: [ceph-users] ceph osd commit latency increase over time, until restart

2019-02-13 Thread Alexandre DERUMIER
Hi Igor, Thanks again for helping ! I have upgrade to last mimic this weekend, and with new autotune memory, I have setup osd_memory_target to 8G. (my nvme are 6TB) I have done a lot of perf dump and mempool dump and ps of process to see rss memory at different hours, here the reports for
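
The commands referenced above, for anyone reproducing the measurements (osd.0 is a placeholder; 8589934592 bytes = 8 GiB):

```
# Cap the OSD's memory autotuning target via Mimic's centralized config
ceph config set osd osd_memory_target 8589934592

# Per-daemon breakdown of where the memory goes
ceph daemon osd.0 dump_mempools
ceph daemon osd.0 perf dump
```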

Re: [ceph-users] OSD fails to start (fsck error, unable to read osd superblock)

2019-02-13 Thread Brad Hubbard
A single OSD should be expendable and you should be able to just "zap" it and recreate it. Was this not true in your case? On Wed, Feb 13, 2019 at 1:27 AM Ruben Rodriguez wrote: > > > > On 2/9/19 5:40 PM, Brad Hubbard wrote: > > On Sun, Feb 10, 2019 at 1:56 AM Ruben Rodriguez wrote: > >> > >>
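
A hedged sketch of the zap-and-recreate path for a single bad OSD (id 7 and /dev/sdX are placeholders; this destroys the OSD's data, which the cluster then backfills from the surviving replicas):

```
ceph osd out 7
systemctl stop ceph-osd@7
ceph osd purge 7 --yes-i-really-mean-it
ceph-volume lvm zap --destroy /dev/sdX
ceph-volume lvm create --data /dev/sdX
```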