Re: [ceph-users] OSDs too slow to start

2018-06-13 Thread Konstantin Shalygin
On 06/13/2018 08:22 PM, Alfredo Daniel Rezinovsky wrote: I have 3 boxes. And I'm installing a new one. Any box can be lost without data problem. If any SSD is lost I will just reinstall the whole box, still have data duplicates and in about 40 hours the triplicates will be ready. I

Re: [ceph-users] Migrating cephfs data pools and/or mounting multiple filesystems belonging to the same cluster

2018-06-13 Thread Yan, Zheng
On Wed, Jun 13, 2018 at 9:35 PM Alessandro De Salvo wrote: > > Hi, > > > Il 13/06/18 14:40, Yan, Zheng ha scritto: > > On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo > > wrote: > >> Hi, > >> > >> I'm trying to migrate a cephfs data pool to a different one in order to > >> reconfigure with

Re: [ceph-users] How to throttle operations like "rbd rm"

2018-06-13 Thread Paul Emmerich
2018-06-13 23:53 GMT+02:00 : > Hi yao, > > IIRC there is a *sleep* option which is useful when a delete operation is > being done from ceph, sleep_trim or something like that. > You are thinking of "osd_snap_trim_sleep", which is indeed a very helpful option - but not for deletions. It rate
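For reference, a rough sketch of the two knobs being discussed; the pool/image names and values below are placeholders, not recommendations:

    # slow down snapshot trimming at runtime (affects snap trim, not image deletion)
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'
    # "rbd rm" concurrency is governed by rbd_concurrent_management_ops (default 10)
    # and can be lowered per invocation:
    rbd rm mypool/myimage --rbd-concurrent-management-ops 5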

Re: [ceph-users] How to throttle operations like "rbd rm"

2018-06-13 Thread ceph
Hi yao, IIRC there is a *sleep* option which is useful when a delete operation is being done from ceph, sleep_trim or something like that. - Mehmet On 7 June 2018 at 04:11:11 CEST, Yao Guotao wrote: >Hi Jason, > > >Thank you very much for your reply. >I think the RBD trash is a good way.

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
This is actually not too nice, because this remapping is now causing a nearfull -Original Message- From: Dan van der Ster [mailto:d...@vanderster.com] Sent: Wednesday, 13 June 2018 14:02 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class

[ceph-users] GFS2 as RBD on ceph?

2018-06-13 Thread Flint WALRUS
Hi, in fact it highly depends on your underlying ceph installation (both HW and Ceph version). Would you be willing to share more information on your needs and design? How many IOPS are you looking at? Which processors and disks are you using? What ceph version? Which OS? Are you targeting mostly

Re: [ceph-users] cephfs: bind data pool via file layout

2018-06-13 Thread Webert de Souza Lima
Got it Gregory, sounds good enough for us. Thank you all for the help provided. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, Jun 13, 2018 at 2:20 PM Gregory Farnum wrote: > Nah, I would use one Filesystem unless you can’t.

Re: [ceph-users] cephfs: bind data pool via file layout

2018-06-13 Thread Gregory Farnum
Nah, I would use one Filesystem unless you can’t. The backtrace does create another object, but IIRC it's at most one IO per create/rename (on the file). On Wed, Jun 13, 2018 at 1:12 PM Webert de Souza Lima wrote: > Thanks for clarifying that, Gregory. > > As said before, we use the file layout

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
Yes thanks, I know. I will change it when I get an extra node. -Original Message- From: Paul Emmerich [mailto:paul.emmer...@croit.io] Sent: Wednesday, 13 June 2018 16:33 To: Marc Roos Cc: ceph-users; k0ste Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd

Re: [ceph-users] cephfs: bind data pool via file layout

2018-06-13 Thread Webert de Souza Lima
Thanks for clarifying that, Gregory. As said before, we use the file layout to resolve the difference in workloads between those 2 directories in cephfs. Would you recommend using 2 filesystems instead? By doing so, each fs would have its own default data pool accordingly. Regards, Webert
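For anyone following along, binding a directory to a non-default data pool is done through the ceph.dir.layout.pool virtual xattr; the pool name and mount path below are hypothetical:

    # the extra pool must first be attached to the filesystem
    ceph fs add_data_pool cephfs cephfs_ssd_data
    # files created under this directory from now on land in that pool
    setfattr -n ceph.dir.layout.pool -v cephfs_ssd_data /mnt/cephfs/dovecot-index
    getfattr -n ceph.dir.layout /mnt/cephfs/dovecot-index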

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-13 Thread Jason Dillaman
On Wed, Jun 13, 2018 at 8:33 AM, Wladimir Mutel wrote: > On Tue, Jun 12, 2018 at 10:39:59AM -0400, Jason Dillaman wrote: > >> > So, my usual question is - where to look and what logs to enable >> > to find out what is going wrong ? > >> If not overridden, tcmu-runner will default

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-13 Thread Wladimir Mutel
On Tue, Jun 12, 2018 at 10:39:59AM -0400, Jason Dillaman wrote: > > So, my usual question is - where to look and what logs to enable > > to find out what is going wrong ? > If not overridden, tcmu-runner will default to 'client.admin' [1] so > you shouldn't need to add any

Re: [ceph-users] Journal flushed on osd clean shutdown?

2018-06-13 Thread Chris Dunlop
Excellent news - tks! On Wed, Jun 13, 2018 at 11:50:15AM +0200, Wido den Hollander wrote: On 06/13/2018 11:39 AM, Chris Dunlop wrote: Hi, Is the osd journal flushed completely on a clean shutdown? In this case, with Jewel, and FileStore osds, and a "clean shutdown" being: It is, a Jewel

Re: [ceph-users] OSDs too slow to start

2018-06-13 Thread Gregory Farnum
How long is “too long”? 800MB on an SSD should only be a second or three. I’m not sure if that’s a reasonable amount of data; you could try compacting the rocksdb instance etc. But if reading 800MB is noticeable I would start wondering about the quality of your disks as a journal or rocksdb
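If anyone wants to try the compaction suggestion, one way is an offline compaction with ceph-kvstore-tool; a sketch only, assuming a BlueStore OSD with id 3 and the default data path:

    systemctl stop ceph-osd@3
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-3 compact
    systemctl start ceph-osd@3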

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Paul Emmerich
2018-06-13 7:13 GMT+02:00 Marc Roos :
> I just added here 'class hdd'
>
> rule fs_data.ec21 {
>         id 4
>         type erasure
>         min_size 3
>         max_size 3
>         step set_chooseleaf_tries 5
>         step set_choose_tries 100
>         step take default class hdd
>

Re: [ceph-users] cephfs: bind data pool via file layout

2018-06-13 Thread Gregory Farnum
The backtrace object Zheng referred to is used only for resolving hard links or in disaster recovery scenarios. If the default data pool isn’t available you would stack up pending RADOS writes inside of your mds but the rest of the system would continue unless you manage to run the mds out of

Re: [ceph-users] Add a new iSCSI gateway would not update client multipath

2018-06-13 Thread Jason Dillaman
I've never used XenServer, but I'd imagine you would need to do something similar to what is documented here [1]. [1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/ On Wed, Jun 13, 2018 at 5:11 AM, Max Cuttins wrote: > I just realize there is a an error: > > multipath -r > Jun 13
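Roughly what the linked initiator page boils down to on the client side; the portal address is a placeholder:

    # discover and log in to the new gateway's portal, then refresh multipath
    iscsiadm -m discovery -t sendtargets -p 192.168.1.13
    iscsiadm -m node --login
    multipath -r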

Re: [ceph-users] Migrating cephfs data pools and/or mounting multiple filesystems belonging to the same cluster

2018-06-13 Thread Alessandro De Salvo
Hi, Il 13/06/18 14:40, Yan, Zheng ha scritto: On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo wrote: Hi, I'm trying to migrate a cephfs data pool to a different one in order to reconfigure with new pool parameters. I've found some hints but no specific documentation to migrate pools.

Re: [ceph-users] cephfs: bind data pool via file layout

2018-06-13 Thread Webert de Souza Lima
Thank you Zheng. Does that mean that, when using such a feature, our data integrity now relies on both data pools' integrity/availability? We currently use this feature in production for dovecot's index files, so we could store this directory on a pool of SSDs only. The main data pool is made of

Re: [ceph-users] OSDs too slow to start

2018-06-13 Thread Alfredo Daniel Rezinovsky
On 13/06/18 01:03, Konstantin Shalygin wrote: Each node now has 1 SSD with the OS and the BlockDBs and 3 HDDs with bluestore data. Very, very bad idea. When your ssd/nvme is dead you lose your linux box. I have 3 boxes. And I'm installing a new one. Any box can be lost without a data problem.

Re: [ceph-users] cephfs: bind data pool via file layout

2018-06-13 Thread Yan, Zheng
On Wed, Jun 13, 2018 at 3:34 AM Webert de Souza Lima wrote: > > hello, > > is there any performance impact on cephfs for using file layouts to bind a > specific directory in cephfs to a given pool? Of course, such pool is not the > default data pool for this cephfs. > For each file, no matter

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-13 Thread Lenz Grimmer
On 06/13/2018 02:01 PM, Sean Purdy wrote: > Me too. I picked ceph luminous on debian stretch because I thought > it would be maintained going forwards, and we're a debian shop. I > appreciate Mimic is a non-LTS release, I hope issues of debian > support are resolved by the time of the next LTS.

Re: [ceph-users] Migrating cephfs data pools and/or mounting multiple filesystems belonging to the same cluster

2018-06-13 Thread Yan, Zheng
On Wed, Jun 13, 2018 at 7:06 PM Alessandro De Salvo wrote: > > Hi, > > I'm trying to migrate a cephfs data pool to a different one in order to > reconfigure with new pool parameters. I've found some hints but no > specific documentation to migrate pools. > > I'm currently trying with rados export

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-13 Thread Sage Weil
Hi Fabian, On Wed, 13 Jun 2018, Fabian Grünbichler wrote: > On Mon, Jun 04, 2018 at 06:39:08PM +, Sage Weil wrote: > > [adding ceph-maintainers] > > [and ceph-devel] > > > > > On Mon, 4 Jun 2018, Charles Alva wrote: > > > Hi Guys, > > > > > > When will the Ceph Mimic packages for Debian

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Dan van der Ster
See this thread: http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html (Wido -- should we kill the ceph-large list??) On Wed, Jun 13, 2018 at 1:14 PM Marc Roos wrote: > > > I wonder if this is not a

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-13 Thread Sean Purdy
On Wed, 13 Jun 2018, Fabian Grünbichler said: > I hope we find some way to support Mimic+ for Stretch without requiring > a backport of gcc-7+, although it unfortunately seems unlikely at this > point. Me too. I picked ceph luminous on debian stretch because I thought it would be maintained

Re: [ceph-users] *****SPAM***** Re: Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Dan van der Ster
See this thread: http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-June/000113.html (Wido -- should we kill the ceph-large list??) -- dan On Wed, Jun 13, 2018 at 12:27 PM Marc Roos wrote: > > > Shit, I added

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-13 Thread Fabian Grünbichler
On Mon, Jun 04, 2018 at 06:39:08PM +, Sage Weil wrote: > [adding ceph-maintainers] [and ceph-devel] > > On Mon, 4 Jun 2018, Charles Alva wrote: > > Hi Guys, > > > > When will the Ceph Mimic packages for Debian Stretch released? I could not > > find the packages even after changing the

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
I wonder if this is not a bug or something. Adding the class hdd to an all-hdd cluster should not have the result that 60% of the objects are moved around.

    pool fs_data.ec21 id 53
      3866523/6247464 objects misplaced (61.889%)
      recovery io 93089 kB/s, 22 objects/s

-Original Message-

[ceph-users] Migrating cephfs data pools and/or mounting multiple filesystems belonging to the same cluster

2018-06-13 Thread Alessandro De Salvo
Hi, I'm trying to migrate a cephfs data pool to a different one in order to reconfigure with new pool parameters. I've found some hints but no specific documentation to migrate pools. I'm currently trying with rados export + import, but I get errors like these: Write
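For context, the rados-level copy being attempted looks roughly like this (pool and file names are hypothetical); note that CephFS file layouts reference data pools by ID, so copying objects alone is not a complete migration:

    rados -p cephfs_data export /tmp/cephfs_data.dump
    rados -p cephfs_data_new import /tmp/cephfs_data.dump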

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
I just added here 'class hdd':

rule fs_data.ec21 {
        id 4
        type erasure
        min_size 3
        max_size 3
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step choose indep 0 type osd
        step emit
}
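A few read-only commands that help show why adding 'class hdd' reshuffles data even on an all-hdd cluster: the device class gets its own shadow hierarchy with different bucket IDs, which changes the CRUSH calculation. A sketch, using the rule name from above:

    ceph osd crush class ls
    ceph osd crush tree --show-shadow
    ceph osd crush rule dump fs_data.ec21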

Re: [ceph-users] *****SPAM***** Re: Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Konstantin Shalygin
On 06/13/2018 12:06 PM, Marc Roos wrote: Shit, I added this class and now everything starts backfilling (10%). How is this possible, I only have hdd's? This is normal when you change your crush and placement rules. Post your output and I will take a look: ceph osd crush tree, ceph osd crush dump

Re: [ceph-users] *****SPAM***** Re: Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
Shit, I added this class and now everything starts backfilling (10%). How is this possible, I only have hdd's? -Original Message- From: Konstantin Shalygin [mailto:k0...@k0ste.ru] Sent: Wednesday, 13 June 2018 9:26 To: Marc Roos; ceph-users Subject: *SPAM* Re: [ceph-users] Add

Re: [ceph-users] Journal flushed on osd clean shutdown?

2018-06-13 Thread Wido den Hollander
On 06/13/2018 11:39 AM, Chris Dunlop wrote: > Hi, > > Is the osd journal flushed completely on a clean shutdown? > > In this case, with Jewel, and FileStore osds, and a "clean shutdown" being: > It is, a Jewel OSD will flush its journal on a clean shutdown. The flush-journal is no longer

[ceph-users] Journal flushed on osd clean shutdown?

2018-06-13 Thread Chris Dunlop
Hi, Is the osd journal flushed completely on a clean shutdown? In this case, with Jewel, and FileStore osds, and a "clean shutdown" being: systemctl stop ceph-osd@${osd} I understand it's documented practice to issue a --flush-journal after shutting down an osd if you're intending to
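For completeness, the manual flush referred to here looks like this for a FileStore OSD, assuming osd id 2 as an example; per the reply earlier in this digest it may no longer be required on a plain clean shutdown:

    systemctl stop ceph-osd@2
    ceph-osd -i 2 --flush-journal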

Re: [ceph-users] Add a new iSCSI gateway would not update client multipath

2018-06-13 Thread Max Cuttins
I just realized there is an error:

    multipath -r
    Jun 13 11:02:27 | rbd0: HDIO_GETGEO failed with 25
    reload: mpatha (360014051b4fb8c6384545b7ae7d5142e) undef LIO-ORG ,TCMU device
    size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=undef
    |-+- policy='queue-length 0'

Re: [ceph-users] Crush maps : split the root in two parts on an OSD node with same disks ?

2018-06-13 Thread Hervé Ballans
Thanks Janne for your reply. Here are the reasons which made me think of "physically" splitting the pools: 1) Different usage of the pools: the first one will be used for user home directories, with intensive read/write access, and the second one will be used for data storage/backup, with

[ceph-users] Add a new iSCSI gateway would not update client multipath

2018-06-13 Thread Max Cuttins
Hi everybody, maybe I'm missing something, but multipath is not adding new iSCSI gateways. I installed 2 gateways and tested them on a client. Everything worked fine. After that I decided to complete the installation and create a 3rd gateway. But none of the iSCSI initiator clients update their number of gateways.

Re: [ceph-users] iSCSI rookies questions

2018-06-13 Thread Max Cuttins
Hi Marc, thanks for the reply. I knew the RBDSR project very well. I used to be one of the first contributors to the project: https://github.com/rposudnevskiy/RBDSR/graphs/contributors I rewrote all the installation scripts to do it easily and allow multiple installations across all XenClusters in a few

[ceph-users] large omap object

2018-06-13 Thread stephan schultchen
Hello, I am running a ceph 13.2.0 cluster exclusively for radosgw / s3. I only have one big bucket, and the cluster is currently in a warning state:

    cluster:
      id:     d605c463-9f1c-4d91-a390-a28eedb21650
      health: HEALTH_WARN
              13 large omap objects

I tried to google it, but I
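A hedged sketch of the usual first steps for this warning on an RGW-only cluster; the bucket name and shard count are placeholders:

    ceph health detail                    # names the pool holding the large omap objects
    radosgw-admin bucket limit check      # shows per-bucket index fill status
    radosgw-admin reshard add --bucket=big-bucket --num-shards=128
    radosgw-admin reshard process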

Re: [ceph-users] Installing iSCSI support

2018-06-13 Thread Lenz Grimmer
On 06/12/2018 07:14 PM, Max Cuttins wrote: > it's an honor for me to contribute to the main repo of ceph. We appreciate your support! Please take a look at http://docs.ceph.com/docs/master/start/documenting-ceph/ for guidance on how to contribute to the documentation. > Just a thought: is it wise

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Konstantin Shalygin
On 06/13/2018 09:01 AM, Marc Roos wrote: Yes but I already have some sort of test cluster with data in it. I don’t think there are commands to modify existing rules that are being used by pools. And the default replicated_ruleset doesn’t have a class specified. I also have an erasure code rule

Re: [ceph-users] iSCSI rookies questions

2018-06-13 Thread Marc Schöchlin
Hi Max, just a side note: we are using a fork of RBDSR (https://github.com/vico-research-and-consulting/RBDSR) to connect XenServer 7.2 Community to RBDs directly using rbd-nbd. After a bit of hacking this works pretty well: direct RBD creation from the storage repo, live migration between
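For readers unfamiliar with the rbd-nbd path mentioned here, attaching an image that way looks roughly like this (pool and image names invented):

    rbd create rbd/xen-vdi-1 --size 102400
    rbd-nbd map rbd/xen-vdi-1        # prints the nbd device, e.g. /dev/nbd0
    rbd-nbd list-mapped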

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
Yes but I already have some sort of test cluster with data in it. I don’t think there are commands to modify existing rules that are being used by pools. And the default replicated_ruleset doesn’t have a class specified. I also have an erasure code rule without any class definition for the
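Marc is right that a rule in use is not edited in place; the usual workaround is to create a new class-aware rule and point the pool at it. A sketch with hypothetical names:

    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd pool set mypool crush_rule replicated_hdd

For erasure-coded pools the equivalent would be a device-class-aware erasure profile plus ceph osd crush rule create-erasure, but expect data movement either way.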