Re: [ceph-users] Growing an SSD cluster with different disk sizes

2018-03-19 Thread Christian Balzer
Hello, On Mon, 19 Mar 2018 10:39:02 -0400 Mark Steffen wrote: > At the moment I'm just testing things out and have no critical data on > Ceph. I'm using some Intel DC S3510 drives at the moment; these may not be > optimal but I'm just trying to do some testing and get my feet wet with > Ceph

Re: [ceph-users] Backfilling on Luminous

2018-03-19 Thread Pavan Rallabhandi
David, Pretty sure you must be aware of the filestore random split on existing OSD PGs, `filestore split rand factor`; maybe you could try that too. Thanks, -Pavan. From: ceph-users on behalf of David Turner Date: Monday, March 19,

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
On Monday, March 19, 2018 at 18:45, Nicolas Huillard wrote: > > Then I tried to reduce the number of MDS, from 4 to 1, > On Monday, 19 March 2018 at 19:15 +0300, Sergey Malinin wrote: > Forgot to mention, that in my setup the issue was gone when I had > reverted back to single MDS and switched

[ceph-users] Multi Networks Ceph

2018-03-19 Thread Lazuardi Nasution
Hi, What is the best approach when the network segments differ between OSD-to-OSD, OSD-to-MON, and OSD-to-client traffic due to some networking policy? What should I put for public_addr and cluster_addr? Is it simply "as is", depending on the connected network segments of each OSD and MON? If it is not
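
For reference, a minimal ceph.conf sketch of the kind of split-network setup being asked about; the subnets and the per-daemon override are made-up illustrations, not settings from this thread:

    [global]
    public network = 10.0.1.0/24      # client- and MON-facing traffic
    cluster network = 10.0.2.0/24     # OSD-to-OSD replication/heartbeat traffic

    [osd.0]
    # per-daemon overrides, only needed if a host cannot derive its
    # addresses from the subnets above
    public addr = 10.0.1.21
    cluster addr = 10.0.2.21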

Re: [ceph-users] Backfilling on Luminous

2018-03-19 Thread David Turner
Sorry for being away. I set all of my backfilling to VERY slow settings over the weekend and things have been stable, but incredibly slow (1% recovery from 3% misplaced to 2% all weekend). I'm back on it now and well rested. @Caspar, SWAP isn't being used on these nodes and all of the affected
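
As a rough illustration of the sort of throttling being described (not the exact values David used), backfill and recovery can be slowed cluster-wide with something like:

    $ ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_sleep 0.1'
    # larger osd_recovery_sleep values slow recovery further at the cost of longer backfill times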

[ceph-users] What about Petasan?

2018-03-19 Thread Max Cuttins
Hi everybody, has anybody used Petasan? The website claims it offers Ceph with ready-to-use iSCSI. Has anybody already tried it? Experience? Thoughts? Reviews? Doubts? Pros? Cons? Thanks for any thoughts. Max

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Sergey Malinin
Forgot to mention that in my setup the issue was gone when I reverted back to a single MDS and switched dirfrag off. On Monday, March 19, 2018 at 18:45, Nicolas Huillard wrote: > Then I tried to reduce the number of MDS, from 4 to 1,
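
For context, reverting to a single active MDS on a Luminous-era cluster looks roughly like the following; the filesystem name is hypothetical and the dirfrag flag name should be checked against your release's documentation:

    $ ceph fs set cephfs max_mds 1
    $ ceph mds deactivate cephfs:1              # stop the now-surplus rank (Luminous-era syntax)
    $ ceph fs set cephfs allow_dirfrags false   # toggle directory fragmentation off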

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
On Monday, 19 March 2018 at 15:30 +0300, Sergey Malinin wrote: > Default for mds_log_events_per_segment is 1024; in my setup I ended > up with 8192. > I calculated that value as IOPS / log segments * 5 seconds (afaik > the MDS performs journal maintenance once every 5 seconds by default). I tried 4096

Re: [ceph-users] Radosgw ldap user authentication issues

2018-03-19 Thread Benjeman Meekhof
Hi Marc, You mentioned following the instructions 'except for doing this ldap token'. Do I read that correctly that you did not generate / use an LDAP token with your client? I think that is a necessary part of triggering the LDAP authentication (Sections 3.2 and 3.3 of the doc you linked). I
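
For reference, the LDAP token referred to is generated with the radosgw-token utility per the documented procedure, roughly as below (credentials are placeholders); the resulting base64 string is then used as the S3 access key by the client:

    $ export RGW_ACCESS_KEY_ID="ldap-username"
    $ export RGW_SECRET_ACCESS_KEY="ldap-password"
    $ radosgw-token --encode --ttype=ldap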

Re: [ceph-users] Deep Scrub distribution

2018-03-19 Thread Jonathan Proulx
On Mon, Mar 05, 2018 at 12:55:52PM -0500, Jonathan D. Proulx wrote: :Hi All, : :I've recently noticed my deep scrubs are EXTREMELY poorly :distributed. They are staying within the 18->06 local-time start/stop :window but are not distributed over enough days or well distributed :over the range of
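
For context, the scrub window and intervals being discussed are controlled by settings along these lines (the hours shown match the 18->06 window described above; the intervals are the usual defaults), and individual PGs can also be deep-scrubbed by hand to help spread the schedule:

    osd scrub begin hour = 18          # local-time window start
    osd scrub end hour = 6             # local-time window end
    osd scrub min interval = 86400     # 1 day
    osd scrub max interval = 604800    # 7 days
    osd deep scrub interval = 604800   # 7 days

    $ ceph pg deep-scrub <pgid>        # manually trigger a deep scrub of one PG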

Re: [ceph-users] Growing an SSD cluster with different disk sizes

2018-03-19 Thread Mark Steffen
At the moment I'm just testing things out and have no critical data on Ceph. I'm using some Intel DC S3510 drives at the moment; these may not be optimal but I'm just trying to do some testing and get my feet wet with Ceph (since trying it out with 9 OSDs on 2TB spinners about 4 years ago). I

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Sergey Malinin
Default for mds_log_events_per_segment is 1024; in my setup I ended up with 8192. I calculated that value as IOPS / log segments * 5 seconds (afaik the MDS performs journal maintenance once every 5 seconds by default). On Monday, March 19, 2018 at 15:20, Nicolas Huillard wrote: > I can't find any

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
On Monday, 19 March 2018 at 10:01, Sergey Malinin wrote: > I experienced the same issue and was able to reduce metadata writes > by raising mds_log_events_per_segment to > its original value multiplied several times. I changed it from 1024 to 4096: * rsync status (1 line per file) scrolls
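
For reference, a change like the one described can be applied at runtime and persisted roughly as follows (the MDS name is a placeholder):

    $ ceph tell mds.<name> injectargs '--mds_log_events_per_segment 4096'

    # and in ceph.conf so the value survives restarts:
    [mds]
    mds log events per segment = 4096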

Re: [ceph-users] Failed to add new OSD with bluestores

2018-03-19 Thread Alfredo Deza
On Mon, Mar 19, 2018 at 7:29 AM, ST Wong (ITSC) wrote: > Hi, > > > > I tried to extend my experimental cluster with more OSDs running CentOS 7 > but failed with a warning and an error using the following steps: > > > > $ ceph-deploy install --release luminous newosd1 > # no error > >

[ceph-users] Failed to add new OSD with bluestores

2018-03-19 Thread ST Wong (ITSC)
Hi, I tried to extend my experimental cluster with more OSDs running CentOS 7 but failed with a warning and an error using the following steps: $ ceph-deploy install --release luminous newosd1 # no error $ ceph-deploy osd create newosd1 --data /dev/sdb cut here
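
The error output itself is cut off above, but one frequent cause of ceph-deploy osd create failures is leftover partition or LVM metadata on the target device; a zap-then-create sequence (ceph-deploy 2.x syntax, host and device names as in the post) sometimes helps:

    $ ceph-deploy disk zap newosd1 /dev/sdb            # wipe old partition tables / LVM metadata
    $ ceph-deploy osd create --data /dev/sdb newosd1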

Re: [ceph-users] Growing an SSD cluster with different disk sizes

2018-03-19 Thread Christian Balzer
Hello, On Sun, 18 Mar 2018 10:59:15 -0400 Mark Steffen wrote: > Hello, > > I have a Ceph newb question I would appreciate some advice on > > Presently I have 4 hosts in my Ceph cluster, each with 4 480GB eMLC drives > in them. These 4 hosts have 2 more empty slots each. > A lot of the
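
For context on mixing drive sizes: each OSD gets a CRUSH weight proportional to its capacity (normally set automatically at OSD creation), which can be inspected and, if needed, adjusted; the OSD id and weight below are made up for illustration:

    $ ceph osd df tree                       # per-OSD CRUSH weight, size and utilisation
    $ ceph osd crush reweight osd.16 0.87    # weight is expressed in TiB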

Re: [ceph-users] Disk write cache - safe?

2018-03-19 Thread Frédéric Nass
Hi Steven, On 16/03/2018 at 17:26, Steven Vacaroaia wrote: Hi All, Can someone please confirm that, for a good performance/safety compromise, the following would be the best settings (id 0 is SSD, id 1 is HDD)? Alternatively, any suggestions / shared configurations / advice would be
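
As a point of reference for the write-cache question, the volatile cache on an individual drive can be queried and disabled with hdparm (device name is a placeholder; whether disabling it is appropriate depends on the drive's power-loss protection):

    $ hdparm -W /dev/sdX        # query the drive's volatile write cache state
    $ hdparm -W 0 /dev/sdX      # disable it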

Re: [ceph-users] Memory leak in Ceph OSD?

2018-03-19 Thread Konstantin Shalygin
We don't run compression as far as I know, so that wouldn't be it. We do actually run a mix of bluestore & filestore - due to the rest of the cluster predating a stable bluestore by some amount. 12.2.2 -> 12.2.4 on 2018/03/10: I don't see an increase in memory usage. No compression of
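
For anyone chasing a similar issue, per-OSD memory accounting can be inspected via the admin socket; the OSD id below is just an example:

    $ ceph daemon osd.0 dump_mempools    # bluestore cache, pglog, osdmap, etc.
    $ ceph tell osd.0 heap stats         # tcmalloc heap statistics, if built with tcmalloc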

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Sergey Malinin
I experienced the same issue and was able to reduce metadata writes by raising mds_log_events_per_segment to its original value multiplied several times. From: ceph-users on behalf of Nicolas Huillard

Re: [ceph-users] Prometheus RADOSGW usage exporter

2018-03-19 Thread Konstantin Shalygin
Hi Berant, I've created a prometheus exporter that scrapes the RADOSGW Admin Ops API and exports the usage information for all users and buckets. This is my first prometheus exporter, so if anyone has feedback I'd greatly appreciate it. I've tested it against Hammer, and will shortly test against
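
Note that the usage data such an exporter scrapes is only populated when the RGW usage log is enabled, e.g. in the gateway's ceph.conf section (the section name is a placeholder); radosgw-admin gives a quick sanity check that usage is being recorded:

    [client.rgw.gateway1]
    rgw enable usage log = true

    $ radosgw-admin usage show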

Re: [ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Gregory Farnum
The MDS has to write to its local journal when clients open files, in case of certain kinds of failures. I guess it doesn't distinguish between read-only (when it could *probably* avoid writing them down? Although it's not as simple a thing as it sounds) and writeable file opens. So every file

[ceph-users] Huge amount of cephfs metadata writes while only reading data (rsync from storage, to single disk)

2018-03-19 Thread Nicolas Huillard
Hi all, I'm experimenting with a new little storage cluster. I wanted to take advantage of the weekend to copy all data (1TB, 10M objects) from the cluster to a single SATA disk. I expected to saturate the SATA disk while writing to it, but the storage cluster actually saturates its network
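
A quick way to confirm where the writes land is to watch per-pool I/O and the MDS performance counters while the rsync runs; the MDS name is a placeholder:

    $ ceph osd pool stats               # compare client I/O on the cephfs data vs. metadata pools
    $ ceph daemonperf mds.<name>        # live MDS perf counters (journal, request rates, ...)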

Re: [ceph-users] Reducing pg_num for a pool

2018-03-19 Thread Gregory Farnum
Maybe (likely?) in Mimic. Certainly the next release. Some code has been written but the reason we haven’t done this before is the number of edge cases involved, and it’s not clear how long rounding those off will take. -Greg On Fri, Mar 16, 2018 at 2:38 PM Ovidiu Poncea

Re: [ceph-users] Syslog logging date/timestamp

2018-03-19 Thread Gregory Farnum
Mostly, this exists because syslog is just receiving our raw strings, and those embed timestamps deep in the code. So we *could* strip them out for syslog, but we'd still have paid the cost of generating them, and as you can see we have much higher precision than that syslog output, plus

Re: [ceph-users] HA for Vms with Ceph and KVM

2018-03-19 Thread Gregory Farnum
You can explore the rbd exclusive lock functionality if you want to do this, but it's not typically advised because using it makes moving live VMs across hosts harder, IIUC. -Greg On Sat, Mar 17, 2018 at 7:47 PM Egoitz Aurrekoetxea wrote: > Good morning, > > > Does some kind
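
For reference, the exclusive-lock feature mentioned is enabled per image; the pool and image names below are placeholders:

    $ rbd feature enable rbd/vm-disk-1 exclusive-lock
    $ rbd info rbd/vm-disk-1            # the 'features' line should now list exclusive-lock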