Re: [ceph-users] Right way to delete OSD from cluster?

2019-02-26 Thread Fyodor Ustinov
Hi! Thank you so much! I do not understand why, but your variant really does cause only one rebalance, compared to "osd out". - Original Message - From: "Scottix" To: "Fyodor Ustinov" Cc: "ceph-users" Sent: Wednesday, 30 January, 2019 20:31:32 Subject: Re: [ceph-users] Right way to
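The excerpt does not show the quoted commands, but the sequence that is usually cited for a single data movement is to drain the OSD via its CRUSH weight before removing it. A minimal sketch, assuming OSD id 12 and a Luminous or later cluster:

    ceph osd crush reweight osd.12 0          # drain the OSD; this triggers the only rebalance
    ceph -s                                   # wait until all PGs are active+clean again
    systemctl stop ceph-osd@12                # stop the daemon on its host
    ceph osd out 12                           # no further data movement with crush weight 0
    ceph osd purge 12 --yes-i-really-mean-it  # remove it from crush, osd map and auth (Luminous+)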

Re: [ceph-users] REQUEST_SLOW across many OSDs at the same time

2019-02-26 Thread Massimo Sgaravatto
On Mon, Feb 25, 2019 at 9:26 PM mart.v wrote: > - As far as I understand, the reported 'implicated osds' are only the primary ones. In the log of the OSDs you should also find the relevant pg number, and with this information you can get all the involved OSDs. This might be useful e.g. to
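For reference, a short sketch of going from a pg id found in an OSD log line to the full set of involved OSDs (pg 2.1af and osd 7 are placeholders):

    ceph pg map 2.1af     # prints the up and acting OSD sets for that pg
    ceph pg 2.1af query   # full peering/recovery state if more detail is needed
    ceph osd find 7       # locate the host of one of the listed OSDs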

Re: [ceph-users] CephFS Quotas on Subdirectories

2019-02-26 Thread Ramana Raja
On Tue, Feb 26, 2019 at 1:38 PM, Hendrik Peyerl wrote:
> Hello All,
> I am having some trouble with Ceph quotas not working on subdirectories. I am running with the following directory tree:
> - customer
>   - project
>     - environment
>       - application1
>       - application2
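Independent of the enforcement question discussed in this thread, the quota itself is just an extended attribute on the directory. A minimal sketch, assuming a 300 GiB limit and a client mount at /mnt/cephfs (the path is an example):

    # 300 GiB = 322122547200 bytes
    setfattr -n ceph.quota.max_bytes -v 322122547200 /mnt/cephfs/customer/project/environment
    # read it back to confirm
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/customer/project/environment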

Re: [ceph-users] Files in CephFS data pool

2019-02-26 Thread Hector Martin
On 15/02/2019 22:46, Ragan, Tj (Dr.) wrote: Is there any way to find out which files are stored in a CephFS data pool? I know you can reference the extended attributes, but those are only relevant for files created after the ceph.dir.layout.pool or ceph.file.layout.pool attributes are set - I need
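One common approach (not necessarily the one suggested later in this thread) is to list the objects in the pool and map their inode-number prefixes back to paths. A rough sketch, assuming the pool is called cephfs_data and the filesystem is mounted at /mnt/cephfs; the hex inode is a placeholder taken from an object name like 10000000abc.00000000:

    # collect the distinct (hex) inode prefixes of all objects in the pool
    rados -p cephfs_data ls | cut -d. -f1 | sort -u > /tmp/inodes.hex
    # map one inode back to a path (slow: this walks the whole filesystem)
    find /mnt/cephfs -xdev -inum $((0x10000000abc))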

Re: [ceph-users] ceph migration

2019-02-26 Thread Eugen Block
Hi, Well, I've just reacted to all the text at the beginning of http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way, including the title "the messy way". If the cluster is clean, I see no reason for doing brain surgery on monmaps just
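For completeness, the "messy way" boils down to editing the monmap offline with monmaptool while the monitor is stopped. A rough sketch (mon id 'a' and the address are placeholders; on a healthy cluster the clean add/remove procedure is preferable):

    ceph-mon -i a --extract-monmap /tmp/monmap        # with the mon stopped
    monmaptool --print /tmp/monmap                    # inspect the current entries
    monmaptool --rm a /tmp/monmap                     # drop the old entry
    monmaptool --add a 192.168.1.10:6789 /tmp/monmap  # re-add with the new address
    ceph-mon -i a --inject-monmap /tmp/monmap         # inject into the stopped mon, then start it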

[ceph-users] CephFS Quotas on Subdirectories

2019-02-26 Thread Hendrik Peyerl
Hello All, I am having some trouble with Ceph quotas not working on subdirectories. I am running with the following directory tree:
- customer
  - project
    - environment
      - application1
      - application2
      - applicationx
I set a quota on environment, which works perfectly fine,

Re: [ceph-users] radosgw-admin reshard stale-instances rm experience

2019-02-26 Thread Wido den Hollander
On 2/21/19 9:19 PM, Paul Emmerich wrote: > On Thu, Feb 21, 2019 at 4:05 PM Wido den Hollander wrote: >> This isn't available in 13.2.4, but should be in 13.2.5, so on Mimic you will need to wait. But this might bite you at some point. > Unfortunately it hasn't been backported to Mimic: >
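For anyone landing here later, the commands in question, once the backport is in the release you run, are roughly:

    radosgw-admin reshard stale-instances list   # show leftover bucket index instances
    radosgw-admin reshard stale-instances rm     # clean them up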

Re: [ceph-users] CephFS Quotas on Subdirectories

2019-02-26 Thread Hendrik Peyerl
Thank you Ramana and Luis for your quick reply. @ Ramana: I have a quota of 300G for this specific environment; I don't want to split this into 100G quotas for all the subdirectories as I cannot yet foresee how big they will be. @ Luis: The Client has access to the Environment directory as you

Re: [ceph-users] CephFS Quotas on Subdirectories

2019-02-26 Thread Luis Henriques
Hendrik Peyerl writes: > Thank you Ramana and Luis for your quick reply. > @ Ramana: I have a quota of 300G for this specific environment; I don't want to split this into 100G quotas for all the subdirectories as I cannot yet foresee how big they will be. > @ Luis: The Client has
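One workaround that gets mentioned for kernel clients mounting a subdirectory below the quota is to root the mount at the directory that carries the quota attribute instead, so the client can see the quota inode. A sketch with placeholder monitor, path and credentials:

    mount -t ceph mon1:6789:/customer/project/environment /mnt/env \
          -o name=envclient,secretfile=/etc/ceph/envclient.secret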

Re: [ceph-users] CephFS Quotas on Subdirectories

2019-02-26 Thread Hendrik Peyerl
Thank you Luis, I’m looking forward to a solution. > On 26. Feb 2019, at 13:10, Luis Henriques wrote: > Hendrik Peyerl writes: >> Thank you Ramana and Luis for your quick reply. >> @ Ramana: I have a quota of 300G for this specific environment; I don't want to split this into

Re: [ceph-users] Multi-Site Cluster RGW Sync issues

2019-02-26 Thread Benjamin . Zieglmeier
Hello, We have a two-zone multisite Luminous 12.2.5 cluster. The cluster has been running for about 1 year and has only ~140G of data (~350k objects). We recently added a third zone to the zonegroup to facilitate a migration out of an existing site. Sync appears to be working and
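When chasing this kind of issue, the usual starting points are the sync status commands on the zone that lags (the zone name below is a placeholder):

    radosgw-admin sync status                              # overall metadata/data sync state
    radosgw-admin data sync status --source-zone=us-east   # per-source-zone detail
    radosgw-admin sync error list                          # entries stuck with errors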

Re: [ceph-users] Mimic and cephfs

2019-02-26 Thread Sergey Malinin
I've been using a fresh 13.2.2 install in production for 4 months now without any issues. February 25, 2019 10:17 PM, "Andras Pataki" wrote: > Hi ceph users, > As I understand, CephFS in Mimic had significant issues up to and including version 13.2.2. With some critical patches in Mimic

[ceph-users] luminous 12.2.11 on debian 9 requires nscd?

2019-02-26 Thread Chad W Seys
Hi all, I cannot get my Luminous 12.2.11 MDS servers to start on Debian 9(.8) unless nscd is also installed. Trying to start from the command line:
# /usr/bin/ceph-mds -f --cluster ceph --id mds02.hep.wisc.edu --setuser ceph --setgroup ceph
unable to look up group 'ceph': (34) Numerical
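A quick way to see whether NSS itself can resolve the ceph user and group without nscd (the gid shown is only what the Debian packages typically use):

    getent group ceph          # expect something like: ceph:x:64045:
    getent passwd ceph
    grep -E '^(passwd|group):' /etc/nsswitch.conf   # check which NSS sources are configured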

Re: [ceph-users] CephFS Quotas on Subdirectories

2019-02-26 Thread Luis Henriques
On Tue, Feb 26, 2019 at 03:47:31AM -0500, Ramana Raja wrote: > On Tue, Feb 26, 2019 at 1:38 PM, Hendrik Peyerl wrote: > > Hello All, I am having some trouble with Ceph quotas not working on subdirectories. I am running with the following directory tree: - customer

Re: [ceph-users] faster switch to another mds

2019-02-26 Thread Marc Roos
My two cents: with a default Luminous cluster of 4 nodes and 2 MDS, taking 21 seconds to respond?? Is that not a bit long for a 4-node, 2x MDS cluster? After flushing caches and doing:
[@c03 sbin]# ceph mds fail c
failed mds gid 3464231
[@c04 5]# time ls -l
total 2
...
real 0m21.891s
user 0m0.002s
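If the delay comes from the replacement MDS having to replay the journal from scratch, a standby-replay daemon is the usual lever on Luminous. A sketch of the Luminous-style ceph.conf settings (the section name is an example; later releases replaced these options with a per-filesystem allow_standby_replay flag):

    [mds.c04]
        mds standby replay = true
        mds standby for rank = 0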

Re: [ceph-users] Ceph bluestore performance on 4kn vs. 512e?

2019-02-26 Thread Martin Verges
Hello Oliver, as a 512-byte write on a 512e drive requires the drive to read a 4k block, change the 512 bytes and then write the 4k block back to the disk, it should have a significant performance impact. However, costs are the same, so always choose 4Kn drives. By the way, this might not affect you, as long as you write 4k
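To check what a given drive actually reports (the device path is an example):

    blockdev --getss /dev/sda     # logical sector size: 512 on 512e, 4096 on 4Kn
    blockdev --getpbsz /dev/sda   # physical sector size: 4096 on both 512e and 4Kn
    smartctl -i /dev/sda | grep -i 'sector size'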

Re: [ceph-users] redirect log to syslog and disable log to stderr

2019-02-26 Thread Alex Litvak
Dear Cephers, in Mimic 13.2.2,
ceph tell mgr.* injectargs --log-to-stderr=false
returns an error (no valid command found ...). What is the correct way to inject mgr configuration values? The same command works on mon:
ceph tell mon.* injectargs --log-to-stderr=false
Thank you in advance,
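On Mimic the path of least resistance is the centralized config store rather than injectargs. A sketch (the admin-socket variant is an assumption on my part and only changes the running value; 'a' is a placeholder mgr id):

    ceph config set mgr log_to_stderr false            # persisted in the mon config store (Mimic+)
    ceph daemon mgr.a config set log_to_stderr false   # runtime only, via the mgr admin socket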

Re: [ceph-users] Questions about rbd-mirror and clones

2019-02-26 Thread Jason Dillaman
On Tue, Feb 26, 2019 at 7:49 PM Anthony D'Atri wrote: > Hello again. > I have a couple of questions about rbd-mirror that I'm hoping you can help me with. > 1) http://docs.ceph.com/docs/mimic/rbd/rbd-snapshot/ indicates that protecting is required for cloning. We somehow had the
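For reference, the protect-then-clone sequence that document describes looks roughly like this (pool and image names are placeholders; with the clone v2 format available in Mimic the protect step can be skipped, but that is outside this excerpt):

    rbd snap create rbd/parent@base
    rbd snap protect rbd/parent@base     # required before cloning with the v1 clone format
    rbd clone rbd/parent@base rbd/child
    rbd children rbd/parent@base         # clones still depending on the snapshot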

[ceph-users] Blocked ops after change from filestore on HDD to bluestore on SDD

2019-02-26 Thread Uwe Sauter
Hi, TL;DR: In my Ceph clusters I replaced all OSDs, moving from HDDs of several brands and models to Samsung 860 Pro SSDs, and used the opportunity to switch from filestore to bluestore. Now I'm seeing blocked ops in Ceph and file system freezes inside VMs. Any suggestions? I have two Proxmox
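To narrow down where the ops are stuck, the admin socket of an implicated OSD is the first place to look (osd id 12 is a placeholder):

    ceph health detail                      # lists the OSDs currently implicated
    ceph daemon osd.12 dump_blocked_ops     # ops blocked right now, with their current stage
    ceph daemon osd.12 dump_historic_ops    # recently completed slow ops with per-step timings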

Re: [ceph-users] Intel P4600 3.2TB U.2 form factor NVMe firmware problems causing dead disks

2019-02-26 Thread solarflow99
I knew it. FW updates are very important for SSDs. On Sat, Feb 23, 2019 at 8:35 PM Michel Raabe wrote: > On Monday, February 18, 2019 16:44 CET, David Turner <drakonst...@gmail.com> wrote: > > Has anyone else come across this issue before? Our current theory is that Bluestore is

Re: [ceph-users] Intel P4600 3.2TB U.2 form factor NVMe firmware problems causing dead disks

2019-02-26 Thread Jeff Smith
We had several PostgreSQL servers running these disks from Dell. Numerous failures, including one server that had 3 die at once. Dell claims it is a firmware issue and instructed us to upgrade to QDV1DP15 from QDV1DP12 (I am not sure how these line up to the Intel firmwares). We lost several more
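If the drives were plain Intel-branded units, the vendor tool could show and load firmware; whether it applies to Dell-branded P4600s is an open question, so treat the following purely as a sketch:

    isdct show -intelssd     # list drives and their current firmware version
    isdct load -intelssd 0   # load the firmware bundled with the tool onto drive index 0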

Re: [ceph-users] Configuration about using nvme SSD

2019-02-26 Thread solarflow99
I saw Intel had a demo of a Luminous cluster running on top-of-the-line hardware; they used 2 OSD partitions per device, which gave the best performance. I was interested that they would split them like that, and asked the demo person how they came to that number. I never got a really good answer except that it
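How the demo cluster was actually built is not stated here, but splitting an NVMe device into multiple OSDs does not need manual partitioning with recent ceph-volume releases (the device path is an example):

    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1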