[ceph-users] Re: KRBD use Luminous upmap feature. Which version of the kernel should I use?

2019-09-16 Thread 潘东元
Thank you for your reply. So, I would like to verify this problem. I created a new VM as a client; its kernel version: [root@localhost ~]# uname -a Linux localhost.localdomain 5.2.9-200.fc30.x86_64 #1 SMP Fri Aug 16 21:37:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux First of all, use command: ceph feat

[ceph-users] Re: 14.2.4 Packages Available

2019-09-16 Thread Ronny Aasen
On 17.09.2019 06:54, Ashley Merrick wrote: Have just noticed there are packages available for 14.2.4.. I know with the whole 14.2.3 release the notes didn't go out until a good day or so later.. but this is not long after the 14.2.3 release..? Was this release even meant to have come out? Mak

[ceph-users] osds xxx have blocked requests > 1048.58 sec / osd.yyy has stuck requests > 67108.9 sec

2019-09-16 Thread Thomas
Hi, I have defined pool hdd, which is used exclusively by the virtual disks of multiple KVMs / LXCs. Yesterday I ran these commands: osdmaptool om --upmap out.txt --upmap-pool hdd; source out.txt. Ceph then started rebalancing this pool. However, since then no KVM / LXC is responding anymore. If I try to sta
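
For context, a sketch of the usual offline-upmap workflow matching the commands quoted above (the map filenames are the thread's own; the backfill throttle at the end is an assumption about what can keep client IO responsive during such a rebalance, not something from the thread):

    ceph osd getmap -o om                                # dump the current osdmap to a file
    osdmaptool om --upmap out.txt --upmap-pool hdd       # compute upmap entries for pool "hdd"
    source out.txt                                       # apply the generated "ceph osd pg-upmap-items" commands
    ceph tell osd.* injectargs '--osd_max_backfills 1'   # assumption: throttle backfill to limit client impact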

[ceph-users] 14.2.4 Packages Available

2019-09-16 Thread Ashley Merrick
Have just noticed there are packages available for 14.2.4.. I know with the whole 14.2.3 release the notes didn't go out until a good day or so later.. but this is not long after the 14.2.3 release..? Was this release even meant to have come out? Makes it difficult for people installing a new n

[ceph-users] Re: Ceph FS not releasing space after file deletion

2019-09-16 Thread Yan, Zheng
please send me crash log On Tue, Sep 17, 2019 at 12:56 AM Guilherme Geronimo wrote: > > Thank you, Yan. > > It took like 10 minutes to execute the scan_links. > I believe the number of Lost+Found decreased in 60%, but the rest of > them are still causing the MDS crash. > > Any other suggestion? >

[ceph-users] Integrate Metadata with ElasticSearch

2019-09-16 Thread tuan dung
Dear all, can you show me the steps to integrate Ceph object metadata with ElasticSearch to improve metadata search performance? Thank you very much. - Br, Dương Tuấn Dũng 0986153686 ___ ceph-users mailing list -- ceph-user
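
For reference, a sketch of the documented RGW metadata-search setup via the elasticsearch sync module (zone names and endpoints below are placeholders, not from the thread): metadata search runs through a dedicated zone whose tier type is elasticsearch.

    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=es-meta \
        --endpoints=http://rgw-es-host:80 --tier-type=elasticsearch
    radosgw-admin zone modify --rgw-zone=es-meta \
        --tier-config=endpoint=http://elastic-host:9200,num_shards=10
    radosgw-admin period update --commit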

[ceph-users] dashboard not working

2019-09-16 Thread solarflow99
I have mimic installed and for some reason the dashboard isn't showing up. I see which mon is listed as active for "mgr", the module is enabled, but nothing is listening on port 8080: # ceph mgr module ls { "enabled_modules": [ "dashboard", "iostat", "status" tcp
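
A couple of things worth checking here, as a hedged sketch rather than a confirmed diagnosis: the Mimic dashboard binds SSL port 8443 by default rather than 8080 unless configured otherwise, and the active mgr reports the URL it is actually serving:

    ceph mgr services                    # shows the URL the active mgr's dashboard is bound to
    ceph mgr module disable dashboard    # a disable/enable cycle forces the module to rebind
    ceph mgr module enable dashboard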

[ceph-users] Fwd: ceph-users Digest, Vol 80, Issue 54

2019-09-16 Thread Rom Freiman
unsubscribe -- Forwarded message - From: Date: Mon, Sep 16, 2019 at 7:22 PM Subject: ceph-users Digest, Vol 80, Issue 54 To: Send ceph-users mailing list submissions to ceph-users@ceph.io To subscribe or unsubscribe via email, send a message with subject or body 'help'

[ceph-users] Re: Ceph FS not releasing space after file deletion

2019-09-16 Thread Guilherme Geronimo
Thank you, Yan. It took like 10 minutes to execute the scan_links. I believe the number of Lost+Found entries decreased by 60%, but the rest of them are still causing the MDS crash. Any other suggestion? =D []'s Arthur (aKa Guilherme Geronimo) On 10/09/2019 23:51, Yan, Zheng wrote: On Wed, Sep 4,
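
For reference, scan_links as apparently run here belongs to the cephfs-data-scan disaster-recovery tooling and must be run against an offline filesystem (all MDS daemons stopped); this is a sketch of the bare invocation, not the thread's full recovery procedure:

    cephfs-data-scan scan_links    # repairs dentry link information behind stray/lost+found entries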

[ceph-users] Re: RGW Passthrough

2019-09-16 Thread Casey Bodley
Hi Robert, So far the cloud tiering features are still in the design stages. We're working on some initial refactoring work to support this abstraction (ie. to either satisfy a request against the local rados cluster, or to proxy it somewhere else). With respect to passthrough/tiering to AWS,

[ceph-users] Re: Using same instance name for rgw

2019-09-16 Thread Eric Choi
bump. anyone? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Ilya Dryomov
On Mon, Sep 16, 2019 at 5:10 PM Thomas Schneider <74cmo...@gmail.com> wrote: > > Wonderbra. > > I found some relevant sessions on 2 of 3 monitor nodes. > And I found some others: > root@ld5505:~# ceph daemon mon.ld5505 sessions | grep 0x40106b84a842a42 > root@ld5505:~# ceph daemon mon.ld5505 sessio

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Thomas Schneider
Wonderbra. I found some relevant sessions on 2 of 3 monitor nodes. And I found some others: root@ld5505:~# ceph daemon mon.ld5505 sessions | grep 0x40106b84a842a42 root@ld5505:~# ceph daemon mon.ld5505 sessions | grep -v luminous [     "MonSession(client.32679861 v1:10.97.206.92:0/1183647891 is op

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Ilya Dryomov
On Mon, Sep 16, 2019 at 4:40 PM Thomas Schneider <74cmo...@gmail.com> wrote: > > Hi, > > thanks for your valuable input. > > Question: > Can I get more information of the 6 clients (those with features > 0x40106b84a842a42), e.g. IP, that allows me to identify it easily? Yes, although it's not inte

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Thomas Schneider
Hi, thanks for your valuable input. Question: Can I get more information about the 6 clients (those with features 0x40106b84a842a42), e.g. the IP, that would allow me to identify them easily? Regards Thomas On 16.09.2019 at 15:56, Paul Emmerich wrote: > Bit 21 in the features bitfield is upmap support > >
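
A hedged sketch of how those IPs can be recovered, following the approach used elsewhere in this thread (and assuming the monitor daemon id matches the short hostname): each MonSession line in the sessions dump carries the client address next to its feature bits, so grepping for the feature value on every monitor reveals the matching clients:

    ceph daemon mon.$(hostname -s) sessions | grep 0x40106b84a842a42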

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Paul Emmerich
Bit 21 in the features bitfield is upmap support Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Mon, Sep 16, 2019 at 3:21 PM Ilya Dryomov wrote: > > On Mon, Sep 16
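
As a quick sanity check of that claim against the feature values quoted elsewhere in this thread (a bash arithmetic sketch; the constants are the thread's own):

    echo $(( (0x3ffddff8eeacfffb >> 21) & 1 ))   # the luminous daemons -> 1 (upmap supported)
    echo $(( (0x40106b84a842a42 >> 21) & 1 ))    # the jewel-flagged clients -> 0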

[ceph-users] Re: KRBD use Luminous upmap feature. Which version of the kernel should I use?

2019-09-16 Thread Ilya Dryomov
On Mon, Sep 16, 2019 at 2:24 PM 潘东元 wrote: > > hi, >my ceph cluster version is Luminous run the kernel version Linux 3.10 >[root@node-1 ~]# ceph features > { > "mon": { > "group": { > "features": "0x3ffddff8eeacfffb", > "release": "luminous", >

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Ilya Dryomov
On Mon, Sep 16, 2019 at 2:20 PM Thomas Schneider <74cmo...@gmail.com> wrote: > > Hello, > > the current kernel with SLES 12SP3 is: > ld3195:~ # uname -r > 4.4.176-94.88-default > > > Assuming that this kernel is not supporting upmap, do you recommend to > use balance mode crush-compat then? Hi Tho

[ceph-users] Re: KRBD use Luminous upmap feature. Which version of the kernel should I use?

2019-09-16 Thread Wesley Peng
Hi on 2019/9/16 20:19, 潘东元 wrote: my ceph cluster version is Luminous run the kernel version Linux 3.10 Please refer this page: https://docs.ceph.com/docs/master/start/os-recommendations/ see [LUMINOUS] section. regards. ___ ceph-users mailing

[ceph-users] KRBD use Luminous upmap feature. Which version of the kernel should I use?

2019-09-16 Thread 潘东元
hi, my ceph cluster version is Luminous, running kernel version Linux 3.10 [root@node-1 ~]# ceph features { "mon": { "group": { "features": "0x3ffddff8eeacfffb", "release": "luminous", "num": 3 } }, "osd": { "group": {

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Thomas Schneider
Hello, the current kernel with SLES 12SP3 is: ld3195:~ # uname -r 4.4.176-94.88-default Assuming that this kernel is not supporting upmap, do you recommend to use balancer mode crush-compat then? Regards Thomas On 16.09.2019 at 11:11, Oliver Freyermuth wrote: > Am 16.09.19 um 11:06 schrieb Ko

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Kees Meijs
Hi Robert, As long as you triple-check permissions on the cache tier (should be the same as your actual storage pool) you should be fine. In our setup I applied this a few times. The first time I made the assumption permissions would be inherited or not applicable but IOPs get redirected towards
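
A hedged illustration of what "the same permissions on the cache tier" can look like (the client name and pool names are hypothetical, not from the thread): since IO is redirected to the cache pool, the client needs the same rights there as on the backing pool.

    ceph auth caps client.cinder mon 'allow r' \
        osd 'allow rwx pool=vms, allow rwx pool=vms-cache'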

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Eikermann, Robert
That would be the case for me. I think all data during one day would fit into the cache, and we could slowly flush back over night (or even over the weekend). But my impression is, I would have to test it. So my initial question: Do I have to stop all VMs before activating the cache? And restart

[ceph-users] Different pools count in ceph -s and ceph osd pool ls

2019-09-16 Thread Fyodor Ustinov
Hi! I created bug https://tracker.ceph.com/issues/41832. Has anyone else encountered this problem? WBR, Fyodor. ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Fyodor Ustinov
Hi! Cache tiering is a great solution if the cache size is larger than the hot data. Even better if the data can cool quietly in the cache. Otherwise, it’s really better not to do this. - Original Message - > From: "Wido den Hollander" > To: "Eikermann, Robert" , ceph-users@ceph.io > S

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Ashley Merrick
I hope the data you're running on the Ceph server isn't important if you're looking to run a cache tier with just 2 SSDs / replication of 2. If your cache tier fails, you basically corrupt most data on the pool below. Also, as Wido said, as much as you may get it to work, I don't think it will give you

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Eikermann, Robert
We have terrible IO performance when multiple VMs do some file IO. They mainly do some Java compilation on those servers. If we have 2 parallel jobs everything is fine, but with 10 jobs we see the warning "HEALTH_WARN X requests are blocked > 32 sec; Y osds have slow requests". I have two enterprise
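
A sketch of the usual way to localize such blocked requests (osd.12 is a placeholder; the daemon commands run on the host carrying that OSD via its admin socket):

    ceph health detail                       # names the OSDs with slow/blocked requests
    ceph daemon osd.12 dump_ops_in_flight    # shows what each stuck op is currently waiting on
    ceph daemon osd.12 dump_historic_ops     # recently completed slow ops, with per-stage timings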

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Wido den Hollander
On 9/16/19 11:36 AM, Eikermann, Robert wrote: > > Hi, > > I’m using Ceph in combination with Openstack. For the “VMs” Pool I’d > like to enable writeback caching tier, like described here: > https://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/. > > Can you explain why? The

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Wesley Peng
Hello on 2019/9/16 17:36, Eikermann, Robert wrote: Should it be possible to do that on a running pool? I tried to do so and immediately all VMs (Linux Ubuntu OS) running on Ceph disks got readonly filesystems. No errors were shown in ceph (but also no traffic arrived after enabling the cache t

[ceph-users] Re: Ceph Day London - October 24 (Call for Papers!)

2019-09-16 Thread Wido den Hollander
Hi, The CFP is ending today for the Ceph Day London on October 24th. If you have a talk you would like to submit, please follow the link below! Wido On 7/18/19 3:43 PM, Wido den Hollander wrote: > Hi, > > We will be having Ceph Day London October 24th! > > https://ceph.com/cephdays/ceph-day-lon

[ceph-users] Re: Activate Cache Tier on Running Pools

2019-09-16 Thread Ashley Merrick
Have you checked that the user/keys your VMs are connecting with have access rights to the cache pool? On Mon, 16 Sep 2019 17:36:38 +0800 Eikermann, Robert wrote: Hi, I’m using Ceph in combination with Openstack. For the “VMs” Pool I’d like to enable writeback caching

[ceph-users] Activate Cache Tier on Running Pools

2019-09-16 Thread Eikermann, Robert
Hi, I'm using Ceph in combination with Openstack. For the "VMs" Pool I'd like to enable writeback caching tier, like described here: https://docs.ceph.com/docs/luminous/rados/operations/cache-tiering/ . Should it be possible to do that on a running pool? I tried to do so and immediately all VM
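
For reference, the sequence on the linked cache-tiering page boils down to the following (pool names here are placeholders, assuming "vms" is the backing pool and "vms-cache" the SSD pool):

    ceph osd tier add vms vms-cache                  # attach the cache pool to the backing pool
    ceph osd tier cache-mode vms-cache writeback     # serve writes from the cache tier
    ceph osd tier set-overlay vms vms-cache          # redirect client traffic to the cache pool
    ceph osd pool set vms-cache hit_set_type bloom   # required so the tier can track hot objects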

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Oliver Freyermuth
On 16.09.19 at 11:06, Konstantin Shalygin wrote: On 9/16/19 3:59 PM, Thomas wrote: I tried to run this command with failure: root@ld3955:/mnt/rbd# ceph osd set-require-min-compat-client luminous Error EPERM: cannot set require_min_compat_client to luminous: 6 connected client(s) look like jewel

[ceph-users] Re: upmap supported in SLES 12SPx

2019-09-16 Thread Konstantin Shalygin
On 9/16/19 3:59 PM, Thomas wrote: I tried to run this command with failure: root@ld3955:/mnt/rbd# ceph osd set-require-min-compat-client luminous Error EPERM: cannot set require_min_compat_client to luminous: 6 connected client(s) look like jewel (missing 0xa20); 19 connected client(s

[ceph-users] upmap supported in SLES 12SPx

2019-09-16 Thread Thomas
Hi, I tried to run this command with failure: root@ld3955:/mnt/rbd# ceph osd set-require-min-compat-client luminous Error EPERM: cannot set require_min_compat_client to luminous: 6 connected client(s) look like jewel (missing 0xa20); 19 connected client(s) look like jewel (missing 0x800
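
For completeness: if the jewel-flagged clients turn out to be recent kernel clients (krbd advertises a jewel-era feature set but has supported upmap since kernel 4.13), the check can be overridden. This is only safe once every flagged client has been confirmed to be such a kernel:

    ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it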