Re: [ceph-users] pgs stuck unclean after reweight

2016-07-19 Thread M Ranga Swami Reddy
Ok... try the same with osd.32 and osd.13, one by one (restart osd.32 first and wait to see whether any rebalance happens; if nothing changes, then do the same for osd.13). Thanks Swami On Wed, Jul 20, 2016 at 11:59 AM, Goncalo Borges wrote: > Hi Swami. > Did not make any difference. > Cheers > G. > On 07/20/2016
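
For reference, a minimal sketch of restarting those OSDs one at a time while watching for rebalancing, assuming a systemd-based Jewel install (unit names differ on other init systems):

    systemctl restart ceph-osd@32    # restart the first OSD
    ceph -s                          # check cluster health and recovery state
    ceph -w                          # watch for any rebalance/recovery activity
    # only if nothing changes once osd.32 has settled:
    systemctl restart ceph-osd@13
    ceph -s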

Re: [ceph-users] pgs stuck unclean after reweight

2016-07-19 Thread Goncalo Borges
Hi Swami. Did not make any difference. Cheers G. On 07/20/2016 03:31 PM, M Ranga Swami Reddy wrote: can you restart osd.32 and check the status? Thanks Swami On Wed, Jul 20, 2016 at 9:12 AM, Goncalo Borges wrote: Hi All... Today we had a warning regarding 8 near-full OSDs. Looking to th

[ceph-users] CephFS Samba VFS RHEL packages

2016-07-19 Thread Blair Bethwaite
Hi all, We've started a CephFS Samba PoC on RHEL but just noticed the Samba Ceph VFS doesn't seem to be included with Samba on RHEL, or we're not looking in the right place. Trying to avoid needing to build Samba from source if possible. Any pointers appreciated. -- Cheers, ~Blairo _
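
A hedged sketch of what an smb.conf share using the vfs_ceph module looks like, in case it does turn out to be available for this PoC; the share name, cephx user and path are placeholders, not details from the thread:

    [cephfs]
        path = /                                  ; path inside CephFS, not a local path
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba                      ; cephx client id, i.e. client.samba
        read only = no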

Re: [ceph-users] pgs stuck unclean after reweight

2016-07-19 Thread M Ranga Swami Reddy
Can you restart osd.32 and check the status? Thanks Swami On Wed, Jul 20, 2016 at 9:12 AM, Goncalo Borges wrote: > Hi All... > Today we had a warning regarding 8 near-full OSDs. Looking at the OSD occupation, 3 of them were above 90%. In order to solve the situation, I've decided to reweig

Re: [ceph-users] pgs stuck unclean after reweight

2016-07-19 Thread Goncalo Borges
I think I understood the source of the problem: 1. This is the original pg mapping before reweighing: # egrep "(^6.e2\s|^6.4\s|^5.24\s|^5.306\s)" /tmp/pg_dump.1 6.e2 12732 0 0 0 0 45391468553 3084 3084 active+clean 2016-07-19 19:06:56.622185 1005'234027 1005:281726
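
A minimal sketch of how such before/after mappings can be captured and compared; the file names simply follow the /tmp/pg_dump.1 naming used above:

    ceph pg dump > /tmp/pg_dump.1                   # snapshot before the reweight
    ceph osd crush reweight osd.1 2.67719
    ceph pg dump > /tmp/pg_dump.2                   # snapshot after the reweight
    egrep "^6.e2\s" /tmp/pg_dump.1 /tmp/pg_dump.2   # compare the up/acting sets of one pg
    ceph pg map 6.e2                                # current mapping straight from the cluster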

Re: [ceph-users] pgs stuck unclean after reweight

2016-07-19 Thread Goncalo Borges
Hi KK... Thanks. I did set the 'sortbitwise' flag since that was mentioned in the release notes. However, I do not understand how this relates to this problem. Can you give a bit more info? Cheers and Thanks Goncalo On 07/20/2016 02:10 PM, K K wrote: Hi, Goncalo. Did you set the sortbitwise flag
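
For reference, a sketch of checking and setting the flag mentioned in the Jewel release notes (commands only; this does not claim the flag is the cause of the stuck pgs):

    ceph osd dump | grep flags    # shows whether 'sortbitwise' is among the osdmap flags
    ceph osd set sortbitwise      # set it, as recommended in the Jewel release notes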

[ceph-users] pgs stuck unclean after reweight

2016-07-19 Thread Goncalo Borges
Hi All... Today we had a warning regarding 8 near-full OSDs. Looking at the OSD occupation, 3 of them were above 90%. In order to solve the situation, I've decided to reweight those first using:
    ceph osd crush reweight osd.1 2.67719
    ceph osd crush reweight osd.26 2.67719
    ceph osd
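
A minimal sketch, assuming one wants to see which pgs are left unclean after such a reweight and how full the OSDs are:

    ceph health detail | grep -i stuck    # summary of stuck pgs
    ceph pg dump_stuck unclean            # pgs stuck unclean with their up/acting OSDs
    ceph osd df                           # per-OSD utilisation, to confirm the near-full OSDs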

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-19 Thread m13913886148
But the 0.94 version works fine (in fact, IO was greatly improved). This problem occurs only in version 10.x. Like you said, the IO is mostly going to the cold storage, and IO is slow. What can I do to improve the IO performance of cache tiering in version 10.x? How does cache tiering w

Re: [ceph-users] Too much pgs backfilling

2016-07-19 Thread Somnath Roy
The settings are per OSD, and the messages you are seeing are aggregated across the cluster, with multiple OSDs doing backfill (working on multiple PGs in parallel). Thanks & Regards Somnath From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jimmy Goffaux Sent: Tuesday, July 19, 2
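
Since the thread is about throttling backfill, a sketch of changing the relevant per-OSD values at runtime with injectargs; the values are examples only, and a change that should survive restarts still belongs in ceph.conf:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    ceph -s    # the aggregated backfill activity across all OSDs is what shows up here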

[ceph-users] Too much pgs backfilling

2016-07-19 Thread Jimmy Goffaux
Hello, this is my configuration:
    "osd_max_backfills": "1"
    "osd_recovery_threads": "1"
    "osd_recovery_max_active": "1"
    "osd_recovery_op_priority": "3"
    "osd_client_op_priority": "63"
I ran the command 'ceph osd crush tunables optimal' after upgrading from Hammer to Jewel.
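
As a side note on the 'ceph osd crush tunables optimal' step, the active tunables profile and the resulting CRUSH map can be inspected as follows (a generic sketch, not specific to this cluster):

    ceph osd crush show-tunables                           # dump the tunables currently in effect
    ceph osd getcrushmap -o /tmp/crushmap.bin
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt    # decompile the CRUSH map for inspection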

Re: [ceph-users] Multi-device BlueStore testing

2016-07-19 Thread Somnath Roy
I don't think ceph-disk has support for separating block.db and block.wal yet (?). You need to create the cluster manually by running mkfs. Or, if you have the old mkcephfs script (which is sadly deprecated), you can point it at the db/wal paths and it will create the cluster for you. I am using that to configure
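
A rough sketch of the manual route described above, loosely following the blog post linked in the original message; the option names are the experimental Jewel-era BlueStore settings and the device paths are placeholders:

    # ceph.conf (BlueStore was still experimental in Jewel)
    [osd]
        enable experimental unrecoverable data corrupting features = bluestore rocksdb
        osd objectstore = bluestore
        bluestore block db path = /dev/nvme0n1p1    # placeholder partition for RocksDB
        bluestore block wal path = /dev/nvme0n1p2   # placeholder partition for the WAL

    # then initialise the OSD by hand, for example:
    ceph-osd -i 0 --mkfs --mkkey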

[ceph-users] Multi-device BlueStore testing

2016-07-19 Thread Stillwell, Bryan J
I would like to do some BlueStore testing using multiple devices like mentioned here: https://www.sebastien-han.fr/blog/2016/05/04/Ceph-Jewel-configure-BlueStore-with-multiple-devices/ However, si

Re: [ceph-users] CephFS write performance

2016-07-19 Thread Gregory Farnum
On Tue, Jul 19, 2016 at 9:39 AM, Patrick Donnelly wrote: > On Tue, Jul 19, 2016 at 10:25 AM, Fabiano de O. Lucchese wrote: >> I configured the cluster to replicate data twice (3 copies), so these numbers fall within my expectations. So far so good, but here comes the issue: I configured

Re: [ceph-users] CephFS write performance

2016-07-19 Thread John Spray
On Tue, Jul 19, 2016 at 3:25 PM, Fabiano de O. Lucchese wrote: > Hi, folks. > > I'm conducting a series of experiments and tests with CephFS and have been > facing a behavior over which I can't seem to have much control. > > I configured a 5-node Ceph cluster running on enterprise servers. Each >

Re: [ceph-users] CephFS write performance

2016-07-19 Thread Patrick Donnelly
On Tue, Jul 19, 2016 at 10:25 AM, Fabiano de O. Lucchese wrote: > I configured the cluster to replicate data twice (3 copies), so these numbers fall within my expectations. So far so good, but here comes the issue: I configured CephFS and mounted a share locally on one of my servers. When

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-19 Thread Alex Gorbachev
On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов wrote: > Guys, > > This bug is hitting me constantly, may be once per several days. Does > anyone know is there a solution already? I see there is a fix available, and am waiting for a backport to a longterm kernel: https://lkml.org/lkml/2016/7/1

[ceph-users] CephFS write performance

2016-07-19 Thread Fabiano de O. Lucchese
Hi, folks. I'm conducting a series of experiments and tests with CephFS and have been facing a behavior over which I can't seem to have much control. I configured a 5-node Ceph cluster running on enterprise servers. Each server has 10 x 6TB HDDs and 2 x 800GB SSDs. I configured the SSDs as a R
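
For anyone trying to reproduce this, a minimal sketch of mounting CephFS with the kernel client and running a crude streaming-write test; the monitor address, secret file and sizes are placeholders, and the original poster's exact benchmark is not shown in the truncated message:

    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    dd if=/dev/zero of=/mnt/cephfs/testfile bs=4M count=2048 oflag=direct    # ~8 GB sequential write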

[ceph-users] Storage tiering in Ceph

2016-07-19 Thread Andrey Ptashnik
Hi Team, Is there any way to implement storage tiering in Ceph Jewel? I've read about placing different pools on different classes of hardware; however, is there any automation possible in Ceph that will promote data from slow hardware to fast and back? Regards, Andrey __
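
The closest built-in mechanism in Jewel is cache tiering; a rough sketch of putting a fast pool in front of a slow one is below (pool names and the size limit are placeholders, and whether cache tiering fits depends heavily on the workload, as other threads in this digest discuss):

    ceph osd tier add slow-pool fast-pool
    ceph osd tier cache-mode fast-pool writeback
    ceph osd tier set-overlay slow-pool fast-pool
    ceph osd pool set fast-pool hit_set_type bloom
    ceph osd pool set fast-pool target_max_bytes 100000000000    # ~100 GB, example value only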

Re: [ceph-users] Cache Tier configuration

2016-07-19 Thread Christian Balzer
Hello, On Tue, 19 Jul 2016 15:15:55 +0200 Mateusz Skała wrote: > Hello, > > > -Original Message- > > From: Christian Balzer [mailto:ch...@gol.com] > > Sent: Wednesday, July 13, 2016 4:03 AM > > To: ceph-users@lists.ceph.com > > Cc: Mateusz Skała > > Subject: Re: [ceph-users] Cache Tier

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-19 Thread Christian Balzer
Hello, On Tue, 19 Jul 2016 12:24:01 +0200 Oliver Dzombic wrote: > Hi, > I have in my ceph.conf under the [OSD] section: > osd_tier_promote_max_bytes_sec = 1610612736 > osd_tier_promote_max_objects_sec = 2 > #ceph --show-config is showing: > osd_tier_promote_max_objects_sec = 5242880

Re: [ceph-users] Cache Tier configuration

2016-07-19 Thread Mateusz Skała
Hello, > -Original Message- > From: Christian Balzer [mailto:ch...@gol.com] > Sent: Wednesday, July 13, 2016 4:03 AM > To: ceph-users@lists.ceph.com > Cc: Mateusz Skała > Subject: Re: [ceph-users] Cache Tier configuration > > > Hello, > > On Tue, 12 Jul 2016 11:01:30 +0200 Mateusz Skał

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
+1, I agree. Thanks Swami On Tue, Jul 19, 2016 at 4:57 PM, Lionel Bouton wrote: > Hi, > On 19/07/2016 13:06, Wido den Hollander wrote: >>> On 19 July 2016 at 12:37, M Ranga Swami Reddy wrote: >>> Thanks for the correction... so even if one OSD reaches 95% full, the total ceph

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
> That should be a config option, since reading while writes still block is also a danger. Multiple clients could read the same object, perform an in-memory change and their write will block. Now, which client will 'win' after the full flag has been removed? That could lead to data corruption

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Lionel Bouton
Hi, On 19/07/2016 13:06, Wido den Hollander wrote: >> On 19 July 2016 at 12:37, M Ranga Swami Reddy wrote: >> Thanks for the correction... so even if one OSD reaches 95% full, the total ceph cluster IO (R/W) will be blocked... Ideally read IO should work... > That should be a config op

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Wido den Hollander
> On 19 July 2016 at 12:37, M Ranga Swami Reddy wrote: > Thanks for the correction... so even if one OSD reaches 95% full, the total ceph cluster IO (R/W) will be blocked... Ideally read IO should work... That should be a config option, since reading while writes still block is also a

Re: [ceph-users] ceph admin socket from non root

2016-07-19 Thread Stefan Priebe - Profihost AG
On 18.07.2016 at 20:14, Gregory Farnum wrote: > I'm not familiar with how it's set up, but skimming and searching through the code I'm not seeing anything, no. We've got a chown but no chmod. That's odd ;-) How do all the people do their monitoring? Running as root? > That's a reasonable feat
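
Lacking a chmod in the code, one common workaround is to let the monitoring user reach the admin socket through a narrowly scoped sudo rule; a sketch, with the user name and socket path as examples only:

    # /etc/sudoers.d/ceph-monitoring (example)
    nagios ALL=(root) NOPASSWD: /usr/bin/ceph --admin-daemon /var/run/ceph/ceph-osd.*.asok perf dump

    # the monitoring check then runs, e.g.:
    sudo /usr/bin/ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump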

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
Thanks for the correction... so even if one OSD reaches 95% full, the total ceph cluster IO (R/W) will be blocked... Ideally read IO should work... Thanks Swami On Tue, Jul 19, 2016 at 3:41 PM, Wido den Hollander wrote: >> On 19 July 2016 at 11:55, M Ranga Swami Reddy wrote: >> Thanks fo

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-19 Thread Oliver Dzombic
Hi, I have in my ceph.conf under the [OSD] section:
    osd_tier_promote_max_bytes_sec = 1610612736
    osd_tier_promote_max_objects_sec = 2
#ceph --show-config is showing:
    osd_tier_promote_max_objects_sec = 5242880
    osd_tier_promote_max_bytes_sec = 25
But in fact it's working. Maybe some bug in showing
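
Since 'ceph --show-config' is evaluated from the point of view of the ceph CLI client (as far as I understand), it will not pick up values set only under [OSD]; the values a running OSD actually uses can be checked through its admin socket, for example:

    ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec
    ceph daemon osd.0 config get osd_tier_promote_max_objects_sec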

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Wido den Hollander
> On 19 July 2016 at 11:55, M Ranga Swami Reddy wrote: > Thanks for the detail... When an OSD is 95% full, then that specific OSD's write IO is blocked. No, the *whole* cluster will block. In the OSDMap the flag 'full' is set, which causes all I/O to stop (even read!) until you make sure th
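
A short sketch of checking whether the cluster-wide full condition is active:

    ceph health detail            # lists full / near-full OSDs and the resulting health state
    ceph osd dump | grep flags    # the osdmap flags line includes 'full' while the flag is set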

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
Thanks for the detail... When an OSD is 95% full, then that specific OSD's write IO is blocked. Thanks Swami On Tue, Jul 19, 2016 at 3:07 PM, Christian Balzer wrote: > Hello, > On Tue, 19 Jul 2016 14:23:32 +0530 M Ranga Swami Reddy wrote: >> We are using a ceph cluster with 100+ OSDs and the cluster is fil

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Christian Balzer
Hello, On Tue, 19 Jul 2016 14:23:32 +0530 M Ranga Swami Reddy wrote: >> We are using a ceph cluster with 100+ OSDs and the cluster is filled to 60%. >> One of the OSDs is 95% full. >> If an OSD is 95% full, does it impact any storage operation? Does this impact VMs/instances? > Yes, on

Re: [ceph-users] ceph-fuse segfaults ( jewel 10.2.2)

2016-07-19 Thread Yan, Zheng
On Tue, Jul 19, 2016 at 1:03 PM, Goncalo Borges wrote: > Hi All... > We do have some good news. > As promised, I've recompiled ceph 10.2.2 (on an Intel processor without AVX2) with and without the patch provided by Zheng. It turns out that Zheng's patch is the solution for the segfaults we

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
>> We are using a ceph cluster with 100+ OSDs and the cluster is filled to 60%. >> One of the OSDs is 95% full. >> If an OSD is 95% full, does it impact any storage operation? Does this impact VMs/instances? > Yes, one OSD will impact the whole cluster. It will block write operations to the cluster. Th

Re: [ceph-users] ceph OSD with 95% full

2016-07-19 Thread Henrik Korkuc
On 16-07-19 11:44, M Ranga Swami Reddy wrote: Hi, We are using a ceph cluster with 100+ OSDs and the cluster is filled to 60%. One of the OSDs is 95% full. If an OSD is 95% full, does it impact any storage operation? Does this impact VMs/instances? Yes, one OSD will impact the whole cluster. It will block

[ceph-users] ceph OSD with 95% full

2016-07-19 Thread M Ranga Swami Reddy
Hi, We are using a ceph cluster with 100+ OSDs and the cluster is filled to 60%. One of the OSDs is 95% full. If an OSD is 95% full, does it impact any storage operation? Does this impact VMs/instances? I immediately reduced the weight of the OSD that was 95% full. After the re-weight, data re
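
A sketch of the usual first-aid steps in this situation; the OSD id, weight and ratio values are examples, not recommendations from the thread:

    ceph osd df                   # per-OSD utilisation, to spot the nearly full OSDs
    ceph osd reweight 12 0.85     # lower the override weight of the full OSD (osd.12 is a placeholder)
    ceph pg set_full_ratio 0.97   # pre-Luminous syntax; raising the full ratio is a last resort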

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-19 Thread Nick Fisk
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of m13913886...@yahoo.com Sent: 19 July 2016 07:44 To: Oliver Dzombic ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2 I have configured ceph.conf with "osd_tier_pro