osd: new pool flags: noscrub, nodeep-scrub

2015-09-11 Thread Mykola Golub
Hi, I would like to add new pool flags, noscrub and nodeep-scrub, to be able to control scrubbing on a per-pool basis. In our case it would help us disable scrubbing on cache pools, which does not work well right now, but I can imagine other scenarios where it could be useful too.
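
For context, pool properties in Ceph are toggled through `ceph osd pool set`; a sketch of how the proposed flags might be driven from the CLI, assuming the final syntax mirrors the existing pool-set interface (the pool name is hypothetical and the exact syntax was still under discussion):

    # proposed usage sketch -- not final syntax
    ceph osd pool set hot-cache noscrub 1        # stop regular scrubs
    ceph osd pool set hot-cache nodeep-scrub 1   # stop deep scrubs
    ceph osd pool set hot-cache noscrub 0        # re-enable later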

Ceph Wiki has moved!

2015-09-11 Thread Patrick McGarry
Hey cephers, Just a note to let you know that the wiki migration is complete. http://wiki.ceph.com now 301s to the deep link inside of our Ceph tracker instance. All content from the original wiki has been moved over and is ready for consumption and editing. You should be able to create a

[GIT PULL] Ceph changes for 4.3-rc1

2015-09-11 Thread Sage Weil
Hi Linus, Please pull the following Ceph updates from git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus There are a few fixes for snapshot behavior with CephFS and support for the new keepalive protocol from Zheng, a libceph fix that affects both RBD and CephFS, a

Re: osd: new pool flags: noscrub, nodeep-scrub

2015-09-11 Thread Gregory Farnum
On Fri, Sep 11, 2015 at 7:42 AM, Mykola Golub wrote: > Hi, > > I would like to add new pool flags: noscrub and nodeep-scrub, to be > able to control scrubbing on per pool basis. In our case it could be > helpful in order to disable scrubbing on cache pools, which does not >

RE: loadable objectstore

2015-09-11 Thread James (Fei) Liu-SSI
Hi Varada, Got a chance to go through the code. Great job. It is much cleaner. Several questions: 1. What do you think about the performance impact of the new implementation, e.g. dynamic library vs. static link? 2. Could any vendor just provide an objectstore interface compiled as a dynamic
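
For readers following along: the vendor scenario in question 2 is usually built on runtime loading with dlopen/dlsym rather than static linking. A minimal sketch of that pattern, under the assumption of a hypothetical exported factory symbol (the library name, symbol name, and path below are illustrative, not the interface from the pull request):

    #include <dlfcn.h>
    #include <stdio.h>

    /* Hypothetical factory signature a store plugin might export. */
    typedef void *(*store_factory_t)(const char *data_path);

    int main(void)
    {
        /* Resolve the plugin at runtime instead of linking it statically. */
        void *handle = dlopen("libfilestore_plugin.so", RTLD_NOW | RTLD_LOCAL);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        store_factory_t create =
            (store_factory_t)dlsym(handle, "objectstore_factory");
        if (!create) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        void *store = create("/var/lib/ceph/osd/ceph-0");
        /* ... use the store, shut it down, then dlclose(handle) ... */
        (void)store;
        dlclose(handle);
        return 0;
    }

On the performance question: the dlopen cost is paid once at startup, and calls through the factory-produced object are ordinary indirect calls, which is generally why a dynamic plugin is expected to perform close to a statically linked store.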

Re: About Fio backend with ObjectStore API

2015-09-11 Thread Casey Bodley
Hi James, I just looked back at the results you posted, and saw that you were using iodepth=1. Setting this higher should help keep the FileStore busy. Casey
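
iodepth is a standard fio job-file option; a hedged example of a job that raises the queue depth, assuming the ObjectStore engine is built as an external fio plugin (the engine path and job parameters below are placeholders, not the fio-objectstore branch's actual options):

    ; hypothetical job file -- engine path is a placeholder
    [objectstore-test]
    ioengine=external:./fio_objectstore.so
    ; raised from iodepth=1 so the FileStore stays busy
    iodepth=16
    rw=randwrite
    bs=4k
    size=1g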

Re: [PATCH] nfsd: add a new EXPORT_OP_NOWCC flag to struct export_operations

2015-09-11 Thread J. Bruce Fields
On Fri, Sep 11, 2015 at 06:20:30AM -0400, Jeff Layton wrote: > With NFSv3 nfsd will always attempt to send along WCC data to the > client. This generally involves saving off the in-core inode information > prior to doing the operation on the given filehandle, and then issuing a > vfs_getattr to it

[rgw] Multi-tenancy support in radosgw

2015-09-11 Thread Radoslaw Zarzynski
Hello, It's a well-known trait of radosgw that a user cannot create a new bucket with a given name if the name is already occupied by another user's bucket (such a request will be rejected with 409 Conflict). This behaviour is entirely expected in S3. However, when it comes to the Swift API, it

Re: [PATCH] nfsd: add a new EXPORT_OP_NOWCC flag to struct export_operations

2015-09-11 Thread Jeff Layton
On Fri, 11 Sep 2015 17:29:57 -0400 "J. Bruce Fields" wrote: > On Fri, Sep 11, 2015 at 06:20:30AM -0400, Jeff Layton wrote: > > With NFSv3 nfsd will always attempt to send along WCC data to the > > client. This generally involves saving off the in-core inode information > >

RE: loadable objectstore

2015-09-11 Thread Varada Kari
Hi Sage/Matt, I have submitted the pull request based on the wip-plugin branch for the object store factory implementation at https://github.com/ceph/ceph/pull/5884. Haven't rebased onto master yet; working on the rebase and on including the new store in the factory implementation. Please have a look

Re: osd: new pool flags: noscrub, nodeep-scrub

2015-09-11 Thread Andrey Korolyov
On Fri, Sep 11, 2015 at 4:24 PM, Mykola Golub wrote: > On Fri, Sep 11, 2015 at 05:59:56AM -0700, Sage Weil wrote: > >> I wonder if, in addition, we should also allow scrub and deep-scrub >> intervals to be set on a per-pool basis? > > ceph osd pool set [deep-]scrub_interval
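
If per-pool intervals were adopted, the command Mykola quotes might be used along these lines (the pool name, option spellings, and second-based values are assumptions; the thread had not settled on a final syntax):

    # hypothetical per-pool scrub tuning, values in seconds
    ceph osd pool set mypool scrub_interval 86400         # at most daily
    ceph osd pool set mypool deep-scrub_interval 604800   # deep scrub weekly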

Re: Failed on starting osd-daemon after upgrade giant-0.87.1 to hammer-0.94.3

2015-09-11 Thread Haomai Wang
Yesterday I had a chat with wangrui and the reason is that "infos" (the legacy oid) is missing. I'm not sure why it's missing. PS: resending again because of the plain-text requirement. On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote: > On Fri, 11 Sep 2015, ?? wrote: >> Thank Sage Weil: >> >> 1. I

Re: make check bot failures (2 hours today)

2015-09-11 Thread Daniel Gryniewicz
Maybe periodically run git gc on the clone out-of-line? Git runs it occasionally when it thinks it's necessary, and that can take a while on large and/or fragmented repos. Daniel On Fri, Sep 11, 2015 at 9:03 AM, Loic Dachary wrote: > Hi Ceph, > > The make check bot failed a
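
Daniel's suggestion amounts to scheduling the housekeeping outside the bot's build window so git never triggers it mid-verification; a minimal sketch (the repository path is hypothetical):

    # crontab entry: nightly repack outside the bot's build window
    # (the repository path is hypothetical)
    0 3 * * * cd /srv/gitbuilder/ceph.git && git prune && git gc --quiet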

Re: Failed on starting osd-daemon after upgrade giant-0.87.1 to hammer-0.94.3

2015-09-11 Thread Sage Weil
On Fri, 11 Sep 2015, Haomai Wang wrote: > On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote: > On Fri, 11 Sep 2015, ?? wrote: > > Thank Sage Weil: > > > > 1. I delete some testing pools in the past, but is was a long > time ago (may be 2 months

Re: pet project: OSD compatible daemon

2015-09-11 Thread Shinobu Kinjo
What I'm thinking of is to use fluentd to get logs in a quite human-readable format. Is it the same as what you are thinking of? Shinobu

Re: [HPDD-discuss] [PATCH] nfsd: add a new EXPORT_OP_NOWCC flag to struct export_operations

2015-09-11 Thread Dilger, Andreas
On 2015/09/11, 4:20 AM, "HPDD-discuss on behalf of Jeff Layton" wrote: >With NFSv3 nfsd will always attempt to send along WCC data to the >client. This generally involves saving off the in-core inode information >prior to

[PATCH] nfsd: add a new EXPORT_OP_NOWCC flag to struct export_operations

2015-09-11 Thread Jeff Layton
With NFSv3 nfsd will always attempt to send along WCC data to the client. This generally involves saving off the in-core inode information prior to doing the operation on the given filehandle, and then issuing a vfs_getattr to it after the op. Some filesystems (particularly clustered or networked
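
A compressed sketch of the shape such an opt-out flag could take; the structs and helper below are simplified stand-ins for illustration, not the patch as posted:

    #include <stdbool.h>

    /* Simplified stand-ins for the kernel structures -- illustrative only. */
    #define EXPORT_OP_NOWCC  (1u << 0)   /* fs opts out of WCC attributes */

    struct export_ops  { unsigned flags; };
    struct svc_export  { const struct export_ops *ex_ops; };

    /* nfsd-style decision: skip saving pre-op attributes (and the post-op
     * vfs_getattr) when the filesystem sets EXPORT_OP_NOWCC, since that
     * getattr can be expensive on clustered or networked filesystems. */
    static bool want_wcc(const struct svc_export *exp)
    {
        return !(exp->ex_ops->flags & EXPORT_OP_NOWCC);
    }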

Re: pet project: OSD compatible daemon

2015-09-11 Thread Shinobu Kinjo
Yes, that is what I'm thinking. Shinobu

Re: Backfill

2015-09-11 Thread Sage Weil
On Thu, 10 Sep 2015, GuangYang wrote: > Today I played around with recovery and backfill of a Ceph cluster (by > manually bringing some OSDs down/out), and got one question regarding > the current flow: > > Does backfill push everything to the backfill target regardless of what the > backfill target

Re: osd: new pool flags: noscrub, nodeep-scrub

2015-09-11 Thread Sage Weil
On Fri, 11 Sep 2015, Mykola Golub wrote: > On Fri, Sep 11, 2015 at 11:08:29AM +0100, Gregory Farnum wrote: > > On Fri, Sep 11, 2015 at 7:42 AM, Mykola Golub wrote: > > > Hi, > > > > > > I would like to add new pool flags: noscrub and nodeep-scrub, to be > > > able to control

make check bot failures (2 hours today)

2015-09-11 Thread Loic Dachary
Hi Ceph, The make check bot failed a number of pull request verifications today. Each of them was notified as false negative (you should have received a short note if your pull request is concerned). The problem is now fixed[1] and all should be back to normal. If you want to schedule another

Re: Failed on starting osd-daemon after upgrade giant-0.87.1 to hammer-0.94.3

2015-09-11 Thread Sage Weil
On Fri, 11 Sep 2015, ?? wrote: > Thank you, Sage Weil: > > 1. I deleted some testing pools in the past, but that was a long time ago (maybe > 2 months ago); in the recent upgrade I did not delete pools. > 2. ceph osd dump: please see the attachment file ceph.osd.dump.log > 3. debug osd = 20' and 'debug

Re: About Fio backend with ObjectStore API

2015-09-11 Thread Casey Bodley
Hi James, That's great that you were able to get fio-objectstore running! Thanks to you and Haomai for all the help with testing. In terms of performance, it's possible that we're not handling the completions optimally. When profiling with MemStore I remember seeing a significant amount of

Re: Failed on starting osd-daemon after upgrade giant-0.87.1 to hammer-0.94.3

2015-09-11 Thread Haomai Wang
On Fri, Sep 11, 2015 at 10:09 PM, Sage Weil wrote: > On Fri, 11 Sep 2015, Haomai Wang wrote: >> On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote: >> On Fri, 11 Sep 2015, ?? wrote: >> > Thank Sage Weil: >> > >> > 1. I delete some

Re: About Fio backend with ObjectStore API

2015-09-11 Thread Casey Bodley
I forgot to mention for the list, you can find the latest version of the fio-objectstore branch at https://github.com/cbodley/ceph/commits/fio-objectstore. Casey

Re: Failed on starting osd-daemon after upgrade giant-0.87.1 to hammer-0.94.3

2015-09-11 Thread Sage Weil
On Fri, 11 Sep 2015, Haomai Wang wrote: > On Fri, Sep 11, 2015 at 10:09 PM, Sage Weil wrote: > > On Fri, 11 Sep 2015, Haomai Wang wrote: > >> On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote: > >> On Fri, 11 Sep 2015, ?? wrote: > >> > Thank Sage

Re: pet project: OSD compatible daemon

2015-09-11 Thread Loic Dachary
On 11/09/2015 16:08, Shinobu Kinjo wrote: > What I'm thinking of is to use fluentd to get logs in a > quite human-readable format. If you refer to https://github.com/fluent/fluentd it's different. Or is it something else? > > Is it the same as what you are thinking of? > > Shinobu

Re: make check bot failures (2 hours today)

2015-09-11 Thread Loic Dachary
On 11/09/2015 15:53, Daniel Gryniewicz wrote: > Maybe periodically run git gc on the clone out-of-line? Git runs it > occasionally when it thinks it's necessary, and that can take a while > on large and/or fragmented repos. I did git prune + git gc on the clone on both sides (receiving and

RE: About Fio backend with ObjectStore API

2015-09-11 Thread James (Fei) Liu-SSI
Hi Casey, You are right. I think the bottleneck is on the fio side rather than the filestore side in this case: fio did not issue the I/O commands fast enough to saturate the FileStore. Here is one possible solution: create an async engine, which is normally way faster than a sync
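
The sync/async distinction James draws maps onto fio's ioengine contract: a synchronous engine completes the I/O inside queue() and returns FIO_Q_COMPLETED, while an asynchronous one returns FIO_Q_QUEUED and reports completions later through getevents()/event(), keeping many I/Os in flight. A compressed sketch of the async shape, assuming fio's external-engine header and with the ObjectStore glue functions left as placeholders:

    #include "fio.h"  /* fio's engine header; build as an external engine */

    /* Placeholders for the real ObjectStore glue. */
    extern void submit_to_objectstore(struct io_u *io_u);
    extern int  reap_completions(unsigned int min, unsigned int max);
    extern struct io_u *completed_event(int idx);

    /* Async shape: queue() only submits and returns immediately; a sync
     * engine would finish the I/O here and return FIO_Q_COMPLETED. */
    static int sketch_queue(struct thread_data *td, struct io_u *io_u)
    {
        submit_to_objectstore(io_u);
        return FIO_Q_QUEUED;
    }

    /* Called by fio to wait for between min and max completions. */
    static int sketch_getevents(struct thread_data *td, unsigned int min,
                                unsigned int max, const struct timespec *t)
    {
        return reap_completions(min, max);
    }

    /* Hand back the idx-th completed io_u. */
    static struct io_u *sketch_event(struct thread_data *td, int idx)
    {
        return completed_event(idx);
    }

    static struct ioengine_ops sketch_ops = {
        .name      = "objectstore-async",   /* hypothetical */
        .version   = FIO_IOOPS_VERSION,
        .queue     = sketch_queue,
        .getevents = sketch_getevents,
        .event     = sketch_event,
        /* .init, .cleanup, open/close hooks omitted for brevity */
    };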