Hi,
I would like to add new pool flags: noscrub and nodeep-scrub, to be
able to control scrubbing on a per-pool basis. In our case it would be
helpful for disabling scrubbing on cache pools, which does not
work well right now, but I can imagine other scenarios where it could
be useful too.
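For illustration only, assuming the flags end up settable through the usual
per-pool interface (this is a sketch of the proposal, not a final syntax),
usage could look like:

    # disable scrubbing on a cache pool (proposed flags)
    ceph osd pool set cache-pool noscrub 1
    ceph osd pool set cache-pool nodeep-scrub 1

    # re-enable it later
    ceph osd pool set cache-pool noscrub 0
    ceph osd pool set cache-pool nodeep-scrub 0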
Hey cephers,
Just a note to let you know that the wiki migration is complete.
http://wiki.ceph.com now 301s to the deep link inside of our Ceph
tracker instance.
All content from the original wiki has been moved over and is ready
for consumption and editing. You should be able to create a
Hi Linus,
Please pull the following Ceph updates from
git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus
There are a few fixes for snapshot behavior with CephFS and support for
the new keepalive protocol from Zheng, a libceph fix that affects both RBD
and CephFS, a
On Fri, Sep 11, 2015 at 7:42 AM, Mykola Golub wrote:
> Hi,
>
> I would like to add new pool flags: noscrub and nodeep-scrub, to be
> able to control scrubbing on a per-pool basis. In our case it would be
> helpful for disabling scrubbing on cache pools, which does not
>
Hi Varada,
Got a chance to go through the code. Great job. It is much cleaner. Several
questions:
1. What do you think about the performance impact of the new implementation,
such as a dynamic library vs. a static link?
2. Could any vendor just provide an objectstore interface compiled as a dynamic
Hi James,
I just looked back at the results you posted, and saw that you were using
iodepth=1. Setting this higher should help keep the FileStore busy.
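As a rough sketch (the job name and sizes below are placeholders, and the
objectstore engine settings from the branch would stay as in your existing job
file), bumping the queue depth in the fio job would look like:

    [objectstore-test]
    iodepth=16
    rw=randwrite
    bs=4k
    size=1g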
Casey
- Original Message -
> From: "James (Fei) Liu-SSI"
> To: "Casey Bodley"
> Cc:
On Fri, Sep 11, 2015 at 06:20:30AM -0400, Jeff Layton wrote:
> With NFSv3 nfsd will always attempt to send along WCC data to the
> client. This generally involves saving off the in-core inode information
> prior to doing the operation on the given filehandle, and then issuing a
> vfs_getattr to it
Hello,
It's a well-known trait of radosgw that a user cannot create a new
bucket with a given name if the name is already taken by another
user's bucket (such a request will be rejected with 409 Conflict).
This behaviour is entirely expected in S3. However, when it comes
to the Swift API, it
On Fri, 11 Sep 2015 17:29:57 -0400
"J. Bruce Fields" wrote:
> On Fri, Sep 11, 2015 at 06:20:30AM -0400, Jeff Layton wrote:
> > With NFSv3 nfsd will always attempt to send along WCC data to the
> > client. This generally involves saving off the in-core inode information
> >
Hi Sage/ Matt,
I have submitted the pull request, based on the wip-plugin branch, for the object
store factory implementation at https://github.com/ceph/ceph/pull/5884.
I haven't rebased onto master yet; I am working on the rebase and on including the
new store in the factory implementation. Please have a look
On Fri, Sep 11, 2015 at 4:24 PM, Mykola Golub wrote:
> On Fri, Sep 11, 2015 at 05:59:56AM -0700, Sage Weil wrote:
>
>> I wonder if, in addition, we should also allow scrub and deep-scrub
>> intervals to be set on a per-pool basis?
>
> ceph osd pool set [deep-]scrub_interval
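Spelled out a bit more (this is only an illustration of how the proposed
per-pool settings might be used, not an agreed interface), that would be
something like:

    # proposed: override the scrub intervals for a single pool
    ceph osd pool set <pool> scrub_interval <seconds>
    ceph osd pool set <pool> deep_scrub_interval <seconds>

    # e.g. deep-scrub a cache pool at most once a week
    ceph osd pool set cache-pool deep_scrub_interval 604800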
Yesterday I had a chat with wangrui, and the reason is that "infos" (the legacy
oid) is missing. I'm not sure why it's missing.
PS: resending again because of plain text
On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote:
> On Fri, 11 Sep 2015, ?? wrote:
>> Thank Sage Weil:
>>
>> 1. I
Maybe periodically run git gc on the clone out-of-line? Git runs it
occasionally when it thinks it's necessary, and that can take a while
on large and/or fragmented repos.
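For example (paths and flags here are just one reasonable choice, not a
recommendation for the bot specifically):

    # periodic out-of-line repack of the bot's clone
    cd /path/to/ceph-clone
    git gc --prune=now

    # or let git decide whether a repack is worthwhile
    git gc --auto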
Daniel
On Fri, Sep 11, 2015 at 9:03 AM, Loic Dachary wrote:
> Hi Ceph,
>
> The make check bot failed a
On Fri, 11 Sep 2015, Haomai Wang wrote:
> On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote:
> On Fri, 11 Sep 2015, ?? wrote:
> > Thank Sage Weil:
> >
> > 1. I deleted some testing pools in the past, but it was a long
> > time ago (maybe 2 months
What I'm thinking of is to use fluentd to collect logs in a
fairly human-readable format.
Is it the same as what you are thinking of?
Shinobu
- Original Message -
From: "Shinobu"
To: ski...@redhat.com
Sent: Friday, September 11, 2015 6:16:18 PM
Subject: Fwd: pet project:
On 2015/09/11, 4:20 AM, "HPDD-discuss on behalf of Jeff Layton"
wrote:
>With NFSv3 nfsd will always attempt to send along WCC data to the
>client. This generally involves saving off the in-core inode information
>prior to
With NFSv3 nfsd will always attempt to send along WCC data to the
client. This generally involves saving off the in-core inode information
prior to doing the operation on the given filehandle, and then issuing a
vfs_getattr to it after the op.
Some filesystems (particularly clustered or networked
Yes, that is what I'm thinking.
Shinobu
- Original Message -
From: "Loic Dachary"
To: "Shinobu Kinjo"
Cc: "ceph-devel"
Sent: Saturday, September 12, 2015 12:10:43 AM
Subject: Re: pet project: OSD compatible daemon
On
On Thu, 10 Sep 2015, GuangYang wrote:
> Today I played around with recovery and backfill of a Ceph cluster (by
> manually bringing some OSDs down/out), and got one question regarding
> the current flow:
>
> Does backfill push everything to the backfill target regardless of what the
> backfill target
On Fri, 11 Sep 2015, Mykola Golub wrote:
> On Fri, Sep 11, 2015 at 11:08:29AM +0100, Gregory Farnum wrote:
> > On Fri, Sep 11, 2015 at 7:42 AM, Mykola Golub wrote:
> > > Hi,
> > >
> > > I would like to add new pool flags: noscrub and nodeep-scrub, to be
> > > able to control
Hi Ceph,
The make check bot failed a number of pull request verifications today. Each of
them was reported as a false negative (you should have received a short note if
your pull request is affected). The problem is now fixed[1] and all should be
back to normal. If you want to schedule another
On Fri, 11 Sep 2015, ?? wrote:
> Thank you, Sage Weil:
>
> 1. I deleted some testing pools in the past, but it was a long time ago (maybe
> 2 months ago); I did not delete any pools during the recent upgrade.
> 2. For ceph osd dump, please see the attachment (ceph.osd.dump.log)
> 3. 'debug osd = 20' and 'debug
Hi James,
That's great that you were able to get fio-objectstore running! Thanks to you
and Haomai for all the help with testing.
In terms of performance, it's possible that we're not handling the completions
optimally. When profiling with MemStore I remember seeing a significant amount
of
On Fri, Sep 11, 2015 at 10:09 PM, Sage Weil wrote:
> On Fri, 11 Sep 2015, Haomai Wang wrote:
>> On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote:
>> On Fri, 11 Sep 2015, ?? wrote:
>> > Thank Sage Weil:
>> >
>> > 1. I deleted some
I forgot to mention for the list, you can find the latest version of the
fio-objectstore branch at
https://github.com/cbodley/ceph/commits/fio-objectstore.
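If anyone wants to try it, fetching the branch from that repository (the remote
name below is arbitrary) is just:

    git remote add cbodley https://github.com/cbodley/ceph.git
    git fetch cbodley
    git checkout -b fio-objectstore cbodley/fio-objectstore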
Casey
- Original Message -
From: "Casey Bodley"
To: "James (Fei) Liu-SSI"
Cc:
On Fri, 11 Sep 2015, Haomai Wang wrote:
> On Fri, Sep 11, 2015 at 10:09 PM, Sage Weil wrote:
> > On Fri, 11 Sep 2015, Haomai Wang wrote:
> >> On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote:
> >> On Fri, 11 Sep 2015, ?? wrote:
> >> > Thank Sage
On 11/09/2015 16:08, Shinobu Kinjo wrote:
> What I'm thinking of is to use fluentd to collect logs in a
> fairly human-readable format.
If you are referring to https://github.com/fluent/fluentd, that's different. Or is it
something else?
>
> Is it same of what you are thinking of?
>
> Shinobu
>
> -
On 11/09/2015 15:53, Daniel Gryniewicz wrote:
> Maybe periodically run git gc on the clone out-of-line? Git runs it
> occasionally when it thinks it's necessary, and that can take a while
> on large and/or fragmented repos.
I did git prune + git gc on the clone on both sides (receiving and
Hi Casey,
You are right. I think the bottleneck is on the fio side rather than on the filestore
side in this case. fio did not issue the IO commands fast enough to
saturate the filestore.
Here is one possible solution for it: create an async engine, which is
normally way faster than a sync