On Mon, Nov 16, 2015 at 4:20 PM, Sage Weil wrote:
> On Mon, 16 Nov 2015, Dan van der Ster wrote:
>> Instead of keeping a 24hr loadavg, how about we allow scrubs whenever
>> the loadavg is decreasing (or below the threshold)? As long as the
>> 1min loadavg is less than the 15min
On Mon, 16 Nov 2015, Dan van der Ster wrote:
> Instead of keeping a 24hr loadavg, how about we allow scrubs whenever
> the loadavg is decreasing (or below the threshold)? As long as the
> 1min loadavg is less than the 15min loadavg, we should be ok to allow
> new scrubs. If you agree I'll add the
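Dan's heuristic above — allow a new scrub when the load is below the threshold, or when load is trending down because the 1-minute loadavg is under the 15-minute loadavg — can be sketched as follows. This is a minimal Python illustration, not the actual OSD scrub-scheduling code; the threshold name and default are made up:

```python
import os

def scrub_load_ok(load1, load15, load_threshold=0.5):
    """Allow a new scrub if the 1-minute loadavg is below the
    threshold, or if load is trending down (1min < 15min)."""
    return load1 < load_threshold or load1 < load15

def scrub_load_ok_now(load_threshold=0.5):
    """Same check against the live system load averages."""
    load1, _load5, load15 = os.getloadavg()
    return scrub_load_ok(load1, load15, load_threshold)
```

The pure-function form takes the load averages as arguments so the decision logic can be exercised independently of the running system.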
On Mon, 16 Nov 2015, yangruifeng.09...@h3c.com wrote:
> an ENOTEMPTY error may happen when removing a pg in previous
> versions, but the error is hidden in new versions?
When did this change?
sage
> _destroy_collection may return 0 when get_index or prep_delete returns < 0;
>
> is this
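The pattern being reported — a removal path whose helpers return a negative errno (such as -ENOTEMPTY) that the caller then swallows, returning 0 — can be illustrated with a small sketch. The function names here are hypothetical Python stand-ins, not the actual FileStore C++ code:

```python
import errno

def destroy_collection(get_index, prep_delete):
    """Fixed variant: propagate the helpers' error codes."""
    r = get_index()
    if r < 0:
        return r          # e.g. -errno.ENOTEMPTY must reach the caller
    r = prep_delete()
    if r < 0:
        return r
    return 0

def destroy_collection_buggy(get_index, prep_delete):
    """Buggy variant: negative returns are silently dropped."""
    get_index()
    prep_delete()
    return 0              # caller never sees ENOTEMPTY
```

With the buggy variant, a non-empty collection removal reports success, which is exactly how the error would end up "hidden" in newer versions.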
On Mon, Nov 16, 2015 at 4:32 PM, Dan van der Ster wrote:
> On Mon, Nov 16, 2015 at 4:20 PM, Sage Weil wrote:
>> On Mon, 16 Nov 2015, Dan van der Ster wrote:
>>> Instead of keeping a 24hr loadavg, how about we allow scrubs whenever
>>> the loadavg is
-- All Branches --
Adam C. Emerson
2015-10-16 13:49:09 -0400 wip-cxx11time
2015-10-17 13:20:15 -0400 wip-cxx11concurrency
Adam Crume
2014-12-01 20:45:58 -0800 wip-doc-rbd-replay
Alfredo Deza
list_next_entry is already defined in list.h, so replace
list_entry_next with it.
Signed-off-by: Geliang Tang
---
net/ceph/messenger.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index
On Thu, Nov 12, 2015 at 4:34 PM, Dan van der Ster wrote:
> On Thu, Nov 12, 2015 at 4:10 PM, Sage Weil wrote:
>> On Thu, 12 Nov 2015, Dan van der Ster wrote:
>>> On Thu, Nov 12, 2015 at 2:29 PM, Sage Weil wrote:
>>> > On Thu, 12 Nov
Hey cephers,
Just letting you know that due to unforeseen circumstances, and then
the holidays and travel concerns, the Ceph Tech Talk program will be
placed on hold until after the new year. See you all in Jan 2016!
http://ceph.com/ceph-tech-talks/
--
Best Regards,
Patrick McGarry
Director
On Mon, 16 Nov 2015, Dan van der Ster wrote:
> On Mon, Nov 16, 2015 at 4:58 PM, Dan van der Ster wrote:
> > On Mon, Nov 16, 2015 at 4:32 PM, Dan van der Ster
> > wrote:
> >> On Mon, Nov 16, 2015 at 4:20 PM, Sage Weil wrote:
> >>> On
On Mon, Nov 16, 2015 at 6:13 PM, Sage Weil wrote:
> On Mon, 16 Nov 2015, Dan van der Ster wrote:
>> On Mon, Nov 16, 2015 at 4:58 PM, Dan van der Ster
>> wrote:
>> > On Mon, Nov 16, 2015 at 4:32 PM, Dan van der Ster
>> > wrote:
>> >>
On Mon, Nov 16, 2015 at 4:58 PM, Dan van der Ster wrote:
> On Mon, Nov 16, 2015 at 4:32 PM, Dan van der Ster wrote:
>> On Mon, Nov 16, 2015 at 4:20 PM, Sage Weil wrote:
>>> On Mon, 16 Nov 2015, Dan van der Ster wrote:
Instead of
QE validation for this release took longer due to the additional
fixing/testing for #11104 and the related issues #13794 and #13622
discovered along the way.
We agreed to release v0.80.11 based on tests results.
Thx
YuriW
On Wed, Oct 28, 2015 at 9:04 AM, Yuri Weinstein wrote:
> Summary of
Hi Loic and Yuri,
Issue 11104 can be resolved only after the stable epel [with Boris' fix]
is out and tested in teuthology.
What Warren has currently done is to tweak the install task to work
around this issue in order to clear the test blocker for v0.80.11. It
doesn't count as a "real" fix.
Thanks for the update Tamil, that makes sense now :-)
On 16/11/2015 23:05, Tamil Muthamizhan wrote:
> Hi Loic and Yuri,
>
> Issue 11104 can be resolved only after the stable epel [with Boris' fix]
> is out and tested in teuthology.
>
> what Warren has currently done is to tweak the install task
Loic,
I am not actually sure about resolving #11104.
Warren?
Thx
YuriW
On Mon, Nov 16, 2015 at 1:04 PM, Loic Dachary wrote:
> Hi Yuri,
>
> Thanks for the update :-) Should we mark #11104 as resolved ?
>
> Cheers
>
> On 16/11/2015 19:45, Yuri Weinstein wrote:
>> This
Hi Yuri,
Thanks for the update :-) Should we mark #11104 as resolved ?
Cheers
On 16/11/2015 19:45, Yuri Weinstein wrote:
> QE validation for this release took longer due to the additional
> fixing/testing for #11104 and the related issues #13794 and #13622
> discovered along the way.
>
> We agreed to release
Hello,
Last week, while running an rbd test which does a lot of maps and
unmaps (read losetup / losetup -d) with slab debugging enabled, I hit
the attached splat. That 6a byte corresponds to the atomic_long_t
count of the percpu_ref refcnt in request_queue::backing_dev_info::wb,
pointing to a
I spoke to a leveldb expert; it looks like this is a known pattern of
the LSM-tree data structure - the tail latency of a range scan can be
far longer than the avg/median, since it may need to mmap several SST
files to get the record.
Hi Sage,
Do you see any harm to increase the default value for this
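The tail-latency behaviour described above falls out of how an LSM tree serves a range scan: the scan has to merge the requested key range from every overlapping sorted run (memtable plus SST files across several levels), so one scan can touch many files even when the average lookup touches one. A rough Python sketch of that multi-run merge, on toy data rather than leveldb itself:

```python
import heapq

def range_scan(sorted_runs, lo, hi):
    """Merge a key range across several sorted runs, the way an
    LSM tree must consult the memtable and multiple SST files.
    Each run is a sorted list of (key, value) pairs. Newest-wins
    shadowing of duplicate keys is omitted for brevity; this only
    shows why one scan may read many runs."""
    slices = []
    for run in sorted_runs:
        slices.append([(k, v) for k, v in run if lo <= k <= hi])
    return list(heapq.merge(*slices))
```

The cost of a scan is driven by how many runs overlap the range, which varies from query to query — hence a long latency tail relative to the median.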
On Mon, 16 Nov 2015, Guang Yang wrote:
> I spoke to a leveldb expert; it looks like this is a known pattern of
> the LSM-tree data structure - the tail latency of a range scan can be
> far longer than the avg/median, since it may need to mmap several SST
> files to get the record.
>
> Hi Sage,
> Do you
On Mon, Nov 16, 2015 at 5:42 PM, Sage Weil wrote:
> On Mon, 16 Nov 2015, Guang Yang wrote:
>> I spoke to a leveldb expert, it looks like this is a known pattern on
>> LSM tree data structure - the tail latency for range scan could be far
>> longer than avg/median since it might
This patch makes ceph_frag_contains_value return bool to improve
readability, since this particular function only ever returns one or
zero.
No functional change.
Signed-off-by: Yaowei Bai
---
include/linux/ceph/ceph_frag.h | 2 +-
1 file
These functions were introduced in commit 3d14c5d2b ("ceph: factor
out libceph from Ceph file system"). However, there has been no user of
these functions since then, so remove them for simplicity.
Signed-off-by: Yaowei Bai
---
include/linux/ceph/ceph_frag.h | 35
Hi all,
While testing RGW, I put a file larger than 1 GB into a bucket and
monitored the network traffic: it fluctuates greatly, sometimes dropping
to no traffic at all. I hope you can help me.
thx,
yapeng