Re: dead code in gf-complete

2014-12-08 Thread Loic Dachary
Since both of you think this is a false positive, let's just ignore it. On 05/12/2014 21:06, Kevin Greenan wrote: I can take a look... Man, some of our conditional expressions and if-then-else chains are pretty ugly! -kevin On Fri, Dec 5, 2014 at 7:06 AM, Loic Dachary l...@dachary.org

RE: CEPH Shared RBD Volumes

2014-12-08 Thread Cook, Nigel
Folks, Can CEPH support 2 clients simultaneously accessing a single volume - for example a database cluster - and honor read and write order of blocks across the multiple clients? Regards, Nigel Cook Intel Fellow Cloud Chief Architect Cloud Platforms Group Intel Corporation +1 720 319 7508

Re: Three times retries on write

2014-12-08 Thread Sage Weil
On Mon, 8 Dec 2014, Wang, Zhiqiang wrote: Hi all, I wrote some proxy write code and am doing testing now. I use 'rados put' to write a full object. I notice that every time the cache tier OSD sends the object to the base tier OSD through the Objecter::mutate interface, it retries 3

Re: experimental features

2014-12-08 Thread Mark Nelson
I've been thinking for a while that we need another, more general command than ceph health to inform you about your cluster. I.e., I personally don't like having min/max PG warnings in ceph health (they can be independently controlled by ceph.conf options but that kind of approach
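
For reference, the kind of independently tunable ceph.conf knobs Mark is alluding to. The option names below are an assumption from memory of roughly that era, not quoted from the thread; verify them against your release with `ceph --show-config` or the docs. A minimal sketch:

    [mon]
        # assumed names for the PG-count warning thresholds
        mon pg warn min per osd = 20
        mon pg warn max per osd = 300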

ceph branch status

2014-12-08 Thread ceph branch robot
-- All Branches --
Adam Crume adamcr...@gmail.com
  2014-12-01 20:45:58 -0800 wip-doc-rbd-replay
Alfredo Deza alfredo.d...@inktank.com
  2014-07-08 13:58:35 -0400 wip-8679
  2014-09-04 13:58:14 -0400 wip-8366
  2014-10-13 11:10:10 -0400 wip-9730
  2014-11-11

rados read ordering

2014-12-08 Thread Sage Weil
The current RADOS behavior is that reads (on any given object) are always processed in the order they are submitted by the client. This causes a few headaches for the cache tiering that it would be nice to avoid. It also occurs to me that there are likely cases where we could go a lot faster
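
To make the current semantics concrete, a minimal librados (C API) sketch, not code from the thread: two async reads are issued against the same object, and under the behavior Sage describes the OSD services them in the order they were submitted. Pool and object names are placeholders; build with -lrados.

    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        rados_completion_t c1, c2;
        char buf1[16], buf2[16];

        /* connect using the default ceph.conf and client.admin */
        if (rados_create(&cluster, NULL) < 0)
            return 1;
        rados_conf_read_file(cluster, NULL);
        if (rados_connect(cluster) < 0)
            return 1;
        if (rados_ioctx_create(cluster, "rbd", &io) < 0)
            return 1;

        /* two async reads against the SAME object: per the behavior
         * described above, the OSD processes them in submission order */
        rados_aio_create_completion(NULL, NULL, NULL, &c1);
        rados_aio_create_completion(NULL, NULL, NULL, &c2);
        rados_aio_read(io, "myobject", c1, buf1, sizeof(buf1), 0);
        rados_aio_read(io, "myobject", c2, buf2, sizeof(buf2), 0);

        rados_aio_wait_for_complete(c1);
        rados_aio_wait_for_complete(c2);
        rados_aio_release(c1);
        rados_aio_release(c2);

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }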

Re: rados read ordering

2014-12-08 Thread Yehuda Sadeh
On Mon, Dec 8, 2014 at 9:03 AM, Sage Weil sw...@redhat.com wrote: The current RADOS behavior is that reads (on any given object) are always processed in the order they are submitted by the client. This causes a few headaches for the cache tiering that it would be nice to avoid. It also

Re: CEPH Shared RBD Volumes

2014-12-08 Thread Smart Weblications GmbH - Florian Wiessner
Hi Nigel, on 08.12.2014 15:30, Cook, Nigel wrote: Folks, Can CEPH support 2 clients simultaneously accessing a single volume - for example a database cluster - and honor read and write order of blocks across the multiple clients? Yes, we use ocfs2 on top of rbd, gfs2 should also
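
For the archive, the usual shape of that setup, sketched from memory with placeholder names and assuming the ocfs2 cluster stack (o2cb) is already configured on both nodes. The point is that the cluster filesystem's lock manager, not RBD itself, provides the cross-client ordering; a single-node filesystem like ext4 or xfs on a shared image will corrupt.

    # on one node: create the shared image (size in MB; names are placeholders)
    rbd create shared-vol --size 102400 --pool rbd
    # on EACH node: map the same image
    rbd map rbd/shared-vol
    # once, from one node: a cluster-aware filesystem, not ext4/xfs
    mkfs.ocfs2 -L shared /dev/rbd0
    # then mount on every node participating in the ocfs2 cluster
    mount /dev/rbd0 /mnt/shared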

Re: CEPH Shared RBD Volumes

2014-12-08 Thread Sage Weil
On Mon, 8 Dec 2014, Smart Weblications GmbH - Florian Wiessner wrote: Hi Nigel, on 08.12.2014 15:30, Cook, Nigel wrote: Folks, Can CEPH support 2 clients simultaneously accessing a single volume - for example a database cluster - and honor read and write order of blocks across

wip-claim-2

2014-12-08 Thread Matt W. Benjamin
Hi devs, We've created a new branch wip-claim-2, and new pull request https://github.com/linuxbox2/linuxbox-ceph/pull/3 based on review feedback. The big change is to replace volatile with sharable, and replace strong_claim() with clone_nonsharable(). This may not be perfect, feedback

Re: wip-claim-2

2014-12-08 Thread Sage Weil
On Mon, 8 Dec 2014, Matt W. Benjamin wrote: Hi devs, We've created a new branch wip-claim-2, and new pull request https://github.com/linuxbox2/linuxbox-ceph/pull/3 based on review feedback. The big change is to replace volatile with sharable, and replace strong_claim() with

Re: rados read ordering

2014-12-08 Thread Josh Durgin
On 12/08/2014 09:03 AM, Sage Weil wrote: The current RADOS behavior is that reads (on any given object) are always processed in the order they are submitted by the client. This causes a few headaches for the cache tiering that it would be nice to avoid. It also occurs to me that there are

crush: straw is dead, long live straw2

2014-12-08 Thread Sage Weil
Hey everyone, When I was writing the original CRUSH code ages ago, I made several different bucket types, each using a different 'choose' algorithm for pseudorandomly selecting an item. Most of these were modeled after the original RUSH algorithms from RJ Honicky, but there was one new bucket
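
The selection rule straw2 settled on is easy to state: hash (input, item) to a uniform u in (0,1), let each item draw ln(u)/weight, and the largest draw wins, which selects item i with probability weight_i / sum(weights). Below is a floating-point userspace sketch of that idea; it is illustrative only, not the actual CRUSH code, which (as the follow-ups discuss) cannot use floating point in the kernel and uses a fixed-point log lookup table instead. The hash here is a stand-in mixer, not CRUSH's Jenkins-style hash.

    #include <math.h>
    #include <stdint.h>

    /* Stand-in 2-input mixer (CRUSH uses its own hash).  Never
     * returns 0, so log(u) below stays finite. */
    static uint32_t hash2(uint32_t x, uint32_t item)
    {
        uint32_t h = x * 2654435761u ^ item * 2246822519u;
        h ^= h >> 16;
        return h ? h : 1;
    }

    /* straw2-style draw: item i wins with probability
     * weight[i] / sum(weights); weights must be > 0. */
    int straw2_select(uint32_t input, const double *weight, int n)
    {
        int best = -1;
        double best_draw = -INFINITY;
        for (int i = 0; i < n; i++) {
            double u = (double)hash2(input, i) / 4294967296.0; /* (0,1) */
            double draw = log(u) / weight[i];  /* log(u) < 0 */
            if (draw > best_draw) {
                best_draw = draw;
                best = i;
            }
        }
        return best;
    }

The independence property falls out of this form directly: changing one item's weight changes only that item's draw, so only placements involving that item can move.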

Re: crush: straw is dead, long live straw2

2014-12-08 Thread Josh Durgin
On 12/08/2014 03:48 PM, Sage Weil wrote: - Use floating point log function. This is problematic for the kernel implementation (no floating point), is slower than the lookup table, and makes me worry about whether the floating point calculations are consistent across architectures (the mapping

RE: Three times retries on write

2014-12-08 Thread Wang, Zhiqiang
Hi Sage, I also set debug ms to 20. The log file is at https://drive.google.com/file/d/0B1aauR3uQ9ECTjk1TUJ0OHMzQVk/view?usp=sharing Seems like the problem is in the pipe. -Original Message- From: ceph-devel-ow...@vger.kernel.org On Behalf Of
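
For reference, the messenger logging Zhiqiang mentions is just the debug ms option turned up in ceph.conf; a minimal sketch:

    [osd]
        # verbose messenger/Pipe logging, as used for the log linked above
        debug ms = 20

On a running cluster the same can be injected without a restart, e.g. `ceph tell osd.* injectargs '--debug-ms 20'`.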

Re: [ceph-users] experimental features

2014-12-08 Thread Christian Balzer
On Mon, 08 Dec 2014 08:33:25 -0600 Mark Nelson wrote: I've been thinking for a while that we need another, more general command than ceph health to inform you about your cluster. I.e., I personally don't like having min/max PG warnings in ceph health (they can be independently