Since both of you think this is a false positive, let's just ignore it.
On 05/12/2014 21:06, Kevin Greenan wrote:
I can take a look... Man, some of our conditional expressions and
if-then-else chains are pretty ugly!
-kevin
On Fri, Dec 5, 2014 at 7:06 AM, Loic Dachary l...@dachary.org
Folks,
Can Ceph support 2 clients simultaneously accessing a single volume - for
example, a database cluster - and honor the read and write ordering of blocks
across the multiple clients?
Regards,
Nigel Cook
Intel Fellow Cloud Chief Architect
Cloud Platforms Group
Intel Corporation
+1 720 319 7508
On Mon, 8 Dec 2014, Wang, Zhiqiang wrote:
Hi all,
I wrote some proxy write code and am testing it now. I use 'rados put' to
write a full object. I notice that every time the cache tier OSD sends the
object to the base tier OSD through the Objecter::mutate interface, it
retries 3
I've been thinking for a while that we need another, more general command
than ceph health to inform you about your cluster. I.e., I personally don't
like having min/max PG warnings in ceph health (they can be independently
controlled by ceph.conf options, but that kind of approach
-- All Branches --
Adam Crume adamcr...@gmail.com
2014-12-01 20:45:58 -0800 wip-doc-rbd-replay
Alfredo Deza alfredo.d...@inktank.com
2014-07-08 13:58:35 -0400 wip-8679
2014-09-04 13:58:14 -0400 wip-8366
2014-10-13 11:10:10 -0400 wip-9730
2014-11-11
The current RADOS behavior is that reads (on any given object) are always
processed in the order they are submitted by the client. This causes a
few headaches for the cache tiering that it would be nice to avoid. It
also occurs to me that there are likely cases where we could go a lot
faster
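As an illustration of the ordering guarantee described above, here is a minimal toy sketch: reads to one object may finish out of order inside the store, but their completions are delivered back in submission order. All names are hypothetical; this is not the actual Objecter code.

#include <cstdio>
#include <deque>
#include <functional>
#include <map>
#include <string>

// Toy model (not Ceph code): one FIFO of pending reads per object.
struct PendingRead {
  int id;
  bool done;
  std::function<void(int)> on_complete;
};

class ObjectReadQueue {
  std::map<std::string, std::deque<PendingRead>> queues;
public:
  void submit(const std::string& oid, int id, std::function<void(int)> cb) {
    queues[oid].push_back({id, false, std::move(cb)});
  }
  // Called when the backend finishes a read, possibly out of order.
  void backend_finished(const std::string& oid, int id) {
    auto& q = queues[oid];
    for (auto& r : q)
      if (r.id == id)
        r.done = true;
    // Completions are only delivered from the front of the queue, so the
    // client always sees them in submission order.
    while (!q.empty() && q.front().done) {
      q.front().on_complete(q.front().id);
      q.pop_front();
    }
  }
};

int main() {
  ObjectReadQueue q;
  auto report = [](int id) { std::printf("completed read %d\n", id); };
  q.submit("obj", 1, report);
  q.submit("obj", 2, report);
  q.backend_finished("obj", 2);  // finished first, but held back...
  q.backend_finished("obj", 1);  // ...now 1 and then 2 are delivered, in order
}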
On Mon, Dec 8, 2014 at 9:03 AM, Sage Weil sw...@redhat.com wrote:
The current RADOS behavior is that reads (on any given object) are always
processed in the order they are submitted by the client. This causes a
few headaches for the cache tiering that it would be nice to avoid. It
also
Hi Nigel,
On 08.12.2014 15:30, Cook, Nigel wrote:
Folks,
Can Ceph support 2 clients simultaneously accessing a single volume - for
example, a database cluster - and honor the read and write ordering of blocks
across the multiple clients?
Yes, we use ocfs2 on top of rbd; gfs2 should also
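To make that concrete at the API level, here is a minimal, hedged librbd sketch: each host opens the same image independently, and RBD itself does not arbitrate between the writers, which is exactly why a cluster-aware filesystem such as ocfs2 or gfs2 is layered on top. The pool name "rbd", the image name "shared-vol", and the client id "admin" are assumptions for illustration.

// Minimal librbd client sketch (illustrative; most error handling omitted).
// Run the same program on two hosts: both can open and write the shared
// image, but nothing here coordinates them - that is ocfs2/gfs2's job on top.
#include <rados/librados.hpp>
#include <rbd/librbd.hpp>
#include <string>

int main() {
  librados::Rados cluster;
  cluster.init("admin");             // hypothetical client id
  cluster.conf_read_file(nullptr);   // default ceph.conf search path
  if (cluster.connect() < 0) return 1;

  librados::IoCtx io;
  if (cluster.ioctx_create("rbd", io) < 0) return 1;   // assumed pool name

  librbd::RBD rbd;
  librbd::Image image;
  if (rbd.open(io, image, "shared-vol") < 0) return 1; // hypothetical image

  librados::bufferlist bl;
  bl.append(std::string("hello from one of the clients"));
  image.write(0, bl.length(), bl);   // a second host could write here too

  cluster.shutdown();                // image is closed by its destructor
  return 0;
}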
On Mon, 8 Dec 2014, Smart Weblications GmbH - Florian Wiessner wrote:
Hi Nigel,
On 08.12.2014 15:30, Cook, Nigel wrote:
Folks,
Can Ceph support 2 clients simultaneously accessing a single volume - for
example, a database cluster - and honor the read and write ordering of blocks
across
Hi devs,
We've created a new branch, wip-claim-2, and a new pull request
https://github.com/linuxbox2/linuxbox-ceph/pull/3
based on review feedback.
The big change is to replace volatile with sharable, and replace
strong_claim() with clone_nonsharable().
This may not be perfect, feedback
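As a rough picture of the distinction the new names suggest (purely hypothetical types, not the buffer code in the pull request): a shareable buffer hands out references to the same storage, while a clone_nonsharable() style operation forces a private copy.

#include <cassert>
#include <memory>
#include <string>

// Hypothetical sketch, not the linuxbox-ceph code: a buffer that either
// shares its storage or is cloned into a private, non-shared copy.
class Buf {
  std::shared_ptr<std::string> data;
  bool shareable;
public:
  explicit Buf(std::string s, bool share = true)
      : data(std::make_shared<std::string>(std::move(s))), shareable(share) {}

  // Sharing just takes another reference to the same storage...
  Buf share() const {
    assert(shareable);
    return *this;
  }
  // ...while a non-shareable clone always gets its own copy.
  Buf clone_nonsharable() const { return Buf(*data, /*share=*/false); }

  const std::string& contents() const { return *data; }
};

int main() {
  Buf a("hello");
  Buf b = a.share();              // same underlying storage as a
  Buf c = a.clone_nonsharable();  // independent copy, safe to mutate privately
  assert(&b.contents() == &a.contents());
  assert(&c.contents() != &a.contents());
}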
On Mon, 8 Dec 2014, Matt W. Benjamin wrote:
Hi devs,
We've created a new branch, wip-claim-2, and a new pull request
https://github.com/linuxbox2/linuxbox-ceph/pull/3
based on review feedback.
The big change is to replace volatile with sharable, and replace
strong_claim() with
On 12/08/2014 09:03 AM, Sage Weil wrote:
The current RADOS behavior is that reads (on any given object) are always
processed in the order they are submitted by the client. This causes a
few headaches for the cache tiering that it would be nice to avoid. It
also occurs to me that there are
Hey everyone,
When I was writing the original CRUSH code ages ago, I made several
different bucket types, each using a different 'choose' algorithm for
pseudorandomly selecting an item. Most of these were modeled after the
original RUSH algorithms from RJ Honicky, but there was one new bucket
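For context on the 'choose' step being discussed: the straw-style buckets give every item an independent pseudorandom draw, scale it by the item's precomputed straw length, and pick the longest straw. A simplified sketch of that idea follows; the hash and the straw values are stand-ins, not Ceph's actual crush code.

#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in for CRUSH's internal hash (the real code uses crush_hash32_*).
static uint32_t toy_hash(uint32_t x, uint32_t item, uint32_t r) {
  uint32_t h = x ^ (item * 0x9E3779B1u) ^ (r * 0x85EBCA77u);
  h ^= h >> 16; h *= 0x7FEB352Du;
  h ^= h >> 15; h *= 0x846CA68Bu;
  h ^= h >> 16;
  return h;
}

// straws[] would normally be derived from item weights; here they are given
// directly. The item with the largest weight-scaled draw wins.
int straw_choose(const std::vector<uint64_t>& straws, uint32_t input, uint32_t round) {
  int best = -1;
  uint64_t best_draw = 0;
  for (size_t i = 0; i < straws.size(); ++i) {
    uint64_t draw = (uint64_t)toy_hash(input, (uint32_t)i, round) * straws[i];
    if (best == -1 || draw > best_draw) { best_draw = draw; best = (int)i; }
  }
  return best;
}

int main() {
  std::vector<uint64_t> straws = {100, 200, 100};  // heavier item, longer straw
  std::printf("input 42 -> item %d\n", straw_choose(straws, 42, 0));
}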
On 12/08/2014 03:48 PM, Sage Weil wrote:
- Use a floating-point log function. This is problematic for the kernel
implementation (no floating point), is slower than the lookup table, and
makes me worry about whether the floating-point calculations are
consistent across architectures (the mapping
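To make the lookup-table alternative concrete, here is a hedged sketch of the idea (not the kernel implementation): sample -ln() into an integer table once, then do each draw with integer arithmetic only, so the map-time result does not depend on the platform's floating point. A kernel build would hard-code the table or build it with an integer approximation; here it is filled with std::log at startup for brevity.

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int      TABLE_BITS = 16;                // 65536 sample points
constexpr uint32_t TABLE_SIZE = 1u << TABLE_BITS;
constexpr int64_t  LOG_SCALE  = 1 << 20;           // fixed-point scale for -ln(u)

// table[i] ~= -ln((i + 0.5) / TABLE_SIZE) * LOG_SCALE, built once at startup.
static std::vector<int64_t> build_ln_table() {
  std::vector<int64_t> t(TABLE_SIZE);
  for (uint32_t i = 0; i < TABLE_SIZE; ++i)
    t[i] = std::llround(-std::log((i + 0.5) / TABLE_SIZE) * LOG_SCALE);
  return t;
}

// Pick the item with the smallest -ln(u)/w (equivalent to the max of ln(u)/w),
// which selects items in proportion to their weights. Weights are 16.16 fixed
// point and must be nonzero; hashes are one pseudorandom 32-bit value per item.
int straw2_choose(const std::vector<uint32_t>& weights_fp,
                  const std::vector<uint32_t>& hashes,
                  const std::vector<int64_t>& table) {
  int best = -1;
  int64_t best_key = INT64_MAX;
  for (size_t i = 0; i < weights_fp.size(); ++i) {
    uint32_t u_idx = hashes[i] >> (32 - TABLE_BITS);     // high bits index the table
    int64_t key = (table[u_idx] << 16) / weights_fp[i];  // -ln(u) / w, integer math only
    if (key < best_key) { best_key = key; best = (int)i; }
  }
  return best;
}

int main() {
  auto table = build_ln_table();
  std::vector<uint32_t> weights = {1 << 16, 2 << 16, 1 << 16};            // 1.0, 2.0, 1.0
  std::vector<uint32_t> hashes  = {0x12345678, 0x9abcdef0, 0x0fedcba9};   // stand-in hash values
  std::printf("chosen item: %d\n", straw2_choose(weights, hashes, table));
}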
Hi Sage,
I also set debug ms to 20. The log file is at
https://drive.google.com/file/d/0B1aauR3uQ9ECTjk1TUJ0OHMzQVk/view?usp=sharing
It seems like the problem is in the pipe.
On Mon, 08 Dec 2014 08:33:25 -0600 Mark Nelson wrote:
I've been thinking for a while that we need another, more general command
than ceph health to inform you about your cluster. I.e., I personally don't
like having min/max PG warnings in ceph health (they
can be independently