I've noticed that
* with a single-node cluster with 4 OSDs,
* running rados bench rand on that same node (so no network traffic),
* and with a number of objects small enough that everything fits in the cache
(so no disk traffic),
we still peak out at about 1600 MB/sec, and the CPU is at 40%.
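For reference, a run along these lines reproduces the setup (pool name and duration are just placeholders; rand needs the objects left behind by a prior write with --no-cleanup):
$ rados bench -p testpool 60 write --no-cleanup
$ rados bench -p testpool 60 rand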
Hi Shylesh,
On 28/05/2015 21:25, shylesh kumar wrote:
Hi,
I created an LRC EC pool with this configuration:
# ceph osd erasure-code-profile get mylrc
directory=/usr/lib64/ceph/erasure-code
k=4
l=3
m=2
plugin=lrc
ruleset-failure-domain=osd
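For reference, a profile and pool like this can be created with something along these lines (pool name and PG count are placeholders):
$ ceph osd erasure-code-profile set mylrc plugin=lrc k=4 m=2 l=3 ruleset-failure-domain=osd
$ ceph osd pool create mylrcpool 128 128 erasure mylrc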
One of the pg mappings looks like
Hi Andrew,
I'm copying Milan Broz, who has looked at this some. There was some
subsequent off-list discussion in Red Hat about using Petera[1] for the
key management, but this'll require a bit more effort than what was
described in that blueprint.
On Thu, 28 May 2015, Andrew Bartlett wrote:
On Thu, May 28, 2015 at 4:09 PM, Deneau, Tom tom.den...@amd.com wrote:
I've noticed that
* with a single-node cluster with 4 OSDs,
* running rados bench rand on that same node (so no network traffic),
* and with a number of objects small enough that everything fits in the cache
so no
On Thu, 28 May 2015, Robert LeBlanc wrote:
On Thu, May 28, 2015 at 11:02 AM, Sage Weil s...@newdream.net wrote:
The MDS could combine a tenant ID and a UID/GID to store unique
UID/GIDs on the back end and just strip off the tenant ID when
presented to the client so there are no
On Thu, May 28, 2015 at 2:32 AM, Loic Dachary l...@dachary.org wrote:
Hi,
This morning I'll schedule a job with priority 50, assuming nobody will get
mad at me for using such a low priority because the associated bug fix blocks
the release of v0.94.2 (http://tracker.ceph.com/issues/11546)
On Thu, May 28, 2015 at 11:32 AM, Sage Weil wrote:
If, for instance, a directory is shared between tenants A and B, and A
can write but B can't, then when B tries to write (because the perms
look correct for the UID/GID on the client side), the MDS
On Thu, May 28, 2015 at 12:59 AM, kefu chai tchai...@gmail.com wrote:
On Wed, May 27, 2015 at 3:36 AM, Patrick McGarry pmcga...@redhat.com wrote:
Due to popular demand we are expanding the Ceph lists to include a
Chinese-language list to allow for direct communications for all of
our friends
On Thu, May 28, 2015 at 11:02 AM, Sage Weil s...@newdream.net wrote:
The MDS could combine a tenant ID and a UID/GID to store unique
UID/GIDs on the back end and just strip off the tenant ID when
presented to the client so there are no collisions of UID/GIDs between
tenants in the MDS.
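Purely as an illustration of the idea (not the actual MDS encoding), packing a tenant ID and a UID into one stored identifier could look like:
$ tenant=42; uid=1000
$ printf 'stored id: 0x%08x%08x\n' "$tenant" "$uid"
stored id: 0x0000002a000003e8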
-----Original Message-----
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Thursday, May 28, 2015 6:18 PM
To: Deneau, Tom
Cc: ceph-devel
Subject: Re: rados bench throughput with no disk or network activity
On Thu, May 28, 2015 at 4:09 PM, Deneau, Tom tom.den...@amd.com wrote:
On Thu, May 28, 2015 at 7:50 PM, Deneau, Tom tom.den...@amd.com wrote:
-----Original Message-----
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Thursday, May 28, 2015 6:18 PM
To: Deneau, Tom
Cc: ceph-devel
Subject: Re: rados bench throughput with no disk or network activity
On
On Thu, May 28, 2015 at 4:50 PM, Deneau, Tom tom.den...@amd.com wrote:
-----Original Message-----
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Thursday, May 28, 2015 6:18 PM
To: Deneau, Tom
Cc: ceph-devel
Subject: Re: rados bench throughput with no disk or network activity
On
Robert LeBlanc rob...@leblancnet.us writes:
At first I thought this was to allow the OSDs to stub the location of
the real data after a CRUSH map change so that it didn't have to
relocate the data right away (or at all) and reduce the number
I updated the release notes and sent a pull request.
Thanks,
Yehuda
----- Original Message -----
From: Loic Dachary l...@dachary.org
To: Yehuda Sadeh yeh...@redhat.com
Cc: Ceph Development ceph-devel@vger.kernel.org
Sent: Wednesday, May 27, 2015 2:35:35 PM
Subject: rgw release notes for
Hi!
I have been watching changes in ceph -s output for a while and
noticed that in this line:
3324/7888981 objects degraded (0.042%); 1995972/7888981 objects
misplaced (25.301%)
the misplaced object count drops constantly, but the degraded object
count drops only occasionally.
Quick googling did
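(For anyone wanting to watch the same counters, something like this works; the grep pattern is just illustrative:)
$ ceph -w | grep -E 'degraded|misplaced'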
Would it be possible to backport this as well to 0.80.11:
http://tracker.ceph.com/issues/9792#change-46498
And I think this commit would be the easiest to backport:
https://github.com/ceph/ceph/commit/6b982e4cc00f9f201d7fbffa0282f8f3295f2309
This way we add a simple safeguard against pool
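If it helps, the mechanical part of such a backport is just (branch name assumed; -x records the original commit id):
$ git checkout -b firefly-9792 ceph/firefly
$ git cherry-pick -x 6b982e4cc00f9f201d7fbffa0282f8f3295f2309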
Hi Ken,
The commits with a + are found in v0.94.1.2 and are not in hammer:
$ git rev-parse ceph/hammer
eb69cf758eb25e7ac71e36c754b9b959edb67cee
$ git --no-pager cherry -v ceph/hammer tags/v0.94.1.2
- 46e85f72a26186963836ee9071b93417ebc41af2 Dencoder should never be built with
tcmalloc
-
Hi,
This morning I'll schedule a job with priority 50, assuming nobody will get mad
at me for using such a low priority because the associated bug fix blocks the
release of v0.94.2 (http://tracker.ceph.com/issues/11546) and also assuming
no one uses a priority lower than 100 just to get in
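Concretely, that means something like this (the suite and branch flags here are only illustrative):
$ teuthology-suite --suite rados --ceph hammer --priority 50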
Gregory Farnum g...@gregs42.com writes:
On Wed, May 27, 2015 at 1:39 AM, Marcel Lauhoff m...@irq0.org wrote:
Hi,
I wrote a prototype for an OSD-based object stub feature. An object stub
being an object with its data moved /elsewhere/. I hope to get some
feedback, especially whether I'm on
On 28/05/2015 06:37, Gregory Farnum wrote:
On Tue, May 12, 2015 at 5:42 PM, Josh Durgin jdur...@redhat.com wrote:
It will need some metadata regarding positions in the journal. These
could be stored as omap values in a 'journal header' object in a
replicated pool, for rbd perhaps the same
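As a rough illustration of that layout (the object and key names here are invented, not the actual design):
$ rados -p rbd setomapval journal_header.1234 commit_position 'entry=42'
$ rados -p rbd listomapvals journal_header.1234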
This patch does write-back throttling for cache tiering, similar to
what the Linux kernel does for page-cache write-back. The motivation
and original idea were proposed by Nick Fisk, detailed in his email
below. In our implementation, we introduce a parameter
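(For context, this extends the existing cache-tiering flush knobs, e.g. the dirty ratio on the cache pool; the pool name below is assumed:)
$ ceph osd pool set hot-pool cache_target_dirty_ratio 0.4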
On Thu, May 28, 2015 at 3:42 AM, John Spray john.sp...@redhat.com wrote:
On 28/05/2015 06:37, Gregory Farnum wrote:
On Tue, May 12, 2015 at 5:42 PM, Josh Durgin jdur...@redhat.com wrote:
Parallelism
^^^^^^^^^^^
Mirroring many images is embarrassingly parallel. A simple unit of
work is an
I've been trying to follow this and I've been lost many times, but I'd
like to put in my $0.02. In my mind any multi-tenant system that
relies on the client to specify UID/GID as authoritative is
fundamentally flawed. The server needs to be
I usually use:
priority [90,100]
for point-release validations.
This is a good thread to bring up for open approval/disapproval.
Does that sound reasonable?
Thx
YuriW
----- Original Message -----
From: Loic Dachary l...@dachary.org
To: Ceph Development ceph-devel@vger.kernel.org
Sent:
Let me see if I understand this... Your idea is to have a progress bar
that shows (active+clean + active+scrub + active+deep-scrub) / pgs and
then estimates the time remaining?
So if PGs are split, the numbers change and the progress bar goes
backwards, is
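(Computing that fraction by hand today would be something like the following, assuming jq is available and that pgs_brief dumps a JSON array of {pgid, state} objects:)
$ ceph pg dump pgs_brief -f json |
    jq '([.[] | select(.state | startswith("active+clean"))] | length) / length'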
I've got some more tests running right now. Once those are done, I'll
find a couple of tests that showed extreme differences and gather some
perf data for them.
I've been trying to debug this issue, and it's why I haven't pushed that
epel-testing package to stable yet. Your email helps illuminate a bit more
of what is happening. Today I unpushed -0.5 from epel-testing in Bodhi
because it's clear -0.5 doesn't resolve the situation and just makes
For the record:
[28.05 18:09] sjusthm loicd: you have my ack
On 22/05/2015 21:55, Loic Dachary wrote:
Hi Sam,
The next firefly release as found at
https://github.com/ceph/ceph/tree/firefly
(68211f695941ee128eb9a7fd0d80b615c0ded6cf) passed the rados suite
Hi Dan,
Thanks for the pointer. I've added Milan Broz as a watcher to that ticket,
since Milan's working on SELinux integration with Ceph.
- Ken
----- Original Message -----
From: Dan van der Ster d...@vanderster.com
To: Ken Dreyer kdre...@redhat.com
Cc: ceph-devel@vger.kernel.org
Sent:
Hi,
On Thu, 28 May 2015, Ugis wrote:
Hi!
I have been watching changes in ceph -s output for a while and
noticed that in this line:
3324/7888981 objects degraded (0.042%); 1995972/7888981 objects
misplaced (25.301%)
the misplaced object count drops constantly, but the degraded object
On Thu, May 28, 2015 at 9:20 AM, Robert LeBlanc rob...@leblancnet.us wrote:
I've been trying to follow this and I've been lost many times, but I'd
like to put in my $0.02. In my mind any multi-tenant system that
relies on the client to specify
Hi Li,
Reviewing this now! See comments on the PR.
Just FYI, the current convention is to send kernel patches to the list,
and to use github for the userland stuff. Emails like this are helpful to
get people's attention but not strictly needed--we'll notice the PR either
way!
Thanks-
sage
On 28/05/2015 17:41, Robert LeBlanc wrote:
Let me see if I understand this... Your idea is to have a progress bar
that shows (active+clean + active+scrub + active+deep-scrub) / pgs and
then estimates the time remaining?
Not quite: it's not about doing
On Thu, 28 May 2015, Gregory Farnum wrote:
On Thu, May 28, 2015 at 9:20 AM, Robert LeBlanc rob...@leblancnet.us wrote:
I've been trying to follow this and I've been lost many times, but I'd
like to put in my $0.02. In my mind any
I think if there is a way to store the tenant ID with the UID/GID,
then a lot of the challenges could be resolved.
On Thu, May 28, 2015 at 10:42 AM, Gregory Farnum wrote:
Right, this is basically what we're planning. The sticky bits are about