Hi Alex,
Thanks! So far we've gotten a report that async messenger was a little
slower than simple messenger, but not this bad! I imagine Greg will
have lots of questions. :)
Mark
On 04/28/2015 03:36 AM, Alexandre DERUMIER wrote:
Hi,
here is a small bench of 4k randread, simple messenger vs
Hi,
The default cache size is 32M, the tcmalloc documentation is outdated.
As Somnath mentioned, the tcmalloc fix is to make the env variable effective,
as without this fix the library does not use the exported value of
'TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES'.
The degenerated case is hit less frequently
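The env-variable fix described above can be illustrated with a hedged sketch; the 128MB value is an assumption for illustration, not taken from the thread:

```shell
# Hedged sketch: raising the tcmalloc thread-cache ceiling via the
# environment. The 128MB value is illustrative; without the fix
# discussed above, tcmalloc silently ignores this exported value.
export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=$((128 * 1024 * 1024))
echo "thread cache ceiling: $TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES bytes"
```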
Hi Guys,
Sage has been furiously working away at fixing bugs in newstore and
improving performance. Specifically we've been focused on write
performance, as newstore was lagging filestore by quite a bit
previously. A lot of work has gone into implementing libaio behind the
scenes and as a
Hi,
The make check bot needs to go through all passed pull requests before it can
start working again. This is because of
https://issues.jenkins-ci.org/browse/JENKINS-22752 and we just need to wait it
out. It should not last more than 24h.
Cheers
--
Loïc Dachary, Artisan Logiciel Libre
Nothing official, though roughly from memory:
~1.7GB/s and something crazy like 100K IOPS for the SSD.
~150MB/s and ~125-150 IOPS for the spinning disk.
Mark
On 04/28/2015 07:00 PM, Venkateswara Rao Jujjuri wrote:
Thanks for sharing; the newstore numbers look a lot better;
wondering if we have
Thanks for sharing; the newstore numbers look a lot better;
wondering if we have any baseline numbers to put things into perspective,
like what is it on XFS or on librados?
JV
On Tue, Apr 28, 2015 at 4:25 PM, Mark Nelson mnel...@redhat.com wrote:
Hi Guys,
Sage has been furiously working away at
On 04/28/2015 06:25 PM, Mark Nelson wrote:
Hi Guys,
Sage has been furiously working away at fixing bugs in newstore and
improving performance. Specifically we've been focused on write
performance, as newstore was lagging filestore by quite a bit
previously. A lot of work has gone into
Hi,
I am trying to measure 4k RW performance on Newstore, and I am not
anywhere close to the numbers you are getting!
Could you share your ceph.conf for these tests?
I'll also try to help test newstore with my SSD cluster.
What is used for the benchmark? rados bench?
any command line to
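For reference, a typical rados bench invocation for this kind of test might look like the following; the pool name, runtime, and queue depth are assumptions, not taken from the thread:

```shell
# Hypothetical rados bench run: 60s of 4k writes, then random reads,
# against an assumed pool named "testpool". Requires a running ceph
# cluster; --no-cleanup keeps the objects for the read phase.
rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
rados bench -p testpool 60 rand -t 16
rados -p testpool cleanup
```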
Hi Mark,
I am trying to measure 4k RW performance on Newstore, and I am not
anywhere close to the numbers you are getting!
Could you share your ceph.conf for these tests?
-Neo
On Tue, Apr 28, 2015 at 5:07 PM, Mark Nelson mnel...@redhat.com wrote:
Nothing official, though roughly from memory:
The following patches made over v4.0 of the upstream linux kernel add
functionality needed to support HA LIO using RBD for the backing store.
More info can be found here
https://wiki.ceph.com/Planning/Blueprints/Hammer/Clustered_SCSI_target_using_RBD
These patches just add the ceph/rbd code. I
From: Mike Christie micha...@cs.wisc.edu
This stores the LIO PR info in the rbd header. Other
clients (LIO nodes) are notified when the data changes,
so they can update their info.
I added a sysfs file to test it here temporarily. I will
remove it in the final version. The final patches will
From: Mike Christie micha...@cs.wisc.edu
This patch breaks out the code that allocates buffers and executes
the request from rbd_obj_method_sync, so future functions in this
patchset can use it.
It also adds support for OBJ_OP_WRITE requests, which is needed for
the locking functions which will
From: Mike Christie micha...@cs.wisc.edu
This adds support for rados's notify call. It is being used to notify
scsi PR and TMF watchers that the scsi pr info has changed, or that
we want to sync up on TMF execution (currently only LUN_RESET).
I did not add support for the notify2 recv buffer as
From: Mike Christie micha...@cs.wisc.edu
This syncs the ceph_osd_op struct with the current version of ceph
where the watch struct has been updated to support more ops and
the notify-ack support has been broken out of the watch struct.
Ceph commits
1a82cc3926fc7bc4cfbdd2fd4dfee8660d5107a1
From: Mike Christie micha...@cs.wisc.edu
The lock info code wants to print this in its debug code, so this
patch just exports it.
Signed-off-by: Mike Christie micha...@cs.wisc.edu
---
net/ceph/ceph_strings.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/ceph/ceph_strings.c
From: Mike Christie micha...@cs.wisc.edu
This patch adds helpers to encode/decode the starting blocks of the
locking code. They are the equivalent of ENCODE_START and
DECODE_START_LEGACY_COMPAT_LEN in the userspace ceph code.
Signed-off-by: Mike Christie micha...@cs.wisc.edu
---
From: Mike Christie micha...@cs.wisc.edu
This patch adds support for proto version 1 of watch-notify,
so drivers like rbd can be sent a buffer with information like
the notify operation being performed.
Signed-off-by: Mike Christie micha...@cs.wisc.edu
---
drivers/block/rbd.c | 3
From: Mike Christie micha...@cs.wisc.edu
This adds watch-notify header 2 and 3 support, so we can
get a return_code from those operations.
Signed-off-by: Mike Christie micha...@cs.wisc.edu
---
drivers/block/rbd.c | 5 +++--
include/linux/ceph/osd_client.h | 10 ++
From: Mike Christie micha...@cs.wisc.edu
Add support for userspace ceph DECODE_START.
Signed-off-by: Mike Christie micha...@cs.wisc.edu
---
include/linux/ceph/decode.h | 23 +++
1 file changed, 23 insertions(+)
diff --git a/include/linux/ceph/decode.h
[adding ceph-devel]
Okay, I see the problem. This seems to be unrelated to the giant ->
hammer move... it's a result of the tiering changes you made:
The following:
ceph osd tier add img images --force-nonempty
ceph osd tier cache-mode images forward
ceph osd
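For context, the commands quoted above belong to a cache-tier setup that typically looks like the sequence below; the set-overlay step is an assumption about the usual configuration (the truncated last line above is left as-is), using the pool names from the mail:

```shell
# Hedged sketch of a typical cache-tier setup with the pool names
# from the mail (img = base pool, images = cache pool). Requires a
# running ceph cluster.
ceph osd tier add img images --force-nonempty
ceph osd tier cache-mode images forward
ceph osd tier set-overlay img images
```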
From: Mike Christie micha...@cs.wisc.edu
This patch adds support for rados lock, unlock and break lock.
This will be used to sync up scsi pr info manipulation and
TMF execution.
It also adds support for listing locks and getting lock info, but
that and the sysfs support is only for debugging. I do not
Hi Tuomas,
I've pushed an updated wip-hammer-snaps branch. Can you please try it?
The build will appear here
http://gitbuilder.ceph.com/ceph-deb-trusty-x86_64-basic/sha1/08bf531331afd5e2eb514067f72afda11bcde286
(or a similar url; adjust for your distro).
Thanks!
sage
On Tue, 28
On Mon, 27 Apr 2015 23:48:34 -0700 Ming Lin m...@kernel.org wrote:
From: Kent Overstreet kent.overstr...@gmail.com
As generic_make_request() is now able to handle arbitrarily sized bios,
it's no longer necessary for each individual block driver to define its
own -merge_bvec_fn() callback.
Hi Milosz,
The OSD op worker threads which handle requests are part of a sharded
thread pool. We observed that the distribution across these shards was a bit
uneven. Most of the new/delete calls were originating from the Index Manager
code in the read path when we last checked.
Thanks,
Viju
Hi,
here is a small bench of 4k randread, simple messenger vs async messenger.
This is with 2 OSDs, and 15 fio jobs on a single rbd volume.
simple messenger: 345k iops
async messenger: 139k iops
Regards,
Alexandre
simple messenger
---
^Cbs: 15 (f=15): [r(15)] [0.0% done]
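The fio output above is truncated; a job resembling the test described (15 jobs, 4k randread on a single rbd volume) can be sketched as follows, where the engine options, pool, and image names are assumptions rather than the poster's actual command:

```shell
# Hedged sketch of the benchmark described above: 15 fio jobs doing
# 4k random reads against one rbd image. Pool/image names and runtime
# are assumptions; requires fio built with the rbd engine.
fio --name=randread-4k --ioengine=rbd --pool=rbd --rbdname=testimg \
    --rw=randread --bs=4k --iodepth=32 --numjobs=15 --direct=1 \
    --runtime=60 --time_based --group_reporting
```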
Thanks for your benchmark!
Yeah, async messenger hits a bottleneck when it meets high concurrency
and high iops, because of an annoying lock related to the crc calculation.
Right now my main job is focused on passing the qa tests for async messenger.
If no tests fail, I will solve this problem.
On Tue, Apr
On Tue, Apr 28, 2015 at 9:58 AM, Chaitanya Huilgol
chaitanya.huil...@sandisk.com wrote:
Hi,
The default cache size is 32M, the tcmalloc documentation is outdated.
As Somnath mentioned, the tcmalloc fix is to make the env variable effective,
as without this fix the library does not use the exported
On Mon, Apr 27, 2015 at 11:48:34PM -0700, Ming Lin wrote:
As generic_make_request() is now able to handle arbitrarily sized bios,
it's no longer necessary for each individual block driver to define its
own -merge_bvec_fn() callback. Remove every invocation completely.
merge_bvec_fn is also
From: Kent Overstreet kent.overstr...@gmail.com
As generic_make_request() is now able to handle arbitrarily sized bios,
it's no longer necessary for each individual block driver to define its
own -merge_bvec_fn() callback. Remove every invocation completely.
Cc: Jens Axboe ax...@kernel.dk
Cc: