As an alternative, would dm-cache or bcache on the client VM be an option?
Mark
On 06/16/2015 06:43 AM, Mike wrote:
16.06.2015 12:54, Sebastien Han wrote:
This is not possible at the moment, but long ago a BP was registered to allow
multiple backend functionality. (can't find it anymore)
Unfor
On 06/16/2015 03:48 PM, GuangYang wrote:
Thanks Sage for the quick response.
It is on Firefly v0.80.4.
When putting with *rados* directly, the xattrs can be inlined. The
problem comes to light when using radosgw, since we have a bunch of metadata to
keep via xattrs, including:
rgw
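For anyone who wants to poke at the xattr behavior outside of radosgw, here's a minimal python-rados sketch of attaching and reading back an xattr directly on a rados object (the pool, object, and xattr names are just placeholders, and the size is only meant to sit near a typical inline limit):

    import rados

    # Connect with the default config; adjust conffile / pool name for your cluster.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('testpool')
        try:
            # Write a small object and attach an xattr roughly the size of the
            # metadata rgw keeps (ACLs, manifest, etc.), then read it back.
            ioctx.write_full('testobj', b'payload')
            ioctx.set_xattr('testobj', 'user.test.meta', b'x' * 512)
            print(len(ioctx.get_xattr('testobj', 'user.test.meta')))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()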
8AM PST as usual! Discussion topics for this week include:
- Alexandre's tcmalloc / memory pressure QEMU performance tests.
- Continuation of SimpleMessenger fastpath discussion?
Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To joi
8AM PST as usual!
Please feel free to add discussion topics to the etherpad!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/browser
To join with Lync:
https://b
ave infos about this
Fujitsu presenting on bufferlist tuning
- about 2X savings in overall CPU Time with new code.
- Original Mail -
From: "Robert LeBlanc"
To: "Mark Nelson"
Cc: "ceph-devel"
Sent: Thursday, June 25, 2015 17:58:10
Subject: Re: 06/24/2015 Weekly Ce
It would be fantastic if folks decided to work on this and got it pushed
upstream into fio proper. :D
Mark
On 06/30/2015 04:19 PM, James (Fei) Liu-SSI wrote:
Hi Casey,
Thanks a lot.
Regards,
James
-Original Message-
From: Casey Bodley [mailto:cbod...@gmail.com]
Sent: Tuesda
8AM PST as usual! Also join us later today and tomorrow for the Ceph
Developer Summit where we will be discussing many different performance
related blueprints!
Please feel free to add discussion topics to the etherpad! Current
topics include new cache tiering promote probability investigati
Hi Guys,
Looks like I got my timezone conversion off and we are overlapping with
CDS today, so let's cancel and push this off to next week. Enjoy CDS
everyone!
Mark
On 07/01/2015 09:20 AM, Mark Nelson wrote:
8AM PST as usual! Also join us later today and tomorrow for the Ceph
Deve
8AM PST as usual! Let's discuss the topics that we didn't cover last
week due to CDS: New cache tiering promote probability investigation and
wbthrottle auto-tuning. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meetin
Hi Konstantin,
I'm definitely interested in looking at your tools and seeing if we can
merge them into cbt! One of the things we lack right now in cbt is any
kind of real openstack integration. Right now CBT basically just
assumes you've already launched VMs and specified them as clients in
FWIW,
It would be very interesting to see the output of:
https://github.com/ceph/cbt/blob/master/tools/readpgdump.py
if you see something that looks anomalous. I'd like to make sure that
I'm detecting issues like this.
Mark
On 07/09/2015 06:03 PM, Samuel Just wrote:
I've seen some odd teu
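For those without cbt handy, a rough sketch of the kind of per-OSD PG count a tool like readpgdump.py reports can be pulled straight out of 'ceph pg dump' (this is not the actual cbt tool, and the JSON field names are assumed from that era's output format):

    import json
    import subprocess
    from collections import Counter

    # Grab the PG map as JSON and count how many PGs map to each OSD.
    raw = subprocess.check_output(['ceph', 'pg', 'dump', '--format', 'json'])
    pg_stats = json.loads(raw)['pg_stats']

    counts = Counter(osd for pg in pg_stats for osd in pg['up'])
    mean = sum(counts.values()) / float(len(counts))
    for osd, n in sorted(counts.items()):
        print('osd.%d: %d PGs (%.0f%% of mean)' % (osd, n, 100.0 * n / mean))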
Absolutely!
On 07/10/2015 03:00 AM, Konstantin Danilov wrote:
Can I propose a topic for meeting by adding it to etherpad?
On Wed, Jul 8, 2015 at 5:30 PM, Mark Nelson <mnel...@redhat.com> wrote:
8AM PST as usual! Let's discuss the topics that we didn't cover
have two roots (default with 1056 osds and ssd_default with 30 osds).
>
> It seems that our distribution is slightly better than expected in your code.
>
> Thanks.
>
> On Mon, Jul 13, 2015 at 6:20 PM, Mark Nelson <mnel...@redhat.com> wrote:
>>
>> F
On 07/14/2015 03:11 AM, Konstantin Danilov wrote:
Mark,
is the Wednesday performance meeting a good place for discussion, or
do we need a separate one?
On Mon, Jul 13, 2015 at 6:16 PM, Mark Nelson wrote:
Hi Konstantin,
I'm definitely interested in looking at your tools and seeing if we can
8AM PST as usual! Topics today include proposed additions to cbt by
Mirantis! Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/
8AM PST as usual! Topics today include a new ceph_test_rados benchmark
being added to CBT. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.c
Hi John,
I had similar thoughts on the benchmarking side, which is why I started
writing cbt a couple years ago. I needed the ability to quickly spin up
clusters and run benchmarks on arbitrary sets of hardware. The outcome
isn't perfect, but it's been extremely useful for running benchmarks
On 07/23/2015 07:37 AM, John Spray wrote:
On 23/07/15 12:56, Mark Nelson wrote:
I had similar thoughts on the benchmarking side, which is why I
started writing cbt a couple years ago. I needed the ability to
quickly spin up clusters and run benchmarks on arbitrary sets of
hardware. The
Hi Konstantin,
It might be best to move this to the cbt mailing list for discussion so
that we don't end up filling up ceph-devel. Would you mind re-posting
there?
Mark
On 07/24/2015 07:21 AM, Konstantin Danilov wrote:
Hi all,
This is a BP/summary of the changes in cbt that we discussed on the previous w
On 07/23/2015 06:12 PM, Travis Rhoden wrote:
Hi Everyone,
I’m working on ways to improve Ceph installation with ceph-deploy, and a common
hurdle we have hit involves dependency issues between ceph.com hosted RPM
repos, and packages within EPEL. For a while we were able to manage this with
t
Haha, yes. I still use yum for everything. :D
Mark
On 07/24/2015 08:18 PM, Shinobu Kinjo wrote:
If there's really no better way around this, I think we need to
communicate to the Yum/DNF team(s) what the problem is and that we
need to come up with some better way to control the pr
My first reading of the topic was "Citerias (ie a project named
Citerias) to become a Ceph project". It wasn't until I re-read it more
closely that I realized it was criteria. :)
On 07/28/2015 12:51 PM, Joao Eduardo Luis wrote:
On 07/28/2015 07:59 AM, Loic Dachary wrote:
The title sound eve
Just here to provide moral support. Go CMake go! :)
Mark
On 07/30/2015 02:01 PM, Ali Maredia wrote:
After discussing with several other Ceph developers and Sage, I wanted
to start a discussion about making CMake the primary build system for Ceph.
CMake works just fine as it is (make -j4 on mas
8AM PST as usual (that's in 13 minutes folks!) No specific topics for
this week, please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044
On 08/05/2015 04:26 PM, Sage Weil wrote:
Today I learned that syncfs(2) does an O(n) search of the superblock's
inode list searching for dirty items. I've always assumed that it was
only traversing dirty inodes (e.g., a list of dirty inodes), but that
appears not to be the case, even on the la
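For anyone who wants to measure this themselves, here's a small ctypes sketch (assumes Linux with glibc >= 2.14; not part of Ceph) that times a per-filesystem syncfs(2) on a given mount point:

    import ctypes
    import os
    import sys
    import time

    libc = ctypes.CDLL('libc.so.6', use_errno=True)

    def timed_syncfs(path):
        # syncfs(2) flushes only the filesystem containing 'path',
        # unlike sync(2) which hits every mounted filesystem.
        fd = os.open(path, os.O_RDONLY)
        try:
            start = time.time()
            if libc.syncfs(fd) != 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err))
            return time.time() - start
        finally:
            os.close(fd)

    if __name__ == '__main__':
        mount = sys.argv[1] if len(sys.argv) > 1 else '/'
        print('syncfs(%s) took %.3fs' % (mount, timed_syncfs(mount)))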
Hi Srikanth,
Can you make a ticket on tracker.ceph.com for this? We'd like to not
lose track of it.
Thanks!
Mark
On 08/05/2015 07:01 PM, Srikanth Madugundi wrote:
Hi,
After upgrading to Hammer and moving from apache to civetweb, we
started seeing high PUT latency on the order of 2 sec for
Meeting is canceled this week due to the Ceph Hackathon. See you next
week guys!
Hi Everyone,
One of the goals at the Ceph Hackathon last week was to examine how to
improve Ceph Small IO performance. Jian Zhang presented findings
showing a dramatic improvement in small random IO performance when Ceph
is used with jemalloc. His results build upon Sandisk's original
findi
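Before comparing numbers it's worth double checking which allocator each OSD actually ended up with; a quick sketch that looks for the allocator library in each ceph-osd process's memory map (Linux only, matching on the process name):

    import os

    def allocator_for(pid):
        # Whichever malloc implementation is mapped into the process wins.
        with open('/proc/%s/maps' % pid) as f:
            maps = f.read()
        if 'jemalloc' in maps:
            return 'jemalloc'
        if 'tcmalloc' in maps:
            return 'tcmalloc'
        return 'glibc malloc (or statically linked)'

    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/comm' % pid) as f:
                if f.read().strip() == 'ceph-osd':
                    print('pid %s: %s' % (pid, allocator_for(pid)))
        except (IOError, OSError):
            pass  # process exited or permission denied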
-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, August 18, 2015 9:46 PM
To: ceph-devel
Subject: Ceph Hackathon: More Memory Allocator Testing
Hi Everyone,
One of the goals at the Ceph Hackathon last week was to examine how to improve
C
k we
probably need to run the tests.
Thanks & Regards
Somnath
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, August 18, 2015 9:46 PM
To: ceph-devel
Subject: Ceph Hackathon: More Memory Alloc
Nope! So in this case it's just server side.
On 08/19/2015 01:33 AM, Stefan Priebe - Profihost AG wrote:
Thanks for sharing. Do those tests use jemalloc for fio too? Otherwise
librbd on the client side is running with tcmalloc again.
Stefan
On 19.08.2015 at 06:45, Mark Nelson wrote
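For the client side the usual trick is to preload jemalloc into fio itself; a hedged sketch of driving that from python (the library path is distro-specific and the job file name is just an example):

    import os
    import subprocess

    env = dict(os.environ)
    # Path differs by distro, e.g. /usr/lib/x86_64-linux-gnu/libjemalloc.so.1 on Debian.
    env['LD_PRELOAD'] = '/usr/lib64/libjemalloc.so.1'

    # Run an existing fio job (librbd engine) so the client-side allocator matches.
    subprocess.check_call(['fio', 'rbd-randwrite.fio'], env=env)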
On 08/19/2015 07:36 AM, Dałek, Piotr wrote:
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Wednesday, August 19, 2015 2:17 PM
The RSS memory usage in the report is per OSD I guess (really?). It
can'
8AM PST. Let's follow up with what was discussed at the hackathon and
look over the tcmalloc/jemalloc data from the new community cluster!
Also, discuss potentially moving the meeting 15 minutes later to not
conflict with the daily core standup. Please feel free to add your own
topic as well!
th Roy
Sent: Wednesday, August 19, 2015 10:30 AM
To: Alexandre DERUMIER
Cc: Mark Nelson; ceph-devel
Subject: RE: Ceph Hackathon: More Memory Allocator Testing
Yes, it should be 1 per OSD...
There is no doubt that TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES is relative to the
number of threads running..
Hi Guys,
About a month ago we were going through the process of trying to figure
out how to replace some of the hardware in the community laboratory that
runs all of the nightly Teuthology tests. Given a limited budget to
replace the existing nodes, we wanted to understand how the current QA
dware for one reason or the other.
My 3.3cts ;-)
On 19/08/2015 21:46, Mark Nelson wrote:
Hi Guys,
About a month ago we were going through the process of trying to figure out how
to replace some of the hardware in the community laboratory that runs all of
the nightly Teuthology tests. Given a li
This is a really neat idea Loic! Do you think at some point it will
make sense to just set up the sepia lab as cloud infrastructure that lets
folks run teuthology on it in the same fashion?
Mark
On 08/24/2015 06:51 AM, Loic Dachary wrote:
Hi Sam,
Maybe we can start an experiment to enable co
8AM PST Still! Discussion topics include kernel RBD client readahead
issues, and memory allocator testing under recovery. Please feel free
to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join
8AM PST as usual! No set topics for this morning. Please feel free to
add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/browser
To join with Lync:
On 09/03/2015 11:23 AM, Robert LeBlanc wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Somnath,
I'm having a hard time with your slide deck. Am I understanding
correctly that the default Hammer install was performed on SSDs with
co-located journals, but the optimized code was performed o
On 09/08/2015 02:19 PM, Sage Weil wrote:
On Tue, Sep 8, 2015 at 9:58 PM, Haomai Wang wrote:
Hi Sage,
I noticed your post on the rocksdb page about making rocksdb aware of
short-lived key/value pairs.
I think it would be great if one key/value db implementation could support
different key types with different
Excellent investigation Alexandre! Have you noticed any performance
difference with tp=never?
Mark
On 09/08/2015 06:33 PM, Alexandre DERUMIER wrote:
I have done small benchmark with tcmalloc and jemalloc, transparent
hugepage=always|never.
for tcmalloc, there is no difference.
but for jemal
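To reproduce that kind of comparison, the active THP mode and a daemon's RSS can be read straight out of /sys and /proc; a small sketch (standard Linux paths, RSS reported in kB):

    import sys

    def thp_mode():
        # The active mode is the bracketed entry, e.g. "always madvise [never]".
        with open('/sys/kernel/mm/transparent_hugepage/enabled') as f:
            return f.read().split('[')[1].split(']')[0]

    def rss_kb(pid):
        with open('/proc/%d/status' % pid) as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])  # value is in kB
        return 0

    if __name__ == '__main__':
        pid = int(sys.argv[1])  # e.g. the pid of one ceph-osd
        print('THP: %s, RSS: %d kB' % (thp_mode(), rss_kb(pid)))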
Also, for what it's worth, I did analysis during recovery (though not
with different transparent hugepage settings). You can see it on slide
#13 here:
http://nhm.ceph.com/mark_nelson_ceph_tech_talk.odp
On 09/08/2015 06:49 PM, Mark Nelson wrote:
Excellent investigation Alexandre! Hav
8AM PST as usual! Discussion topics include: delayed hashmaps for
memory usage reduction, transparent huge pages, newstore testing, async
messenger testing, and others. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Mee
8AM PST as usual! No set discussion topics this week. Please feel free
to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/browser
To join with Ly
8AM PST as usual! Discussion topics include an update on transparent
huge pages testing and I think Ben would like to talk a bit about CBT
PRs. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.co
On 09/23/2015 01:25 PM, Gregory Farnum wrote:
On Wed, Sep 23, 2015 at 11:19 AM, Sage Weil wrote:
On Wed, 23 Sep 2015, Deneau, Tom wrote:
Hi all --
Looking for guidance with perf counters...
I am trying to see whether the perf counters can tell me anything about the
following discrepancy
I
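As a starting point, the counters can be pulled out of the daemon's admin socket as JSON; a minimal sketch (run on the node hosting the daemon; the counter names you care about will differ by version and workload):

    import json
    import subprocess

    def perf_dump(daemon):
        # Query all perf counters for a daemon via its local admin socket.
        out = subprocess.check_output(['ceph', 'daemon', daemon, 'perf', 'dump'])
        return json.loads(out)

    counters = perf_dump('osd.0')
    osd = counters.get('osd', {})
    for name in ('op_r', 'op_w', 'op_rw'):
        print('%s: %s' % (name, osd.get(name)))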
FWIW, we've got some 40GbE Intel cards in the community performance
cluster on a Mellanox 40GbE switch that appear (knock on wood) to be
running fine with 3.10.0-229.7.2.el7.x86_64. We did get feedback from
Intel that older drivers might cause problems though.
Here's ifconfig from one of the
e meetings public.
Mark
On 09/23/2015 11:44 AM, Alexandre DERUMIER wrote:
Hi Mark,
can you post the video records of previous meetings ?
Thanks
Alexandre
- Mail original -
De: "Mark Nelson"
À: "ceph-devel"
Envoyé: Mercredi 23 Septembre 2015 15:51:21
Objet: 09/2
Hi Everyone,
A while back Alexandre Derumier posted some test results looking at how
transparent huge pages can reduce memory usage with jemalloc. I went
back and ran a number of new tests on the community performance cluster
to verify his findings and also look at how performance and cpu usa
On 09/29/2015 12:59 AM, Dałek, Piotr wrote:
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, September 29, 2015 1:34 AM
Hi Everyone,
A while back Alexandre Derumier posted some test results
Yay! Very much looking forward to checking this out Somnath! I'll let
you know how it goes.
Mark
On 09/29/2015 01:14 PM, Somnath Roy wrote:
Hi Mark,
I have sent out the following pull request for my write path changes.
https://github.com/ceph/ceph/pull/6112
Meanwhile, if you want to give i
8AM PST as usual! Discussion topics include Somnath's writepath PR and
more updates on transparent huge pages testing and async messenger
testing. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans
On 10/01/2015 10:32 AM, Curley, Matthew wrote:
We've been trying to reproduce the allocator performance impact on 4K random
reads seen in the Hackathon (and more recent tests). At this point though,
we're not seeing any significant difference between tcmalloc and jemalloc so
we're looking for
Hi Guys,
Given all of the recent data on how different memory allocator
configurations improve SimpleMessenger performance (and the effect of
memory allocators and transparent hugepages on RSS memory usage), I
thought I'd run some tests looking at how AsyncMessenger does in
comparison. We spoke
015 09:56 PM, Haomai Wang wrote:
resend
On Tue, Oct 13, 2015 at 10:56 AM, Haomai Wang wrote:
COOL
Interesting that async messenger consumes more memory than simple; in my
mind I always thought async should use less memory. I will take a look at this
On Tue, Oct 13, 2015 at 12:50 AM, Mark Nelson
On 10/12/2015 11:12 PM, Gregory Farnum wrote:
On Mon, Oct 12, 2015 at 9:50 AM, Mark Nelson wrote:
Hi Guys,
Given all of the recent data on how different memory allocator
configurations improve SimpleMessenger performance (and the effect of memory
allocators and transparent hugepages on RSS
8AM PST as usual (ie in 10 minutes). Discussion topics include the
release of the async messenger results and some initial
wip-newstore-frags testing. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluej
same performance as async but using much less
memory?
-Xiaoxi
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mark Nelson
Sent: Tuesday, October 13, 2015 9:03 PM
To: Haomai Wang
Cc: ceph-devel; ceph-us...@lists.ceph.com
Subject: Re: [ceph-users] In
r.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Curley, Matthew
Sent: Thursday, October 01, 2015 1:09 PM
To: Mark Nelson; ceph-devel@vger.kernel.org
Subject: RE: Reproducing allocator performance differences
Thanks a bunch for the feedback Mark. I'll push this back to the gu
On 10/20/2015 07:30 AM, Sage Weil wrote:
On Tue, 20 Oct 2015, Chen, Xiaoxi wrote:
+1, nowadays K-V DBs care more about very small key-value pairs, say
several bytes to a few KB, but in the SSD case we only care about 4KB or
8KB. In this way, NVMKV is a good design, and it seems some of the SSD
vendors are
CA 95134
T: +1 408 801 7030| M: +1 408 780 6416
allen.samu...@sandisk.com
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, October 20, 2015 6:20 AM
To: Sage Weil ; Chen, Xiaoxi
Cc: James (Fei) Liu-SSI
On 10/21/2015 05:06 AM, Allen Samuels wrote:
I agree that moving newStore to raw block is going to be a significant
development effort. But the current scheme of using a KV store combined with a
normal file system is always going to be problematic (FileStore or NewStore).
This is caused by the
On 10/21/2015 06:24 AM, Ric Wheeler wrote:
On 10/21/2015 06:06 AM, Allen Samuels wrote:
I agree that moving newStore to raw block is going to be a significant
development effort. But the current scheme of using a KV store
combined with a normal file system is always going to be problematic
(
On 10/21/2015 10:51 AM, Ric Wheeler wrote:
On 10/21/2015 10:14 AM, Mark Nelson wrote:
On 10/21/2015 06:24 AM, Ric Wheeler wrote:
On 10/21/2015 06:06 AM, Allen Samuels wrote:
I agree that moving newStore to raw block is going to be a significant
development effort. But the current scheme
Hi Paul,
Sorry for the late reply, Sage will be out and I imagine a number of
other folks will be too. I don't have any topics on the agenda, so
let's just cancel and wait until next week.
Thanks,
Mark
On 10/21/2015 11:55 AM, Paul Von-Stamwitz wrote:
Hi Mark,
In light of OpenStack Summit,
Various folks are going to be away at OpenStack Summit and I don't have
anything on the agenda yet, so let's cancel this week. See you next week!
Mark
8AM PST as usual! Discussion topics include newstore changes, outdated
benchmarks (doh!) and anything else folks want to talk about. See you
there!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Brows
Hi Robert,
It definitely is exciting I think. Keep up the good work! :)
Mark
On 11/05/2015 09:14 AM, Robert LeBlanc wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Thanks Gregory,
People are most likely busy and haven't had time to digest this and I
may be expecting more excitement fr
8AM PST as usual! (ie in 10 minutes, sorry for the late notice)
Discussion topics include newstore block and anything else folks want to
talk about. See you there!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
T
Hi Stephen,
That's about what I expected to see, other than the write performance
drop with more shards. We clearly still have some room for improvement.
Good job doing the testing!
Mark
On 11/11/2015 02:57 PM, Blinick, Stephen L wrote:
Sorry about the microphone issues in the performance
whatever you did, it appears to work. :)
On 11/11/2015 05:44 PM, Somnath Roy wrote:
Sorry for the spam , having some issues with devl
8AM PST as usual! I won't be there as I'm out giving a talk at SC15.
Sage will be there though, so go talk to him about newstore-block!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bl
This is actually the direction newstore is heading with the
newstore_min_alloc_size so that you can configure when to write into the
rocksdb WAL. By default it's set to 512k, but for SSDs we will almost
certainly want to go smaller.
On 11/19/2015 05:29 AM, Mike Almateia wrote:
Hello.
By now
FWIW, if you've got collectl per-process logs, you might look for major
pagefaults associated with the osd processes. I've seen process
swapping cause heartbeat timeouts in the past. Not to say that's the
issue, but worth confirming it's not happening.
Mark
On 11/23/2015 01:03 PM, Robert Le
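If collectl logs aren't available, the same check can be done against the kernel's own accounting; a small sketch that reads the cumulative major fault count for a pid from /proc/<pid>/stat (majflt is field 12 in the standard layout):

    import sys

    def major_faults(pid):
        # See proc(5): field 2 (comm) may contain spaces, so split after the
        # closing paren; majflt is then the 10th remaining field.
        with open('/proc/%d/stat' % pid) as f:
            data = f.read()
        fields = data[data.rfind(')') + 2:].split()
        return int(fields[9])

    if __name__ == '__main__':
        pid = int(sys.argv[1])  # e.g. a ceph-osd pid
        print('pid %d: %d major page faults' % (pid, major_faults(pid)))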
ching solution - Intel
CAS with NVMe PCIe SSD.
I would like to present this approach and show some results.
Maciej
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Wednesday, November 18, 2015 4:05 PM
To: ceph-
proxied to take advantage of
larger read cache.
Nick
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: 25 November 2015 20:41
To: Nick Fisk
Cc: 'ceph-users' ; ceph-devel@vger.kernel.org;
'Mark
On 12/02/2015 12:23 PM, Gregory Farnum wrote:
On Tue, Dec 1, 2015 at 5:23 AM, Vimal wrote:
Hello,
This mail is to discuss the feature request at
http://tracker.ceph.com/issues/13578.
If done, such a tool should help point out several mis-configurations that
may cause problems in a cluster l
On 01/24/2014 06:29 AM, Maciej Bonin wrote:
Gregory Farnum writes:
On Mon, Aug 19, 2013 at 3:09 PM, Mostowiec Dominik
wrote:
Hi,
Yes, it definitely can as scrubbing takes locks on the PG, which will
prevent reads or writes while the
message is being processed (which will involve the rgw
On 02/12/2014 02:58 PM, Sebastien Han wrote:
Hi guys,
First implementation of the ceph-brag is ready.
We have a public repo available here, so you can try it out.
https://github.com/enovance/ceph-brag
However I don’t have any idea on how to submit this to github.com/ceph.
Can someone help me on th
On 03/02/2014 08:07 PM, Shu, Xinxin wrote:
Hi all,
This patch adds rocksdb support for ceph and enables rocksdb for the omap directory.
The rocksdb source code can be obtained from the link. To use rocksdb, the C++11 standard
must be enabled; gcc version >= 4.7 is required for C++11 support. Rocksdb
can
On 03/18/2014 03:05 PM, Yaron Haviv wrote:
I'm happy to share test results we ran in the lab with Matt's latest
XioMessenger code, which implements Ceph messaging over the Accelio RDMA library.
Results look pretty encouraging, demonstrating a *20x* performance boost.
Below is a table comparing XioMes
On 03/20/2014 05:49 AM, Andreas Joachim Peters wrote:
Hi,
I did some Firefly ceph-0.77-900.gce9bfb8 testing of EC/Tiering deploying 64
OSDs with in-memory filesystems (RapidDisk with ext4) on a single 256 GB box.
The raw write performance of this box is ~3 GB/s for all and ~450 MB/s per OSD.
I
e 4M case.
As time goes on we'll be able to optimize small IO performance with EC,
but there's a lot more processing involved so I suspect it's always
going to be slower (especially for reads!) than simple replication when
latency is critical.
Cheers Andreas.
From: Mark Nelson [mark.nel...@inktank.com]
Sent: 20 March 2014 14:09
To: Andreas Joachi
On 04/09/2014 05:05 AM, Haomai Wang wrote:
Hi all,
Hi Haomai!
I would like to share some ideas about how to improve performance on
ceph with SSDs. Nothing too precise.
Aha, that's ok, but I'm going to pester you with lots of questions below. ;)
Our SSD is 500GB and each OSD owns an SSD (jo
For what it's worth, I've been able to achieve up to around 120MB/s with
btrfs before things fragment.
Mark
On 04/25/2014 03:59 PM, Xing wrote:
Hi Gregory,
Thanks very much for your quick reply. When I started to look into Ceph,
Bobtail was the latest stable release and that was why I picked
? I used btrfs in my experiments as well.
Thanks,
Xing
On 04/25/2014 03:36 PM, Mark Nelson wrote:
For what it's worth, I've been able to achieve up to around 120MB/s
with btrfs before things fragment.
Mark
On 04/25/2014 03:59 PM, Xing wrote:
Hi Gregory,
Thanks very much for your quick
On 04/30/2014 03:21 PM, Gandalf Corvotempesta wrote:
2014-04-30 14:18 GMT+02:00 Sage Weil :
Today we are announcing some very big news: Red Hat is acquiring Inktank.
Great news.
Any chance to get native InfiniBand support in ceph like in GlusterFS?
Check out the xio work that the linuxbox/
Have I mentioned how exciting that is?
Mark
On 04/30/2014 04:14 PM, Samuel Just wrote:
Cool. Looks like we read the same guide on the autotools bit :P. I'm
focusing on instrumenting the OSD op life cycle.
-Sam
On Wed, Apr 30, 2014 at 6:19 AM, Danny Al-Gaaf wrote:
Am 30.04.2014 04:21, schri
On 04/30/2014 05:05 PM, Gandalf Corvotempesta wrote:
2014-04-30 22:27 GMT+02:00 Mark Nelson :
Check out the xio work that the linuxbox/mellanox folks are working on.
Matt Benjamin has posted quite a bit of info to the list recently!
Is that usable ?
Usable is such a vague word. I imagine
On 04/30/2014 05:33 PM, Gandalf Corvotempesta wrote:
2014-05-01 0:20 GMT+02:00 Matt W. Benjamin :
Hi,
Sure, that's planned for integration in Giant (see Blueprints).
Great. Any ETA? Firefly was planned for February :)
At least on the plus side you can download the code whenever you want,
On 05/07/2014 02:54 PM, Milosz Tanski wrote:
On Wed, May 7, 2014 at 3:32 PM, Sage Weil wrote:
On Wed, 7 May 2014, Allen Samuels wrote:
Ok, now I think I understand. Essentially, you have a write-ahead log +
lazy application of the log to the backend + code that correctly deals
with the RAW haz
it's in large
chunks.
On Wed, May 7, 2014 at 4:00 PM, Mark Nelson wrote:
On 05/07/2014 02:54 PM, Milosz Tanski wrote:
On Wed, May 7, 2014 at 3:32 PM, Sage Weil wrote:
On Wed, 7 May 2014, Allen Samuels wrote:
Ok, now I think I understand. Essentially, you have a write-ahead log +
lazy appl
On 05/13/2014 04:08 PM, Matt W. Benjamin wrote:
Hi Ceph Devs,
I've pushed two Ceph+Accelio branches, xio-firefly and xio-firefly-cmake to our
public ceph repository https://github.com/linuxbox2/linuxbox-ceph.git .
These branches are pulled up to the HEAD of ceph/firefly, and also have
improvem
On 05/21/2014 07:54 AM, Shu, Xinxin wrote:
Hi Sage,
I will add the rocksdb submodule into the makefile. Currently we want to run
full performance tests on the key-value db backend, both leveldb and rocksdb,
and then optimize rocksdb performance.
I'm definitely interested in any performance tests
Hi Guys,
FWIW, the test suite I ran through was DBT3 on mariadb using a KVM
virtual machine, rbd cache, and Ceph dumpling. What I saw was that in
some tests performance was reasonably good given the replication level,
but in other cases it was slow (at least relative to a locally attached
di
On 05/21/2014 10:50 AM, Mike Dawson wrote:
Haomai,
Thanks for finding this!
Yes agreed, this looks very exciting. :D
Sage,
We have a client that runs an io intensive, closed-source software
package that seems to issue overzealous flushes which may benefit from
this patch (or the other met
On 05/26/2014 07:05 AM, Wang, Zhiqiang wrote:
Hi all,
Hi Zhiqiang!
The problem you describe below is something we've seen internally at
Inktank as well. I've got a couple of comments / suggestions.
I'm experiencing a problem when running ceph with cache tiering. Can someone
help to take
YAY!
On 05/30/2014 05:04 PM, Patrick McGarry wrote:
Hey cephers,
Sorry to push this announcement so late on a Friday but...
Calamari has arrived!
The source code bits have been flipped, the ticket tracker has been
moved, and we have even given you a little bit of background from both
a techni