Hi There
The basic setup I'm trying to get is a backend to a Hypervisor cluster, so that
auto-failover and live migration work. The main thing is that we have a number
of datacenters with a gigabit interconnect that is not always 100% reliable. In
the event of a failure we want all the virtual
Hi,
Recent tests on my test rack with a 20G IB (IPoIB, 64k MTU, default
CUBIC, CFQ, LSI SAS 2108 w/ wb cache) interconnect show quite
fantastic performance - on both reads and writes, with Ceph completely
utilizing all disk bandwidth, as high as 0.9 of the theoretical limit of
the sum of all bandwidths bearing
Hi Guys,
Just wanted to let you know we've published a short introductory article
on the ceph blog looking at write performance on a couple of different
RAID/SAS controllers configured in different ways. Hopefully you guys
find it useful! We'll likely be publishing more articles in the
On Tue, 09 Oct 2012 13:57:09 -0700 Alex Elder el...@inktank.com wrote:
This series includes updates for two patches posted previously.
-Alex
Greetings,
We're gearing up to test v0.52 (specifically the RBD stuff) on our
cluster. After reading this
On Wed, 10 Oct 2012, Andrey Korolyov wrote:
Hi,
Recent tests on my test rack with a 20G IB (IPoIB, 64k MTU, default
CUBIC, CFQ, LSI SAS 2108 w/ wb cache) interconnect show quite
fantastic performance - on both reads and writes, with Ceph completely
utilizing all disk bandwidth, as high as 0.9 of
On 10/10/2012 08:55 AM, Cláudio Martins wrote:
On Tue, 09 Oct 2012 13:57:09 -0700 Alex Elder el...@inktank.com wrote:
This series includes updates for two patches posted previously.
-Alex
Greetings,
We're gearing up to test v0.52 (specifically
On 10/10/2012 09:23 AM, Sage Weil wrote:
On Wed, 10 Oct 2012, Andrey Korolyov wrote:
Hi,
Recent tests on my test rack with a 20G IB (IPoIB, 64k MTU, default
CUBIC, CFQ, LSI SAS 2108 w/ wb cache) interconnect show quite
fantastic performance - on both reads and writes, with Ceph completely
utilizing
On Wed, 10 Oct 2012, James Horner wrote:
Hi There
The basic setup I'm trying to get is a backend to a Hypervisor cluster,
so that auto-failover and live migration work. The main thing is that
we have a number of datacenters with a gigabit interconnect that is not
always 100% reliable. In
On 10/09/2012 08:26 PM, Alex Elder wrote:
On 09/11/2012 02:17 PM, Alex Elder wrote:
On 09/06/2012 06:30 AM, Guangliang Zhao wrote:
The bio_pair allocated in bio_chain_clone would not be freed;
this will cause a memory leak. It could actually be freed only
after being released three times, because the
Hi Hemant -
I'll be happy to help you with the problem. The first things that would be
helpful for me to know are what version of ceph you are trying to build, what
distribution you are building on, and what your yum repositories are. You can
get the last piece of information with the yum
On 10 October 2012 18:10, Travis Rhoden trho...@gmail.com wrote:
Additionally, 500G - 7.5G != 467G (the number shown as Avail). Why
the huge discrepancy? I don't expect the numbers to add up exactly due
to rounding from kB, MB, GB, etc., but they should be darn close, a la
ext4 keeps some
Damien,
Thanks for solving that part of the mystery. I can't believe I forgot
about that. Thanks for the reminder and the clear explanation.
- Travis
On Wed, Oct 10, 2012 at 1:28 PM, Damien Churchill dam...@gmail.com wrote:
On 10 October 2012 18:10, Travis Rhoden trho...@gmail.com wrote:
On 10/10/2012 10:10 AM, Travis Rhoden wrote:
Hey folks,
I have two questions about determining how much storage has been used
*inside* of an RBD.
First, I'm confused by the output of df. I've created, mapped, and
mounted a 500GB RBD, and see the following:
# df -h /srv/test
Filesystem
I don't know the ext4 internals at all, but filesystems tend to
require allocation tables of various sorts (for managing extents,
etc). 7.5GB out of 500GB seems a little large for that metadata, but
isn't ridiculously so...
On Wed, Oct 10, 2012 at 10:28 AM, Damien Churchill dam...@gmail.com
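(For what it's worth, the numbers line up if ext4's default 5% reserved-block allowance is the missing piece; that is an assumption on my part, since the reserved-blocks figure isn't quoted above. 5% of 500 GB is roughly 25 GB, and 500 - 7.5 - 25 ≈ 467 GB, which matches the Avail column. The reserved percentage can be checked or adjusted with tune2fs.)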
Thanks for the input, Gregory and Josh.
What I am hearing is that this has everything to do with the
filesystem, and nothing to do with the block device on Ceph.
Thanks again,
- Travis
On Wed, Oct 10, 2012 at 1:55 PM, Gregory Farnum g...@inktank.com wrote:
I don't know the ext4 internals at
After applying the patch, we went through 65 successful cluster
reinstalls without encountering the error (previously it would happen
at least every 8-10 reinstalls). Therefore it really looks like this
fixed the issue. Thanks!
On Mon, Oct 8, 2012 at 5:17 PM, Sage Weil s...@inktank.com wrote:
On 10/09/2012 08:26 PM, Alex Elder wrote:
On 09/11/2012 02:17 PM, Alex Elder wrote:
On 09/06/2012 06:30 AM, Guangliang Zhao wrote:
The bio_pair allocated in bio_chain_clone would not be freed;
this will cause a memory leak. It could actually be freed only
after being released three times, because the
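To make the failure mode above concrete, here is a minimal userspace C sketch; it is purely illustrative (not the kernel bio_pair code, and the names are made up), assuming only what the message says: an object whose internal count starts at three is really freed only on the third release, so a path that releases it fewer times, or not at all as in the leak, never frees the memory.

/*
 * Illustrative userspace sketch only; not the kernel bio_pair code.
 * An object created with an internal count of 3 is actually freed
 * only once release() has been called three times.
 */
#include <stdio.h>
#include <stdlib.h>

struct pair {
    int count;                  /* starts at 3, per the "three times release" note */
};

static struct pair *pair_alloc(void)
{
    struct pair *p = malloc(sizeof(*p));
    if (p)
        p->count = 3;
    return p;
}

static void pair_release(struct pair *p)
{
    if (p && --p->count == 0) {
        printf("freed\n");
        free(p);                /* memory goes away only on the final release */
    }
}

int main(void)
{
    struct pair *p = pair_alloc();

    pair_release(p);            /* count drops to 2, still allocated */
    pair_release(p);            /* count drops to 1, still allocated */
    pair_release(p);            /* count hits 0, actually freed here */
    return 0;
}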
On Wed, 10 Oct 2012, Gregory Farnum wrote:
I don't know the ext4 internals at all, but filesystems tend to
require allocation tables of various sorts (for managing extents,
etc). 7.5GB out of 500GB seems a little large for that metadata, but
isn't ridiculously so...
ext3/4 are particularly
Wonderful, thanks!
sage
On Wed, 10 Oct 2012, Nick Bartos wrote:
After applying the patch, we went through 65 successful cluster
reinstalls without encountering the error (previously it would happen
at least every 8-10 reinstalls). Therefore it really looks like this
fixed the issue.
This is needed for latest master:
diff --git a/src/rgw/rgw_rest.cc b/src/rgw/rgw_rest.cc
index 53bbeca..3612a9e 100644
--- a/src/rgw/rgw_rest.cc
+++ b/src/rgw/rgw_rest.cc
@@ -1,4 +1,5 @@
 #include <errno.h>
+#include <limits.h>
 #include "common/Formatter.h"
 #include "common/utf8.h"
to fix:
CXX
I'll apply this; can I assume you have signed off on this patch?
On Wed, Oct 10, 2012 at 2:25 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
This is needed for latest master:
diff --git a/src/rgw/rgw_rest.cc b/src/rgw/rgw_rest.cc
index 53bbeca..3612a9e 100644
--- a/src/rgw/rgw_rest.cc
+++
On Wed, Oct 10, 2012 at 2:29 PM, Yehuda Sadeh yeh...@inktank.com wrote:
I'll apply this; can I assume you have signed off on this patch?
Ahh, yes, sorry.
Signed-off-by: Noah Watkins noahwatk...@gmail.com
On Wed, Oct 10, 2012 at 2:25 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
This is needed for
Laszlo, James:
Changes based on your previous feedback are ready for review. I pushed the
changes here:
git://github.com/noahdesu/ceph.git wip-java-cephfs
Thanks!
- Noah
From 0d8c4dc39f9b8f2e264bb2503c053418ad72b705 Mon Sep 17 00:00:00 2001
From: Noah Watkins noahwatk...@gmail.com
Date:
Hi Noah,
On Wed, 2012-10-10 at 15:00 -0700, Noah Watkins wrote:
Laszlo, James:
Changes based on your previous feedback are ready for review. I pushed the
changes here:
git://github.com/noahdesu/ceph.git wip-java-cephfs
Checking only the diff, as it's 3 am here. It looks quite OK. But
On Wed, Oct 10, 2012 at 5:53 PM, Laszlo Boszormenyi (GCS) g...@debian.hu
wrote:
Hi Noah,
On Wed, 2012-10-10 at 15:00 -0700, Noah Watkins wrote:
Laszlo, James:
Changes based on your previous feedback are ready for review. I pushed the
changes here:
git://github.com/noahdesu/ceph.git
These three patches simplify a few paths through the code
involving read and write requests.
-Alex
[PATCH 1/3] rbd: kill rbd_req_{read,write}()
[PATCH 2/3] rbd: drop rbd_do_op() opcode and flags
[PATCH 3/3] rbd: consolidate rbd_do_op() calls
The two calls to rbd_do_op() from rbd_rq_fn() differ only in the
value passed for the snapshot id and the snapshot context.
For reads the snapshot always comes from the mapping, and for writes
the snapshot id is always CEPH_NOSNAP.
The snapshot context is always null for reads. For writes, the