On 2013-07-25 17:46, Sage Weil wrote:
On Thu, 25 Jul 2013, pe...@2force.nl wrote:
We did not upgrade from bobtail to cuttlefish and are still seeing
this issue.
I posted this on the ceph-users mailing list and I missed this thread
(sorry!), so I didn't know.
That's interesting; a bobtail
We performed an IOzone read test on a 32-node HPC server. Regarding the
hardware of each node: the CPU is very powerful, as is the network, with a
bandwidth of 1.5 GB/s, and there is 64 GB of memory. The IO is relatively slow,
with a throughput measured locally by ‘dd’ of around 70 MB/s. We configured a
Ceph cluster
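For reference, a minimal sketch of the kind of local throughput check and IOzone run described above; the paths, sizes, and IOzone options are assumptions, not the exact commands used in this test.

# hypothetical local disk throughput check with dd, using direct IO to bypass the page cache
dd if=/dev/zero of=/data/ddtest bs=1M count=1024 oflag=direct
dd if=/data/ddtest of=/dev/null bs=1M iflag=direct

# hypothetical IOzone write+read throughput run: 32 threads, 1 GB files, 1 MB records
iozone -i 0 -i 1 -s 1g -r 1m -t 32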
Hi Sam,
Sorry to bother you with this again. Would you have time to quickly review this
proposal? I'm sure you'll have comments that will require work on my part ;-)
Cheers
On 22/07/2013 22:33, Loic Dachary wrote:
Hi Sam,
Here is the proposed ObjectContext PGRegistry API:
Wow, very glad to hear that. I tried with the regular FS tunable and
there was almost no effect on the regular test, so I thought that
reads could not be improved at all in this direction.
On Mon, Jul 29, 2013 at 2:24 PM, Li Wang liw...@ubuntukylin.com wrote:
We performed an IOzone read test on a
On 07/29/2013 05:24 AM, Li Wang wrote:
We performed an IOzone read test on a 32-node HPC server. Regarding the
hardware of each node: the CPU is very powerful, as is the network, with a
bandwidth of 1.5 GB/s, and there is 64 GB of memory. The IO is relatively slow,
with a throughput measured locally by ‘dd’ of around
-- All Branches --
Dan Mick dan.m...@inktank.com
2012-12-18 12:27:36 -0800 wip-rbd-striping
2013-07-16 23:00:06 -0700 wip-5634
2013-07-18 16:34:23 -0700 wip-daemon
David Zafman david.zaf...@inktank.com
2013-01-28 20:26:34 -0800 wip-wireshark-zafman
CephFS currently deadlocks under CTDB's ping_pong POSIX locking test
when it is run concurrently on multiple nodes.
The deadlock is caused by the failed removal of a waiting_locks entry when
the waiting lock is merged with an existing lock, e.g.:
Initial MDS state (two clients, same file):
held_locks --
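For anyone trying to reproduce this, a rough sketch of how CTDB's ping_pong is usually run concurrently; the mount point is an assumption, and the lock count follows the usual rule of thumb of number of nodes plus one.

# run simultaneously on each participating node against the same CephFS file
# (hypothetical path; 3 = two nodes + 1)
ping_pong /mnt/cephfs/ping_pong.dat 3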
Hello,
my Ceph test cluster runs fine with 0.61.4.
I have removed all data and have set up a new cluster with 0.61.7 using
the same configuration (see ceph.conf).
After
mkcephfs -c /etc/ceph/ceph.conf -a
/etc/init.d/ceph -a start
the mds.0 crashed:
-1 2013-07-29 17:02:57.626886
Hi Sage,
as this crash has been around for a while already: do you
know whether it happened in ceph version 0.61.4 as well?
Best Regards
Andreas Bluemle
On Mon, 29 Jul 2013 08:47:00 -0700 (PDT)
Sage Weil s...@inktank.com wrote:
Hi Andreas,
Can you reproduce this (from mkcephfs
On Mon, 29 Jul 2013, Andreas Bluemle wrote:
Hi Sage,
as this crash has been around for a while already: do you
know whether it happened in ceph version 0.61.4 as well?
Pretty sure, yeah.
sage
Best Regards
Andreas Bluemle
On Mon, 29 Jul 2013 08:47:00 -0700 (PDT)
Sage
Just signed up, looking forward to it.
On Thu, Jul 25, 2013 at 5:18 PM, Travis Rhoden trho...@gmail.com wrote:
I'm already signed up. Looking forward to it!
- Travis
On Thu, Jul 25, 2013 at 12:19 AM, Sage Weil s...@inktank.com wrote:
I'm going to be in NYC next week at our first Ceph Day
I have a draft blueprint up for supporting object redirects, a
basic building block that will be used for tiering in RADOS. The basic
idea is that an object may have symlink-like semantics indicating that it
is stored in another pool... maybe something slower, or erasure-coded, or
whatever.
Hi,
This works:
lvcreate --name tmp --size 10G all
Logical volume tmp created
mkfs.ext4 /dev/all/tmp
mount /dev/all/tmp /mnt
blockdev --getsize64 /dev/all/tmp
10737418240
lvextend -L+1G /dev/all/tmp
Extending logical volume tmp to 11,00 GiB
Logical volume tmp successfully resized
blockdev
I've created a blueprint for the second phase of the multiregion / DR project:
http://wiki.ceph.com/index.php?title=01Planning/02Blueprints/Emperor/RGW_Multi-region_%2F%2F_Disaster_Recovery_(phase_2)
While a huge amount of work was done for Dumpling, there's still some
work that needs to be done
I've created a blueprint for a RADOS level mechanism for discovering
cold objects.
http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/RADOS_Object_Temperature_Monitoring
Such a mechanism will be crucial to future tiering implementations.
If you are interested in discussing this at CDS, please
I created a blueprint for rgw bucket quotas. The document itself is
mainly a placeholder and a reference to the older bucket quota that we
prepared for Dumpling. If you're interested in discussing this at CDS,
please add yourself as an interested party to the blueprint.
-- Forwarded message --
From: Erik Logtenberg e...@logtenberg.eu
Date: Mon, Jul 29, 2013 at 7:07 PM
Subject: [ceph-users] Small fix for ceph.spec
To: ceph-us...@lists.ceph.com
Hi,
The spec file used for building RPMs is missing a build-time dependency on
snappy-devel. Please see
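A sketch of the kind of change being asked for, assuming the new dependency sits alongside the existing BuildRequires lines in ceph.spec; until the spec is fixed, a local rebuild needs the package installed by hand.

# hypothetical manual workaround before rebuilding the RPMs
yum install -y snappy-devel
rpmbuild -ba ceph.spec

# the spec itself would then gain a line such as:
#   BuildRequires:  snappy-devel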
I created another blueprint for defining and creating a library for
rgw. This is also just a placeholder and a pointer at an older
blueprint.
http://wiki.ceph.com/01Planning/02Blueprints/Emperor/librgw
If you wish to discuss this at CDS, please add yourself to the blueprint.
Yehuda
I created a new blueprint that discusses rgw bucket scalability:
http://wiki.ceph.com/01Planning/02Blueprints/Emperor/rgw:_bucket_index_scalability
As was brought up on the mailing list recently, the bucket index may serve
as a contention point. There are a few suggestions for how to solve /
mitigate
I created a new blueprint that discusses rgw multitenancy. The rgw
multitenancy work defines a level of hierarchy on top of users and their
data, which provides the ability to separate users into different
organizational entities.
On Mon, Jul 29, 2013 at 11:00 AM, majianpeng majianp...@gmail.com wrote:
[snip]
I don't think the later was_short handling can cover the hole case. For the
hole case, we should try reading the next stripe object instead of returning.
How about the patch below?
Hi Yan,
I used this demo to test the hole case
On Tue, Jul 30, 2013 at 10:08 AM, majianpeng majianp...@gmail.com wrote:
On Mon, Jul 29, 2013 at 11:00 AM, majianpeng majianp...@gmail.com wrote:
[snip]
I don't think the later was_short handling can cover the hole case. For the
hole case, we should try reading the next stripe object instead of returning.
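Not the demo or patch from this thread, but a hypothetical way to exercise a read that crosses a hole at a stripe-object boundary; the path and sizes are assumptions, chosen so the hole covers the default 4 MB object size.

# create a sparse file whose first 4 MB object is entirely a hole, then read across it
dd if=/dev/zero of=/mnt/cephfs/holetest bs=1M count=1 seek=8 conv=notrunc
dd if=/mnt/cephfs/holetest of=/dev/null bs=4M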