Re: Upgrading from 0.61.5 to 0.61.6 ended in disaster

2013-07-29 Thread peter
On 2013-07-25 17:46, Sage Weil wrote: On Thu, 25 Jul 2013, pe...@2force.nl wrote: We did not upgrade from bobtail to cuttlefish and are still seeing this issue. I posted this on the ceph-users mailing list, but I missed this thread (sorry!) so I didn't know. That's interesting; a bobtail

Read ahead affects Ceph read performance significantly

2013-07-29 Thread Li Wang
We performed an Iozone read test on a 32-node HPC server. Regarding the hardware of each node: the CPU is very powerful, as is the network, with a bandwidth of 1.5 GB/s, and there is 64 GB of memory; the I/O is relatively slow, with throughput measured locally by ‘dd’ at around 70 MB/s. We configured a Ceph cluster
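
A minimal sketch of the kind of client-side readahead tuning under discussion, assuming a kernel CephFS mount; the monitor address, mount point, file name, and sizes are placeholders rather than values from the thread:

  # Mount CephFS with a larger client readahead window (rasize,
  # in bytes), then measure sequential read throughput.
  mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o rasize=4194304
  dd if=/mnt/ceph/testfile of=/dev/null bs=4M count=1024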

Re: ObjectContext PGRegistry API

2013-07-29 Thread Loic Dachary
Hi Sam, Sorry to bother you with this again. Would you have time to quickly review this proposal ? I'm sure you'll have comments that will require work on my part ;-) Cheers On 22/07/2013 22:33, Loic Dachary wrote: Hi Sam, Here is the proposed ObjectContext PGRegistry API:

Re: Read ahead affects Ceph read performance significantly

2013-07-29 Thread Andrey Korolyov
Wow, very glad to hear that. I tried the regular FS tunable and there was almost no effect on the regular test, so I thought that reads could not be improved at all in this direction. On Mon, Jul 29, 2013 at 2:24 PM, Li Wang liw...@ubuntukylin.com wrote: We performed an Iozone read test on a

Re: Read ahead affects Ceph read performance significantly

2013-07-29 Thread Mark Nelson
On 07/29/2013 05:24 AM, Li Wang wrote: We performed an Iozone read test on a 32-node HPC server. Regarding the hardware of each node: the CPU is very powerful, as is the network, with a bandwidth of 1.5 GB/s, and there is 64 GB of memory; the I/O is relatively slow, with throughput measured locally by ‘dd’ at around

ceph branch status

2013-07-29 Thread ceph branch robot
-- All Branches --
Dan Mick dan.m...@inktank.com
  2012-12-18 12:27:36 -0800 wip-rbd-striping
  2013-07-16 23:00:06 -0700 wip-5634
  2013-07-18 16:34:23 -0700 wip-daemon
David Zafman david.zaf...@inktank.com
  2013-01-28 20:26:34 -0800 wip-wireshark-zafman

[PATCH] mds: remove waiting lock before merging with neighbours

2013-07-29 Thread David Disseldorp
CephFS currently deadlocks under CTDB's ping_pong POSIX locking test when it is run concurrently on multiple nodes. The deadlock is caused by the failed removal of a waiting_locks entry when the waiting lock is merged with an existing lock, e.g.: Initial MDS state (two clients, same file): held_locks --
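
A sketch of the reproducer described, assuming CephFS mounted at /mnt/cephfs on each node; ping_pong is CTDB's POSIX byte-range lock tester, conventionally invoked with one more than the number of participating nodes as the lock count:

  # Run concurrently from two nodes against the same file; the
  # contended lock waits are what expose the merge bug.
  # node1:
  ping_pong /mnt/cephfs/lockfile 3
  # node2:
  ping_pong /mnt/cephfs/lockfile 3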

mds.0 crashed with 0.61.7

2013-07-29 Thread Andreas Friedrich
Hello, my Ceph test cluster runs fine with 0.61.4. I have removed all data and set up a new cluster with 0.61.7 using the same configuration (see ceph.conf). After 'mkcephfs -c /etc/ceph/ceph.conf -a' and '/etc/init.d/ceph -a start', mds.0 crashed: -1 2013-07-29 17:02:57.626886

Re: mds.0 crashed with 0.61.7

2013-07-29 Thread Andreas Bluemle
Hi Sage, as this crash has been around for a while already: do you know whether it happened in ceph version 0.61.4 as well? Best Regards Andreas Bluemle On Mon, 29 Jul 2013 08:47:00 -0700 (PDT) Sage Weil s...@inktank.com wrote: Hi Andreas, Can you reproduce this (from mkcephfs

Re: mds.0 crashed with 0.61.7

2013-07-29 Thread Sage Weil
On Mon, 29 Jul 2013, Andreas Bluemle wrote: Hi Sage, as this crash has been around for a while already: do you know whether it happened in ceph version 0.61.4 as well? Pretty sure, yeah. sage Best Regards Andreas Bluemle On Mon, 29 Jul 2013 08:47:00 -0700 (PDT) Sage

Re: Anyone in NYC next week?

2013-07-29 Thread Milosz Tanski
Just signed up, looking forward to it. On Thu, Jul 25, 2013 at 5:18 PM, Travis Rhoden trho...@gmail.com wrote: I'm already signed up. Looking forward to it! - Travis On Thu, Jul 25, 2013 at 12:19 AM, Sage Weil s...@inktank.com wrote: I'm going to be in NYC next week at our first Ceph Day

blueprint: object redirects

2013-07-29 Thread Sage Weil
I have a draft blueprint up for supporting object redirects, a basic building block that will be used for tiering in RADOS. The basic idea is that an object may have symlink-like semantics indicating that it is stored in another pool: maybe something slower, or erasure-coded, or whatever.

krbd live resize

2013-07-29 Thread Loic Dachary
Hi, This works:
  lvcreate --name tmp --size 10G all
    Logical volume tmp created
  mkfs.ext4 /dev/all/tmp
  mount /dev/all/tmp /mnt
  blockdev --getsize64 /dev/all/tmp
    10737418240
  lvextend -L+1G /dev/all/tmp
    Extending logical volume tmp to 11,00 GiB
    Logical volume tmp successfully resized
  blockdev
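
The krbd question is whether the same live-resize behaviour holds for a mapped rbd image. A hedged sketch of the equivalent test, with pool and image names as placeholders:

  rbd create --size 10240 rbd/tmp   # 10 GB image (--size is in MB)
  rbd map rbd/tmp                   # appears as e.g. /dev/rbd0
  blockdev --getsize64 /dev/rbd0
  rbd resize --size 11264 rbd/tmp   # grow to 11 GB on the server side
  blockdev --getsize64 /dev/rbd0    # does the mapped device see the new size?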

blueprint: rgw multi-region disaster recovery, second phase

2013-07-29 Thread Yehuda Sadeh
I've created a blueprint for the second phase of the multiregion / DR project: http://wiki.ceph.com/index.php?title=01Planning/02Blueprints/Emperor/RGW_Multi-region_%2F%2F_Disaster_Recovery_(phase_2) While a huge amount of work was done for Dumpling, there's still some work that needs to be done

blueprint: RADOS Object Temperature Monitoring

2013-07-29 Thread Samuel Just
I've created a blueprint for a RADOS level mechanism for discovering cold objects. http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/RADOS_Object_Temperature_Monitoring Such a mechanism will be crucial to future tiering implementations. If you are interested in discussing this at CDS, please

blueprint: rgw quota

2013-07-29 Thread Yehuda Sadeh
I created a blueprint for rgw bucket quotas. The document itself is mainly a placeholder and a reference to the older bucket quota work that we prepared for Dumpling. If you're interested in discussing this at CDS, please add yourself as an interested party to the blueprint.

Fwd: [ceph-users] Small fix for ceph.spec

2013-07-29 Thread Patrick McGarry
-- Forwarded message -- From: Erik Logtenberg e...@logtenberg.eu Date: Mon, Jul 29, 2013 at 7:07 PM Subject: [ceph-users] Small fix for ceph.spec To: ceph-us...@lists.ceph.com Hi, The spec file used for building RPMs is missing a build-time dependency on snappy-devel. Please see
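
A sketch of the fix as reported, assuming the new entry sits alongside the existing BuildRequires lines in ceph.spec:

  # Inspect the existing build dependencies:
  grep 'BuildRequires' ceph.spec
  # The missing entry would be a line of the form:
  #   BuildRequires:  snappy-devel
  # after which the rpm build should find the snappy headers:
  rpmbuild -ba ceph.spec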

blueprint: librgw

2013-07-29 Thread Yehuda Sadeh
I created another blueprint for defining and creating a library for rgw. This is also just a placeholder and a pointer to an older blueprint. http://wiki.ceph.com/01Planning/02Blueprints/Emperor/librgw If you wish to discuss this at CDS, please add yourself to the blueprint. Yehuda

blueprint: rgw bucket scalability

2013-07-29 Thread Yehuda Sadeh
I created a new blueprint that discusses rgw bucket scalability: http://wiki.ceph.com/01Planning/02Blueprints/Emperor/rgw:_bucket_index_scalability As was brought up on the mailing list recently, the bucket index may become a contention point. There are a few suggestions on how to solve / mitigate

blueprint: rgw multitenancy

2013-07-29 Thread Yehuda Sadeh
I created a new blueprint that discusses rgw multitenancy. Rgw multitenancy adds a level of hierarchy on top of users and their data, providing the ability to separate users into different organizational entities.

Re: Re: question about striped_read

2013-07-29 Thread majianpeng
On Mon, Jul 29, 2013 at 11:00 AM, majianpeng majianp...@gmail.com wrote: [snip] I don't think the later was_short check can handle the hole case. For the hole case, we should try reading the next stripe object instead of returning. How about the patch below? Hi Yan, I used this demo to test the hole
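
A sketch of the kind of hole test being described, assuming a CephFS mount at /mnt/cephfs with the default 4 MB object size; paths and sizes are placeholders:

  # Write stripe object 0, skip object 1 to leave a hole, write
  # object 2, then read back across the hole in a single pass.
  dd if=/dev/urandom of=/mnt/cephfs/holefile bs=4M count=1
  dd if=/dev/urandom of=/mnt/cephfs/holefile bs=4M count=1 seek=2 conv=notrunc
  dd if=/mnt/cephfs/holefile of=/dev/null bs=4M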

Re: Re: question about striped_read

2013-07-29 Thread Yan, Zheng
On Tue, Jul 30, 2013 at 10:08 AM, majianpeng majianp...@gmail.com wrote: On Mon, Jul 29, 2013 at 11:00 AM, majianpeng majianp...@gmail.com wrote: [snip] I don't think the later was_short check can handle the hole case. For the hole case, we should try reading the next stripe object instead of returning.