RE: High-availability testing of ceph

2012-07-31 Thread Eric_YH_Chen
Hi, Josh: Thanks for your reply. However, I had asked a question about the replica setting before: http://www.spinics.net/lists/ceph-devel/msg07346.html If the performance of the rbd device is n MB/s under replica=2, then that means the total I/O throughput on the hard disks is over 3 * n MB/s. Because I

The cluster is not aware that some OSDs have disappeared

2012-07-31 Thread Eric_YH_Chen
Dear All: My Environment: two servers, with 12 hard disks on each server. Version: Ceph 0.48, Kernel: 3.2.0-27. We created a Ceph cluster with 24 OSDs and 3 monitors: osd.0 ~ osd.11 are on server1, osd.12 ~ osd.23 are on server2; mon.0 is on server1, mon.1 is on server2, mon.2 is on server3
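A quick way to check whether the monitors have actually marked the missing OSDs down is to query them directly. The sketch below is illustrative only: it assumes python-rados, a readable /etc/ceph/ceph.conf and client.admin keyring, and a cluster recent enough that the monitors accept JSON-formatted commands (the 0.48 cluster in this thread may need the equivalent 'ceph osd dump --format=json' CLI call instead).

    import json
    import rados

    # Ask the monitors which OSDs they currently consider up/in.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "osd dump", "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b'', timeout=10)
        for osd in json.loads(outbuf)["osds"]:
            state = "up" if osd["up"] else "DOWN"
            print("osd.%d: %s (in=%d)" % (osd["osd"], state, osd["in"]))
    finally:
        cluster.shutdown()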

How to integrate ceph with opendedup.

2012-07-31 Thread ramu
Hi all, I want to integrate ceph with opendedup (sdfs) using java-rados. Please help me with the integration of ceph with opendedup. Thanks, Ramu.

Re: How to integrate ceph with opendedup.

2012-07-31 Thread Wido den Hollander
On 07/31/2012 11:18 AM, ramu wrote: Hi all, I want to integrate ceph with opendedup (sdfs) using java-rados. Please help me with the integration of ceph with opendedup. What is the exact use case for this? I get the point of de-duplication, but having a filesystem running on top of RADOS and not

Cannot start up one of the OSDs

2012-07-31 Thread Eric_YH_Chen
Hi, all: My Environment: two servers, with 12 hard disks on each server. Version: Ceph 0.48, Kernel: 3.2.0-27. We created a Ceph cluster with 24 OSDs and 3 monitors: osd.0 ~ osd.11 are on server1, osd.12 ~ osd.23 are on server2; mon.0 is on server1, mon.1 is on server2, mon.2 is on server3

Re: Ceph Benchmark HowTo

2012-07-31 Thread Mehdi Abaakouk
Hi all, I have updated the how-to here: http://ceph.com/wiki/Benchmark And published the results of my latest tests: http://ceph.com/wiki/Benchmark#First_Example All results are good; my benchmark is clearly limited by my network connection at ~110 MB/s. With the exception of the rest-api bench, the
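For context, ~110 MB/s is essentially the ceiling of a single gigabit link, which backs up the conclusion that this benchmark is network-bound rather than disk-bound. A back-of-the-envelope check (the frame-overhead numbers are generic Ethernet/TCP assumptions, not figures from the wiki page):

    # Rough usable line rate for a single 1 GbE link.
    raw_bytes_per_s = 1e9 / 8                  # 125 MB/s theoretical
    # On-wire frame for a 1500-byte MTU: 1460 bytes of TCP payload out of
    # ~1538 bytes including Ethernet header/FCS, preamble and inter-frame gap.
    payload_fraction = 1460 / 1538.0
    usable = raw_bytes_per_s * payload_fraction
    print("theoretical: %.0f MB/s, usable payload: ~%.0f MB/s"
          % (raw_bytes_per_s / 1e6, usable / 1e6))
    # -> roughly 118 MB/s, so a sustained ~110 MB/s points at the NIC, not the disks.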

About teuthology

2012-07-31 Thread Mehdi Abaakouk
Hi, I have taken a look at teuthology. The automation of all these tests is good, but is there any way to run it against an already installed Ceph cluster? Thanks in advance. Cheers, -- Mehdi Abaakouk for eNovance mail: sil...@sileht.net irc: sileht

Re: About teuthology

2012-07-31 Thread Mark Nelson
On 7/31/12 8:59 AM, Mehdi Abaakouk wrote: Hi, I have taken a look at teuthology. The automation of all these tests is good, but is there any way to run it against an already installed Ceph cluster? Thanks in advance. Cheers, Hi Mehdi, I think a number of the test-related tasks should run

another performance-related thread

2012-07-31 Thread Andrey Korolyov
Hi, I've finally managed to run rbd-related tests on relatively powerful machines, and here is what I got: 1) Reads on an almost fairly balanced cluster (eight nodes) did very well, utilizing almost all disk and network bandwidth (dual gigabit 802.3ad NICs; SATA disks behind an LSI SAS 2108 with WT cache gave me

Re: [EXTERNAL] Re: avoiding false detection of down OSDs

2012-07-31 Thread Jim Schutt
On 07/30/2012 06:24 PM, Gregory Farnum wrote: On Mon, Jul 30, 2012 at 3:47 PM, Jim Schutt jasc...@sandia.gov wrote: Above you mentioned that you are seeing these issues as you scaled out a storage cluster, but none of the solutions you mentioned address scaling. Let's assume your preferred

Re: another performance-related thread

2012-07-31 Thread Mark Nelson
Hi Andrey! On 07/31/2012 10:03 AM, Andrey Korolyov wrote: Hi, I've finally managed to run rbd-related tests on relatively powerful machines, and here is what I got: 1) Reads on an almost fairly balanced cluster (eight nodes) did very well, utilizing almost all disk and network bandwidth (dual gigabit 802.3ad NICs,

Re: another performance-related thread

2012-07-31 Thread Josh Durgin
On 07/31/2012 08:03 AM, Andrey Korolyov wrote: Hi, I've finally managed to run rbd-related tests on relatively powerful machines, and here is what I got: 1) Reads on an almost fairly balanced cluster (eight nodes) did very well, utilizing almost all disk and network bandwidth (dual gigabit 802.3ad NICs, SATA disks

Re: another performance-related thread

2012-07-31 Thread Andrey Korolyov
On 07/31/2012 07:17 PM, Mark Nelson wrote: Hi Andrey! On 07/31/2012 10:03 AM, Andrey Korolyov wrote: Hi, I've finally managed to run rbd-related tests on relatively powerful machines, and here is what I got: 1) Reads on an almost fairly balanced cluster (eight nodes) did very well, utilizing almost

Re: About teuthology

2012-07-31 Thread Mehdi Abaakouk
On Tue, Jul 31, 2012 at 09:27:54AM -0500, Mark Nelson wrote: On 7/31/12 8:59 AM, Mehdi Abaakouk wrote: Hi Mehdi, I think a number of the test-related tasks should run fine without strictly requiring the ceph task. You may have to change binary locations for things like rados, but those

Re: [PATCH v3] rbd: fix the memory leak of bio_chain_clone

2012-07-31 Thread Guangliang Zhao
On Mon, Jul 30, 2012 at 02:54:44PM -0700, Yehuda Sadeh wrote: On Thu, Jul 26, 2012 at 11:20 PM, Guangliang Zhao gz...@suse.com wrote: The bio_pair allocated in bio_chain_clone would not be freed, and this will cause a memory leak. It could actually be freed only after 3 releases, because the

Re: Cannot start up one of the OSDs

2012-07-31 Thread Samuel Just
This crash happens on each startup? -Sam On Tue, Jul 31, 2012 at 2:32 AM, eric_yh_c...@wiwynn.com wrote: Hi, all: My Environment: two servers, with 12 hard disks on each server. Version: Ceph 0.48, Kernel: 3.2.0-27. We created a Ceph cluster with 24 OSDs and 3 monitors: osd.0 ~

Re: About teuthology

2012-07-31 Thread Tommi Virtanen
On Tue, Jul 31, 2012 at 6:59 AM, Mehdi Abaakouk sil...@sileht.net wrote: Hi, I have taken a look at teuthology. The automation of all these tests is good, but is there any way to run it against an already installed Ceph cluster? Thanks in advance. Many of the actual tests being run are

Re: High-availability testing of ceph

2012-07-31 Thread Tommi Virtanen
On Tue, Jul 31, 2012 at 12:31 AM, eric_yh_c...@wiwynn.com wrote: If the performance of the rbd device is n MB/s under replica=2, then that means the total I/O throughput on the hard disks is over 3 * n MB/s, because I think the total number of copies is 3 originally. So, it seems not correct now,
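For reference, in Ceph the replication level is the total number of copies, so replica=2 means each client byte is stored twice, not three times; with the filestore backend each copy is also typically written twice on its OSD (journal plus data disk). A small sketch of that arithmetic (the journaling factor is a general assumption, not something measured in this thread):

    def backend_write_throughput(client_mb_s, replicas=2, journal_factor=2):
        """Aggregate MB/s hitting the disks for a given client write rate.

        replicas:       total number of stored copies (Ceph 'size'), not extra copies.
        journal_factor: 2 if each OSD writes every object to its journal and then
                        to the data disk (the usual filestore behaviour), 1 otherwise.
        """
        return client_mb_s * replicas * journal_factor

    # 100 MB/s of client writes with replica=2 and journaling means roughly
    # 400 MB/s of aggregate disk writes across the cluster.
    print(backend_write_throughput(100))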

Re: [EXTERNAL] Re: avoiding false detection of down OSDs

2012-07-31 Thread Gregory Farnum
On Tue, Jul 31, 2012 at 8:07 AM, Jim Schutt jasc...@sandia.gov wrote: On 07/30/2012 06:24 PM, Gregory Farnum wrote: Hmm. The concern is that if an OSD is stuck on disk swapping then it's going to be just as stuck for the monitors as for the OSDs; they're all using the same network in the basic

[GIT PULL] Ceph changes for 3.6

2012-07-31 Thread Sage Weil
Hi Linus, Please pull the following Ceph changes for 3.6 from git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus There are several trivial conflicts to resolve; sorry! Stephen is carrying fixes for them in linux-next as well. Lots of stuff this time around: *

Re: How to integrate ceph with opendedup.

2012-07-31 Thread Tommi Virtanen
On Tue, Jul 31, 2012 at 2:18 AM, ramu ramu.freesyst...@gmail.com wrote: I want to integrate ceph with opendedup (sdfs) using java-rados. Please help me with the integration of ceph with opendedup. It sounds like you could use radosgw and just use S3ChunkStore. If you really want to implement your own
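If someone does go the custom route rather than radosgw with S3ChunkStore, the core of a dedup chunk store on RADOS is content-addressed put/get of chunks. A minimal illustration of that idea (the thread mentions java-rados; this sketch uses python-rados for brevity, and the pool name and chunk-size limit are assumptions):

    import hashlib
    import rados

    class RadosChunkStore(object):
        """Content-addressed chunk store on RADOS: identical chunks share one object."""

        def __init__(self, pool='dedup', conffile='/etc/ceph/ceph.conf'):
            self.cluster = rados.Rados(conffile=conffile)
            self.cluster.connect()
            self.ioctx = self.cluster.open_ioctx(pool)

        def put(self, data):
            # The chunk's SHA-256 is its object name, so duplicate chunks collapse.
            name = hashlib.sha256(data).hexdigest()
            self.ioctx.write_full(name, data)
            return name

        def get(self, name, max_size=4 * 1024 * 1024):
            return self.ioctx.read(name, length=max_size)

        def close(self):
            self.ioctx.close()
            self.cluster.shutdown()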

Re: [EXTERNAL] Re: avoiding false detection of down OSDs

2012-07-31 Thread Sage Weil
On Tue, 31 Jul 2012, Gregory Farnum wrote: On Tue, Jul 31, 2012 at 8:07 AM, Jim Schutt jasc...@sandia.gov wrote: On 07/30/2012 06:24 PM, Gregory Farnum wrote: Hmm. The concern is that if an OSD is stuck on disk swapping then it's going to be just as stuck for the monitors as for the OSDs;
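The mechanism being discussed boils down to two knobs: an OSD is marked down only after a grace period without heartbeats, and only once enough distinct peers have reported the failure (cf. 'osd heartbeat grace' and 'mon osd min down reporters'). The sketch below is a simplified model of that idea, not the actual monitor code:

    import time

    HEARTBEAT_GRACE = 20.0   # seconds, cf. 'osd heartbeat grace'
    MIN_DOWN_REPORTERS = 2   # cf. 'mon osd min down reporters'

    last_heartbeat = {}      # osd id -> timestamp of the last heartbeat seen
    failure_reports = {}     # osd id -> set of peer ids reporting it down

    def on_heartbeat(osd, now=None):
        # A live heartbeat refreshes the timestamp and clears pending reports.
        last_heartbeat[osd] = time.time() if now is None else now
        failure_reports.pop(osd, None)

    def on_failure_report(osd, reporter):
        failure_reports.setdefault(osd, set()).add(reporter)

    def should_mark_down(osd, now=None):
        # Mark down only if heartbeats have been silent past the grace period
        # AND enough distinct peers have reported the failure.
        now = time.time() if now is None else now
        silent = now - last_heartbeat.get(osd, 0) > HEARTBEAT_GRACE
        enough = len(failure_reports.get(osd, ())) >= MIN_DOWN_REPORTERS
        return silent and enough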

Re: Puppet modules for Ceph

2012-07-31 Thread Tommi Virtanen
On Tue, Jul 24, 2012 at 6:15 AM, loic.dach...@enovance.com wrote: Note that if the puppet client was run on nodeB before it was run on nodeA, all three steps would have been run in sequence instead of being spread over two puppet client invocations. Unfortunately, even that is not enough. The

Re: Puppet modules for Ceph

2012-07-31 Thread Sage Weil
On Tue, 31 Jul 2012, Tommi Virtanen wrote: On Tue, Jul 24, 2012 at 6:15 AM, loic.dach...@enovance.com wrote: Note that if the puppet client was run on nodeB before it was run on nodeA, all three steps would have been run in sequence instead of being spread over two puppet client invocations.

Re: Puppet modules for Ceph

2012-07-31 Thread Tommi Virtanen
On Tue, Jul 31, 2012 at 11:51 AM, Sage Weil s...@inktank.com wrote: It is also possible to feed initial keys to the monitors during the 'mkfs' stage. If the keys can be agreed on somehow beforehand, then they will already be in place when the initial quorum is reached. Not sure if that helps
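On the "agree on keys beforehand" point: a keyring secret can be generated ahead of time and fed to every monitor's mkfs. The snippet below shows one way to generate such a secret without ceph-authtool; the assumed blob layout (little-endian 16-bit type = 1 for AES, 32-bit seconds and nanoseconds, 16-bit key length, then 16 random bytes, base64-encoded) is my understanding of the format and should be double-checked against 'ceph-authtool --gen-print-key' before relying on it.

    import base64
    import os
    import struct
    import time

    def gen_ceph_secret():
        # Assumed layout: le16 type (1 = AES), le32 sec, le32 nsec,
        # le16 key length, then 16 random key bytes -- base64-encoded.
        key = os.urandom(16)
        header = struct.pack('<HIIH', 1, int(time.time()), 0, len(key))
        return base64.b64encode(header + key).decode('ascii')

    print(gen_ceph_secret())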

Re: another performance-related thread

2012-07-31 Thread Andrey Korolyov
On 07/31/2012 07:53 PM, Josh Durgin wrote: On 07/31/2012 08:03 AM, Andrey Korolyov wrote: Hi, I've finally managed to run rbd-related tests on relatively powerful machines, and here is what I got: 1) Reads on an almost fairly balanced cluster (eight nodes) did very well, utilizing almost all disk and

Re: Quick CentOS/RHEL question ...

2012-07-31 Thread 袁冬
Ceph can work well on CentOS 6.2, including file access and RBD, while radosgw is still not covered by our testing. To install Ceph on CentOS 6, the main problem is the difference in package names between CentOS and Ubuntu; 'yum search' may help. And sometimes 'ldconfig' is needed after the

RE: Cannot start up one of the OSDs

2012-07-31 Thread Eric_YH_Chen
Hi, Samuel: It happens every startup, I cannot fix it now. -Original Message- From: Samuel Just [mailto:sam.j...@inktank.com] Sent: Wednesday, August 01, 2012 1:36 AM To: Eric YH Chen/WYHQ/Wiwynn Cc: ceph-devel@vger.kernel.org; Chris YT Huang/WYHQ/Wiwynn; Victor CY Chang/WYHQ/Wiwynn