Re: FreeBSD Building and Testing

2015-12-21 Thread Willem Jan Withagen
On 21-12-2015 01:45, Xinze Chi (信泽) wrote: sorry for the delayed reply. Please try https://github.com/ceph/ceph/commit/ae4a8162eacb606a7f65259c6ac236e144bfef0a. Tried this one first: Testsuite summary for ceph 10.0.1

RBD performance with many children and snapshots

2015-12-21 Thread Wido den Hollander
Hi, While implementing the buildvolfrom method in libvirt for RBD I'm stuck at some point. $ virsh vol-clone --pool myrbdpool image1 image2 This would clone image1 to a new RBD image called 'image2'. The code I've written now does: 1. Create a snapshot called image1@libvirt- 2. Protect the
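For illustration, a minimal sketch of that snapshot/protect/clone flow using the librbd Python bindings; the snapshot name 'libvirt-clone', the conffile path and the feature flag are assumptions, not the actual libvirt code:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('myrbdpool')

    snap = 'libvirt-clone'            # illustrative; the real name is 'image1@libvirt-...'
    with rbd.Image(ioctx, 'image1') as parent:
        parent.create_snap(snap)      # step 1: create the snapshot
        parent.protect_snap(snap)     # step 2: protect it so it can be a clone parent

    # step 3 (implied): clone the protected snapshot into the new image 'image2'
    rbd.RBD().clone(ioctx, 'image1', snap, ioctx, 'image2',
                    features=rbd.RBD_FEATURE_LAYERING)

    ioctx.close()
    cluster.shutdown()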

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-21 Thread Wido den Hollander
On 12/21/2015 04:50 PM, Josh Durgin wrote: > On 12/21/2015 07:09 AM, Jason Dillaman wrote: >> You will have to ensure that your writes are properly aligned with the >> object size (or object set if fancy striping is used on the RBD >> volume). In that case, the discard is translated to remove

Re: FreeBSD Building and Testing

2015-12-21 Thread Willem Jan Withagen
On 20-12-2015 17:10, Willem Jan Withagen wrote: Hi, Most of Ceph is getting there, albeit in a rather crude and rough state. So below is a status update on what is not working for me yet. Further: A) unittest_erasure_code_plugin fails because a different error code is returned

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-21 Thread Josh Durgin
On 12/21/2015 11:00 AM, Wido den Hollander wrote: My discard code now works, but I wanted to verify. If I understand Jason correctly it would be a matter of figuring out the 'order' of an image and calling rbd_discard in a loop until you reach the end of the image. You'd need to get the order via
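A minimal sketch of that loop with the rbd Python bindings; the pool name, image name and conffile path are placeholders:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    with rbd.Image(ioctx, 'image1') as img:
        size = img.size()
        order = img.stat()['order']        # object size is 2**order bytes
        chunk = 1 << order
        offset = 0
        while offset < size:
            length = min(chunk, size - offset)
            img.discard(offset, length)    # wraps rbd_discard(); object-aligned, so whole objects get removed
            offset += length

    ioctx.close()
    cluster.shutdown()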

Fwd: FileStore : no wait thread queue_sync

2015-12-21 Thread David Casier
FYI. -- Forwarded message -- From: David Casier Date: 2015-12-21 23:19 GMT+01:00 Subject: FileStore : no wait thread queue_sync To: Ceph Development , Sage Weil Cc: Benoît LORIOT ,

Re: RBD performance with many children and snapshots

2015-12-21 Thread Josh Durgin
On 12/21/2015 11:06 AM, Wido den Hollander wrote: Hi, While implementing the buildvolfrom method in libvirt for RBD I'm stuck at some point. $ virsh vol-clone --pool myrbdpool image1 image2 This would clone image1 to a new RBD image called 'image2'. The code I've written now does: 1. Create

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-21 Thread Alexandre DERUMIER
>>I just want to know if this is sufficient to wipe an RBD image? AFAIK, Ceph writes zeroes to the RADOS objects when discard is used. There is an option to skip writing zeroes if needed: OPTION(rbd_skip_partial_discard, OPT_BOOL, false) // when trying to discard a range inside an object, set to
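The option quoted above can also be toggled per client; a minimal sketch with the librados Python bindings (the conffile path and the choice to override it at runtime are illustrative):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    # rbd_skip_partial_discard defaults to false, as the OPTION() line above shows;
    # a client can override it before connecting
    cluster.conf_set('rbd_skip_partial_discard', 'true')
    cluster.connect()
    print(cluster.conf_get('rbd_skip_partial_discard'))
    cluster.shutdown()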

Re: Fwd: Client still connect failed leader after that mon down

2015-12-21 Thread Sage Weil
On Mon, 21 Dec 2015, Zhi Zhang wrote: > Regards, > Zhi Zhang (David) > Contact: zhang.david2...@gmail.com > zhangz.da...@outlook.com > > > > -- Forwarded message -- > From: Jaze Lee > Date: Mon, Dec 21, 2015 at 4:08 PM > Subject: Re: Client

Re: Issue with Ceph File System and LIO

2015-12-21 Thread Gregory Farnum
On Sun, Dec 20, 2015 at 6:38 PM, Eric Eastman wrote: > On Fri, Dec 18, 2015 at 12:18 AM, Yan, Zheng wrote: >> On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman >> wrote: Hi Yan Zheng, Eric Eastman Similar

RFC: tool for applying 'ceph daemon' command to all OSDs

2015-12-21 Thread Dan Mick
I needed something to fetch current config values from all OSDs (sorta the opposite of 'injectargs --key value'), so I hacked it, and then spiffed it up a bit. Does this seem like something that would be useful in this form in the upstream Ceph, or does anyone have any thoughts on its design or
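As a rough idea of the shape of such a tool, a hypothetical sketch (not the actual code) that runs 'ceph daemon osd.N config get <key>' for every OSD; it assumes each OSD's admin socket is reachable from where it runs, so in practice it would have to fan out over ssh to the OSD hosts:

    #!/usr/bin/env python
    import json
    import subprocess
    import sys

    def osd_ids():
        # list all OSD ids known to the cluster
        out = subprocess.check_output(['ceph', 'osd', 'ls', '--format=json'])
        return json.loads(out.decode('utf-8'))

    def config_get(osd_id, key):
        # query one value over the OSD's local admin socket
        out = subprocess.check_output(
            ['ceph', 'daemon', 'osd.%d' % osd_id, 'config', 'get', key])
        return json.loads(out.decode('utf-8'))[key]

    key = sys.argv[1]
    for osd in osd_ids():
        print('osd.%d %s = %s' % (osd, key, config_get(osd, key)))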

Time to move the make check bot to jenkins.ceph.com

2015-12-21 Thread Loic Dachary
Hi, The make check bot is broken in a way that I can't figure out right now. Maybe now is the time to move it to jenkins.ceph.com? It should not be more difficult than launching the run-make-check.sh script. It does not need network or root access. Cheers -- Loïc Dachary, Artisan Logiciel

Re: Improving Data-At-Rest encryption in Ceph

2015-12-21 Thread Adam Kupczyk
On Wed, Dec 16, 2015 at 11:33 PM, Sage Weil wrote: > On Wed, 16 Dec 2015, Adam Kupczyk wrote: >> On Tue, Dec 15, 2015 at 3:23 PM, Lars Marowsky-Bree wrote: >> > On 2015-12-14T14:17:08, Radoslaw Zarzynski wrote: >> > >> > Hi all, >> > >>

Fwd: Client still connect failed leader after that mon down

2015-12-21 Thread Zhi Zhang
Regards, Zhi Zhang (David) Contact: zhang.david2...@gmail.com zhangz.da...@outlook.com -- Forwarded message -- From: Jaze Lee Date: Mon, Dec 21, 2015 at 4:08 PM Subject: Re: Client still connect failed leader after that mon down To: Zhi Zhang

Re: RFC: tool for applying 'ceph daemon' command to all OSDs

2015-12-21 Thread Gregory Farnum
On Mon, Dec 21, 2015 at 9:59 PM, Dan Mick wrote: > I needed something to fetch current config values from all OSDs (sorta > the opposite of 'injectargs --key value'), so I hacked it, and then > spiffed it up a bit. Does this seem like something that would be useful > in this

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-21 Thread Josh Durgin
On 12/21/2015 07:09 AM, Jason Dillaman wrote: You will have to ensure that your writes are properly aligned with the object size (or object set if fancy striping is used on the RBD volume). In that case, the discard is translated to remove operations on each individual backing object. The

Re: Is rbd_discard enough to wipe an RBD image?

2015-12-21 Thread Jason Dillaman
You will have to ensure that your writes are properly aligned with the object size (or object set if fancy striping is used on the RBD volume). In that case, the discard is translated to remove operations on each individual backing object. The only time zeros are written to disk is if you

ceph branch status

2015-12-21 Thread ceph branch robot
-- All Branches -- Abhishek Varshney 2015-12-09 11:22:26 +0530 infernalis Abhishek Varshney 2015-11-23 11:45:29 +0530 infernalis-backports Adam C. Emerson 2015-12-17

cluster_network goes slow during erasure code pool's stress testing

2015-12-21 Thread huang jun
Hi all, We are hitting a problem related to an erasure-coded pool with k:m=3:1 and stripe_unit=64k*3. We have a cluster with 96 OSDs on 4 hosts (srv1, srv2, srv3, srv4); each host has 24 OSDs, 12 core processors (Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz) and 48GB memory. cluster