Re: Crc32 Challenge

2015-11-18 Thread Dan van der Ster
Hi, I checked the partial crc after each iteration in google's python implementation and found that the crc of the last iteration matches ceph's [1]:
  >>> from crc32c import crc
  >>> crc('foo bar baz')
  crc 1197962378
  crc 3599162226
  crc 2946501991
  crc 2501826906
  crc 3132034983
  crc 3851841059
  crc
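
For reference, a pure-Python sketch of the same incremental CRC-32C (Castagnoli) computation -- not Ceph's or Google's code, just the standard reflected-table algorithm -- printing the running CRC after each input byte, as in the session above:

    # Reflected CRC-32C, polynomial 0x82F63B78 (a minimal sketch).
    def _make_table():
        table = []
        for i in range(256):
            c = i
            for _ in range(8):
                c = (c >> 1) ^ 0x82F63B78 if c & 1 else c >> 1
            table.append(c)
        return table

    _TABLE = _make_table()

    def crc32c_partials(data):
        # Yield the finalized CRC after each byte of input.
        crc = 0xFFFFFFFF
        for b in data:
            crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
            yield crc ^ 0xFFFFFFFF

    for partial in crc32c_partials(b'foo bar baz'):
        print('crc', partial)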

Re: scrub randomization and load threshold

2015-11-16 Thread Dan van der Ster
On Mon, Nov 16, 2015 at 4:20 PM, Sage Weil <s...@newdream.net> wrote: > On Mon, 16 Nov 2015, Dan van der Ster wrote: >> Instead of keeping a 24hr loadavg, how about we allow scrubs whenever >> the loadavg is decreasing (or below the threshold)? As long as the >> 1min loa
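
A minimal sketch of the gate being proposed, under one reading of the truncated text above (0.5 stands in for the osd_scrub_load_threshold default, and "decreasing" is approximated by comparing the 1-minute loadavg against the 15-minute one):

    import os

    def scrub_load_ok(threshold=0.5):
        # Allow a scrub if load is below the threshold, or trending down.
        load1, load5, load15 = os.getloadavg()
        return load1 < threshold or load1 < load15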

Re: scrub randomization and load threshold

2015-11-16 Thread Dan van der Ster
On Mon, Nov 16, 2015 at 4:32 PM, Dan van der Ster <d...@vanderster.com> wrote: > On Mon, Nov 16, 2015 at 4:20 PM, Sage Weil <s...@newdream.net> wrote: >> On Mon, 16 Nov 2015, Dan van der Ster wrote: >>> Instead of keeping a 24hr loadavg, how about we allow s

Re: scrub randomization and load threshold

2015-11-16 Thread Dan van der Ster
On Thu, Nov 12, 2015 at 4:34 PM, Dan van der Ster <d...@vanderster.com> wrote: > On Thu, Nov 12, 2015 at 4:10 PM, Sage Weil <s...@newdream.net> wrote: >> On Thu, 12 Nov 2015, Dan van der Ster wrote: >>> On Thu, Nov 12, 2015 at 2:29 PM, Sage Weil <s...@newdream.net&

Re: scrub randomization and load threshold

2015-11-16 Thread Dan van der Ster
On Mon, Nov 16, 2015 at 6:13 PM, Sage Weil <s...@newdream.net> wrote: > On Mon, 16 Nov 2015, Dan van der Ster wrote: >> On Mon, Nov 16, 2015 at 4:58 PM, Dan van der Ster <d...@vanderster.com> >> wrote: >> > On Mon, Nov 16, 2015 at 4:32 PM, Dan van der St

Re: scrub randomization and load threshold

2015-11-16 Thread Dan van der Ster
On Mon, Nov 16, 2015 at 4:58 PM, Dan van der Ster <d...@vanderster.com> wrote: > On Mon, Nov 16, 2015 at 4:32 PM, Dan van der Ster <d...@vanderster.com> wrote: >> On Mon, Nov 16, 2015 at 4:20 PM, Sage Weil <s...@newdream.net> wrote: >>> On Mon, 16 Nov 2015,

scrub randomization and load threshold

2015-11-12 Thread Dan van der Ster
Hi, Firstly, we just had a look at the new osd_scrub_interval_randomize_ratio option and found that it doesn't really solve the deep scrubbing problem. Given the default options,
  osd_scrub_min_interval = 60*60*24
  osd_scrub_max_interval = 7*60*60*24
  osd_scrub_interval_randomize_ratio = 0.5
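
For concreteness, a sketch of how the randomization reads (not the actual OSD code): the next shallow scrub becomes due after min_interval stretched by a random factor of up to the ratio, i.e. uniformly between 1 and 1.5 days with the defaults above.

    import random

    osd_scrub_min_interval = 60*60*24
    osd_scrub_interval_randomize_ratio = 0.5

    def next_scrub_due(last_scrub_stamp):
        jitter = random.uniform(0, osd_scrub_interval_randomize_ratio)
        return last_scrub_stamp + osd_scrub_min_interval * (1 + jitter)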

Re: scrub randomization and load threshold

2015-11-12 Thread Dan van der Ster
On Thu, Nov 12, 2015 at 2:29 PM, Sage Weil <s...@newdream.net> wrote: > On Thu, 12 Nov 2015, Dan van der Ster wrote: >> Hi, >> >> Firstly, we just had a look at the new >> osd_scrub_interval_randomize_ratio option and found that it doesn't >> really

Re: [ceph-users] v9.1.0 Infernalis release candidate released

2015-10-14 Thread Dan van der Ster
Xinze Chi)
>> * osd: fix promotion vs full cache tier (Samuel Just)
>> * osd: fix replay requeue when pg is still activating (#13116 Samuel Just)
>> * osd: fix scrub stat bugs (Sage Weil, Samuel Just)
>> * osd: force promotion for ops EC can't handle (Zhiqiang Wang)

gitbuilder mirrors

2015-09-15 Thread Dan van der Ster
Hi, Downloading from gitbuilder.ceph.com is super slow from where I'm sitting (<100KB/s). Does anyone have a publicly accessible mirror? Cheers, Dan

Re: gitbuilder mirrors

2015-09-15 Thread Dan van der Ster
On Tue, Sep 15, 2015 at 5:33 PM, Gregory Farnum <gfar...@redhat.com> wrote: > On Tue, Sep 15, 2015 at 1:13 AM, Dan van der Ster <d...@vanderster.com> wrote: >> Hi, >> Downloading from gitbuilder.ceph.com is super slow from where I'm >> sitting (<100KB/s). Do

Re: [ceph-users] Is it safe to increase pg number in a production environment

2015-08-11 Thread Dan van der Ster
On Tue, Aug 4, 2015 at 9:48 PM, Stefan Priebe s.pri...@profihost.ag wrote: Hi, On 04.08.2015 at 21:16, Ketor D wrote: Hi Stefan, Could you describe more about the linger ops bug? I'm running Firefly, which as you say still has this bug. It will be fixed in the next ff release. This

Re: hdparm -W redux, bug in _check_disk_write_cache for RHEL6?

2015-07-21 Thread Dan van der Ster
On Tue, Jul 21, 2015 at 4:20 PM, Ilya Dryomov idryo...@gmail.com wrote: This one, I think:
  commit ab0a9735e06914ce4d2a94ffa41497dbc142fe7f
  Author: Christoph Hellwig h...@lst.de
  Date: Thu Oct 29 14:14:04 2009 +0100
      blkdev: flush disk cache on ->fsync
Thanks, that looks relevant! Looks

hdparm -W redux, bug in _check_disk_write_cache for RHEL6?

2015-07-21 Thread Dan van der Ster
Hi, Following the sf.net corruption report I've been checking our config w.r.t data consistency. AFAIK the two main recommendations are:
  1) don't mount FileStores with nobarrier
  2) disable write-caching (hdparm -W 0 /dev/sdX) when using block dev journals and your kernel is 2.6.33
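
A rough sketch of an automated check for recommendation 2, assuming an hdparm whose -W output contains "write-caching = 1 (on)" / "0 (off)" as on RHEL6:

    import subprocess

    def write_cache_enabled(dev):
        # e.g. dev = '/dev/sdX'
        out = subprocess.check_output(['hdparm', '-W', dev]).decode()
        return '(on)' in out

    # to disable it: subprocess.check_call(['hdparm', '-W', '0', dev])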

Re: CRC32 of messages

2015-06-29 Thread Dan van der Ster
On Mon, Jun 29, 2015 at 8:31 AM, Dałek, Piotr piotr.da...@ts.fujitsu.com wrote: -Original Message- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Erik G. Burrows Sent: Friday, June 26, 2015 6:49 PM All, Can someone explain to me the

Re: firefly v0.80.10 QE validation completed

2015-06-22 Thread Dan van der Ster
Hi Loic, What does published mean? 0.80.10 looks published to me: http://ceph.com/rpm-firefly/el6/x86_64/ Cheers, Dan On Mon, Jun 22, 2015 at 10:48 AM, Loic Dachary l...@dachary.org wrote: Hi Olivier, On 22/06/2015 03:40, Olivier Bonvalet wrote: Hi, I saw the proposed upgrade to

unit test for messaging crc32c

2015-06-18 Thread Dan van der Ster
Hi all, We've recently experienced a broken router that was corrupting packets in such a way that the TCP checksums were still valid. There has been some resulting data corruption -- thus far the confirmed corruptions were outside of Ceph communications -- but it has made us want to double check the
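
To illustrate the failure mode (an assumed corruption pattern, not the actual router bug): the 16-bit Internet checksum is just a ones'-complement sum, so reordering 16-bit words leaves it valid, while a CRC over the same bytes changes. zlib.crc32 below is plain CRC-32, used only for illustration; Ceph's messenger uses CRC-32C.

    import struct, zlib

    def inet_checksum(data):
        # RFC 1071 ones'-complement sum of 16-bit words.
        if len(data) % 2:
            data += b'\x00'
        s = sum(struct.unpack('!%dH' % (len(data) // 2), data))
        while s >> 16:
            s = (s & 0xFFFF) + (s >> 16)
        return ~s & 0xFFFF

    good = b'\x12\x34\x56\x78' * 4
    bad = good[2:4] + good[0:2] + good[4:]             # swap the first two 16-bit words

    assert inet_checksum(good) == inet_checksum(bad)   # checksum still looks valid
    assert zlib.crc32(good) != zlib.crc32(bad)         # a CRC catches the reordering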

Re: unit test for messaging crc32c

2015-06-18 Thread Dan van der Ster
On Thu, Jun 18, 2015 at 9:53 AM, Dałek, Piotr piotr.da...@ts.fujitsu.com wrote: Actually, all it takes is to disable CRC in the configuration on one node (or even one daemon). That puts zeros in the CRC fields of all messages sent, triggering CRC check failures cluster-wide (on remaining,
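
Presumably that means something like the following in ceph.conf on a single throwaway daemon (the option names here, ms crc data / ms crc header, are an assumption about which knobs are meant):

    [osd.123]                    ; a throwaway test daemon
        ms crc data = false      ; assumed option name
        ms crc header = false    ; assumed option name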

Re: [ceph-users] v0.94.2 Hammer released

2015-06-17 Thread Dan van der Ster
On Thu, Jun 11, 2015 at 7:34 PM, Sage Weil sw...@redhat.com wrote: * ceph-objectstore-tool should be in the ceph server package (#11376, Ken Dreyer) We had a little trouble yum updating from 0.94.1 to 0.94.2:
  file /usr/bin/ceph-objectstore-tool from install of ceph-1:0.94.2-0.el6.x86_64

Re: [ceph-users] Creating and deploying OSDs in parallel

2015-03-31 Thread Dan van der Ster
Hi Somnath, We have deployed many machines in parallel and it generally works. Keep in mind that if you deploy many many (1000) then this will create so many osdmap incrementals, so quickly, that the memory usage on the OSDs will increase substantially (until you reboot). Best Regards, Dan On

Re: Bounding OSD memory requirements during peering/recovery

2015-03-13 Thread Dan van der Ster
this time period? -Sam On 03/13/2015 08:36 AM, Dan van der Ster wrote: On Fri, Mar 13, 2015 at 1:52 PM, Dan van der Ster d...@vanderster.com wrote: Hi Sage, Losing a message would have been plausible given the network issue we had today. I tried:
  # ceph osd pg-temp 75.45 6689
  set 75.45

Re: Bounding OSD memory requirements during peering/recovery

2015-03-13 Thread Dan van der Ster
On Mon, Mar 9, 2015 at 4:47 PM, Gregory Farnum g...@gregs42.com wrote: On Mon, Mar 9, 2015 at 8:42 AM, Dan van der Ster d...@vanderster.com wrote: Hi Sage, On Tue, Feb 10, 2015 at 2:51 AM, Sage Weil s...@newdream.net wrote: On Mon, 9 Feb 2015, David McBride wrote: On 09/02/15 15:31, Gregory

Re: Bounding OSD memory requirements during peering/recovery

2015-03-13 Thread Dan van der Ster
an individual pg to repeer with something like ceph osd pg-temp 75.45 6689 See if that makes it go? sage On March 13, 2015 7:24:48 AM EDT, Dan van der Ster d...@vanderster.com wrote: On Mon, Mar 9, 2015 at 4:47 PM, Gregory Farnum g...@gregs42.com wrote: On Mon, Mar 9, 2015 at 8:42 AM, Dan van

Re: Bounding OSD memory requirements during peering/recovery

2015-03-13 Thread Dan van der Ster
On Fri, Mar 13, 2015 at 1:52 PM, Dan van der Ster d...@vanderster.com wrote: Hi Sage, Losing a message would have been plausible given the network issue we had today. I tried:
  # ceph osd pg-temp 75.45 6689
  set 75.45 pg_temp mapping to [6689]
then waited a bit. It's still incomplete

Re: Bounding OSD memory requirements during peering/recovery

2015-03-09 Thread Dan van der Ster
Hi Sage, On Tue, Feb 10, 2015 at 2:51 AM, Sage Weil s...@newdream.net wrote: On Mon, 9 Feb 2015, David McBride wrote: On 09/02/15 15:31, Gregory Farnum wrote: So, memory usage of an OSD is usually linear in the number of PGs it hosts. However, that memory can also grow based on at least

Re: 'Immutable bit' on pools to prevent deletion

2015-01-15 Thread Dan Van Der Ster
Hi Wido, +1 for safeguards. Yeah that is scary: it's one API call to delete a pool, and perhaps even a client with w capability on a pool can delete it?? (I didn’t try...) I can think of many ways that fat fingers can create crazy loads, deny client access, ...
  1. changing pool size
  2.
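
To underline the "one API call" point, a sketch with the python-rados bindings (the pool name is hypothetical):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cluster.delete_pool('images')   # a single call, no confirmation step
    cluster.shutdown()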

Re: [Ceph-Dokan] A Windows CephFS Client Project

2015-01-07 Thread Dan Van Der Ster
Dear Ketor, Congratulations, this is very interesting! Sorry to ask the obvious question, but does Dokan provide the needed interface to write a block device client, i.e. an RBD client? Cheers, Dan On 07 Jan 2015, at 05:09, Ketor D d.ke...@gmail.com wrote: Hi everyone, A new project

Re: Higher OSD disk util due to RBD snapshots from Dumpling to Firefly

2015-01-07 Thread Dan van der Ster
Hi Wido, I've been trying to reproduce this but haven't been able yet. What I've tried so far is use fio rbd with a 0.80.7 client connected to a 0.80.7 cluster. I created a 10GB format 2 block device, then measured the 4k randwrite iops before and after having snaps. I measured around 2000 iops
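
For anyone retrying this without fio, a rough equivalent with the python rbd/rados bindings (image and snapshot names are placeholders): create a format-2 image, write, snapshot, then rewrite the same extents so the rewrites have to copy-on-write the snapshotted objects.

    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    rbd.RBD().create(ioctx, 'cowtest', 10 * 1024**3, old_format=False)  # 10 GiB, format 2
    img = rbd.Image(ioctx, 'cowtest')
    buf = b'\xff' * 4096
    for off in range(0, 4096 * 1024, 4096):     # initial 4k writes
        img.write(buf, off)
    img.create_snap('before')
    for off in range(0, 4096 * 1024, 4096):     # rewrites now hit the COW path
        img.write(buf, off)
    img.close()
    ioctx.close()
    cluster.shutdown()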

Re: full osdmaps in mon txns

2015-01-06 Thread Dan van der Ster
On Mon, Jan 5, 2015 at 10:12 AM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi Sage, On Tue, Dec 23, 2014 at 10:10 PM, Sage Weil sw...@redhat.com wrote: This fun issue came up again in the form of 10422: http://tracker.ceph.com/issues/10422 I think we have 3 main options

Re: full osdmaps in mon txns

2015-01-05 Thread Dan van der Ster
Hi Sage, On Tue, Dec 23, 2014 at 10:10 PM, Sage Weil sw...@redhat.com wrote: This fun issue came up again in the form of 10422: http://tracker.ceph.com/issues/10422 I think we have 3 main options:
  1. Ask users to do a mon scrub prior to upgrade to ensure it is safe. If a mon is

wip-3896 merge?

2014-12-09 Thread Dan Van Der Ster
Hi Yehuda, I just wanted to ping this issue since it seems to have been forgotten (and is still present in all versions of rest-bench I tried). The tracker issue was marked as resolved, but the fix is only (AFAICT) in wip-3896 and there isn’t a pull req to bring it to master or any release

Scrub / SnapTrim IO Prioritization and Latency

2014-10-30 Thread Dan van der Ster
Hi Sam, Sorry I missed the discussion last night about putting the trim/scrub operations in a priority opq alongside client ops. I had a question about the expected latency impact of this approach. I understand that you've previously validated that your priority queue manages to fairly
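
A toy sketch of the approach under discussion (not Ceph's PrioritizedQueue; priority values are illustrative): client, scrub and snap-trim ops share one queue, and background ops only run when no higher-priority client op is waiting.

    import heapq, itertools

    CLIENT_PRIO, SCRUB_PRIO, TRIM_PRIO = 63, 5, 5   # illustrative values

    class OpQueue:
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()           # keeps FIFO order within a priority

        def enqueue(self, priority, op):
            # heapq is a min-heap, so negate the priority
            heapq.heappush(self._heap, (-priority, next(self._seq), op))

        def dequeue(self):
            return heapq.heappop(self._heap)[2]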

Re: Scrub / SnapTrim IO Prioritization and Latency

2014-10-30 Thread Dan van der Ster
saturated) a little. -Sam On Thu, Oct 30, 2014 at 3:59 AM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi Sam, Sorry I missed the discussion last night about putting the trim/scrub operations in a priority opq alongside client ops. I had a question about the expected latency impact

Re: Scrub / SnapTrim IO Prioritization and Latency

2014-10-30 Thread Dan van der Ster
at 11:25 AM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi Sam, A few comments. 1. My understanding is that your new approach would treat the scrub/trim ops similarly to (or even exactly like?) how we treat recovery ops today. Is that right? Currently even with recovery op

Re: kerberos / AD requirements, blueprint

2014-10-23 Thread Dan Van Der Ster
Hi Sage, On 22 Oct 2014, at 19:08, Sage Weil s...@newdream.net wrote: On Wed, 22 Oct 2014, Dan Van Der Ster wrote: Hi Sage, On 16 Oct 2014, at 21:57, Sage Weil s...@newdream.net wrote: I started to write up a blueprint for kerberos / LDAP / AD support: https://wiki.ceph.com

Re: kerberos / AD requirements, blueprint

2014-10-22 Thread Dan Van Der Ster
Hi Sage, On 16 Oct 2014, at 21:57, Sage Weil s...@newdream.net wrote: I started to write up a blueprint for kerberos / LDAP / AD support: https://wiki.ceph.com/Planning/Blueprints/Hammer/kerberos_authn%2C_AD_authn%2F%2Fauthz In case it isn't clear from that document, I only sort

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-10-15 Thread Dan Van Der Ster
Hi Sage, On 19 Sep 2014, at 17:37, Dan Van Der Ster daniel.vanders...@cern.ch wrote: September 19 2014 5:19 PM, Sage Weil sw...@redhat.com wrote: On Fri, 19 Sep 2014, Dan van der Ster wrote: On Fri, Sep 19, 2014 at 10:41 AM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: On 19 Sep

Re: [ceph-users] v0.67.11 dumpling released

2014-09-25 Thread Dan Van Der Ster
Hi Mike, On 25 Sep 2014, at 17:47, Mike Dawson mike.daw...@cloudapt.com wrote: On 9/25/2014 11:09 AM, Sage Weil wrote: v0.67.11 Dumpling === This stable update for Dumpling fixes several important bugs that affect a small set of users. We recommend that all Dumpling

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-21 Thread Dan van der Ster
Hi Florian, September 21 2014 3:33 PM, Florian Haas flor...@hastexo.com wrote: That said, I'm not sure that wip-9487-dumpling is the final fix to the issue. On the system where I am seeing the issue, even with the fix deployed, osd's still not only go crazy snap trimming (which by itself

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-19 Thread Dan Van Der Ster
On 19 Sep 2014, at 08:12, Florian Haas flor...@hastexo.com wrote: On Fri, Sep 19, 2014 at 12:27 AM, Sage Weil sw...@redhat.com wrote: On Fri, 19 Sep 2014, Florian Haas wrote: Hi Sage, was the off-list reply intentional? Whoops! Nope :) On Thu, Sep 18, 2014 at 11:47 PM, Sage Weil

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-19 Thread Dan van der Ster
On Fri, Sep 19, 2014 at 10:41 AM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: On 19 Sep 2014, at 08:12, Florian Haas flor...@hastexo.com wrote: On Fri, Sep 19, 2014 at 12:27 AM, Sage Weil sw...@redhat.com wrote: On Fri, 19 Sep 2014, Florian Haas wrote: Hi Sage, was the off-list reply

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-19 Thread Dan van der Ster
September 19 2014 5:19 PM, Sage Weil sw...@redhat.com wrote: On Fri, 19 Sep 2014, Dan van der Ster wrote: On Fri, Sep 19, 2014 at 10:41 AM, Dan Van Der Ster daniel.vanders...@cern.ch wrote: On 19 Sep 2014, at 08:12, Florian Haas flor...@hastexo.com wrote: On Fri, Sep 19, 2014 at 12:27 AM

snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-18 Thread Dan Van Der Ster
(moving this discussion to -devel) Begin forwarded message:
  From: Florian Haas flor...@hastexo.com
  Date: 17 Sep 2014 18:02:09 CEST
  Subject: Re: [ceph-users] RGW hung, 2 OSDs using 100% CPU
  To: Dan Van Der Ster daniel.vanders...@cern.ch
  Cc: Craig Lewis cle...@centraldesktop.com, ceph-us

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-18 Thread Dan van der Ster
Hi, September 18 2014 9:03 PM, Florian Haas flor...@hastexo.com wrote: On Thu, Sep 18, 2014 at 8:56 PM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi Florian, On Sep 18, 2014 7:03 PM, Florian Haas flor...@hastexo.com wrote: Hi Dan, saw the pull request, and can confirm your

Re: snap_trimming + backfilling is inefficient with many purged_snaps

2014-09-18 Thread Dan van der Ster
-- Dan van der Ster || Data Storage Services || CERN IT Department -- September 18 2014 9:12 PM, Dan van der Ster daniel.vanders...@cern.ch wrote: Hi, September 18 2014 9:03 PM, Florian Haas flor...@hastexo.com wrote: On Thu, Sep 18, 2014 at 8:56 PM, Dan van der Ster daniel.vanders

Re: Stopping ceph daemons during the upgrade

2014-08-18 Thread Dan Van Der Ster
Hi, On 18 Aug 2014, at 17:19, Loic Dachary l...@dachary.org wrote: there probably is a way to do something similar with RPM packages. That behaviour (well, close enough) was just _removed_ from firefly, see: https://github.com/ceph/ceph/commit/361c1f8554ce1fedfd0020cd306c41b0ba25f53e I don’t

Re: [RFC] add rocksdb support

2014-06-23 Thread Dan van der Ster
Hi, In your test setup do the KV stores use the SSDs in any way? If not, is this really a fair comparison? If the KV stores can give SSD-like ceph performance (especially latency) without the SSDs, that would be quite good. Cheers, Dan -- Dan van der Ster || Data Storage Services || CERN

Re: RADOS translator for GlusterFS

2014-05-05 Thread Dan van der Ster
Hi, On 05/05/14 17:21, Jeff Darcy wrote: Now that we're all one big happy family, I've been mulling over different ways that the two technology stacks could work together. One idea would be to use some of the GlusterFS upper layers for their interface and integration possibilities, but then

default filestore max sync interval

2014-04-29 Thread Dan Van Der Ster
Hi all, Why is the default max sync interval only 5 seconds? Today we realized what a huge difference increasing this to 30 or 60s can make for small write latency. Basically, with a 5s interval our 4k write latency is above 30-35ms, and once we increase it to 30s we can get under 10ms
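
For reference, the corresponding ceph.conf change would look roughly like this (a sketch; the right value depends on journal size and write rate):

    [osd]
        ; default is 5 seconds
        filestore max sync interval = 30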

Re: default filestore max sync interval

2014-04-29 Thread Dan van der Ster
April 29 2014 10:36 PM, Stefan Priebe wrote: Hi Dan, On 29.04.2014 at 22:10, Dan Van Der Ster wrote: Hi all, Why is the default max sync interval only 5 seconds? Today we realized what a huge difference increasing this to 30 or 60s can make for small write latency

Re: contraining crush placement possibilities

2014-03-07 Thread Dan van der Ster
the number of peers for a given OSD in a large cluster is bounded (pg_num / num_osds), but I think we may still be able to improve things. I'm surprised they didn't cite Ceph -- aren't copysets ~= placement groups? Cheers, Dan -- Dan van der Ster || Data Storage Services || CERN IT Department

Re: [Ceph-maintainers] disabling updatedb

2014-02-18 Thread Dan van der Ster
cephfs namespace. Cheers, Dan -- Dan van der Ster || Data Storage Services || CERN IT Department -- Gaudenz -- Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. ~ Samuel Beckett ~

Re: CDS blueprint: strong auth for cephfs

2013-11-15 Thread Dan van der Ster
Hi Alex, The problem is that the cephx keyring would still be needed on untrusted hosts. With that, anyone can delete/corrupt the underlying objects (though they may be encrypted) using rados. Cheers, Dan On Thu, Nov 14, 2013 at 9:09 PM, Alex Elsayed eternal...@gmail.com wrote: Dan van der Ster

Re: CDS blueprint: strong auth for cephfs

2013-11-14 Thread Dan van der Ster
Hi Greg, On Wed, Nov 13, 2013 at 6:45 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Nov 13, 2013 at 8:05 AM, Dan van der Ster d...@vanderster.com wrote: Hi all, This mail is just to let you know that we've prepared a draft blueprint related to adding strong(er) authn/authz to cephfs

Re: CDS blueprint: strong auth for cephfs

2013-11-14 Thread Dan van der Ster
On Thu, Nov 14, 2013 at 4:55 PM, Gregory Farnum g...@inktank.com wrote: On Thu, Nov 14, 2013 at 2:00 AM, Dan van der Ster d...@vanderster.com wrote: Hi Greg, On Wed, Nov 13, 2013 at 6:45 PM, Gregory Farnum g...@inktank.com wrote: On Wed, Nov 13, 2013 at 8:05 AM, Dan van der Ster d

CDS blueprint: strong auth for cephfs

2013-11-13 Thread Dan van der Ster
Hi all, This mail is just to let you know that we've prepared a draft blueprint related to adding strong(er) authn/authz to cephfs: http://wiki.ceph.com/01Planning/02Blueprints/Firefly/Strong_AuthN_and_AuthZ_for_CephFS The main goal of the idea is that we'd like to be able to use CephFS from

Re: Ceph puppet module

2013-10-16 Thread Dan van der Ster
Hi, We would support this. The enovance module is a good starting point if you will accept something that doesn't yet use all the udev magic. Cheers, Dan CERN IT On Wed, Oct 16, 2013 at 10:00 AM, Loic Dachary l...@dachary.org wrote: Hi Ceph, How about creating

Re: Ceph puppet module

2013-10-16 Thread Dan van der Ster
Hi, On Wed, Oct 16, 2013 at 1:22 PM, Sébastien Han sebastien@enovance.com wrote: Hi Dan, During the cephdays you mentioned that you were about to redistribute all the changes you’ve made on puppet-ceph to the enovance repo. It would be great to merge both before starting anything.

status of fancy striping for krbd

2013-09-30 Thread Dan van der Ster
Hi all, I just noticed that krbd supports mapping format v2 images in 3.11, but it doesn't yet support STRIPINGV2. Is this coming anytime soon? Cheers, Dan CERN IT

Re: status of fancy striping for krbd

2013-09-30 Thread Dan van der Ster
On Mon, Sep 30, 2013 at 5:33 PM, Sage Weil s...@inktank.com wrote: On Mon, 30 Sep 2013, Dan van der Ster wrote: Hi all, I just noticed that krbd supports mapping format v2 images in 3.11, but it doesn't yet support STRIPINGV2. Is this coming anytime soon? We don't have immediate plans to add

Re: Ceph users meetup

2013-09-25 Thread Dan van der Ster
I'd probably be interested, but naive question.. how would this be different from the Ceph Days? On Wed, Sep 25, 2013 at 10:53 AM, Loic Dachary l...@dachary.org wrote: Hi Eric Patrick, Yesterday morning Eric suggested that organizing a ceph user meetup would be great and proposed his help

Re: Object Write Latency

2013-09-24 Thread Dan van der Ster
On Mon, Sep 23, 2013 at 8:18 PM, Sage Weil s...@inktank.com wrote: You might try measuring that directly and comparing it to the 33ms append+fsync that you previously saw. dd with fsync is quite slow...
  [root@p05151113777233 fio]# time dd if=/dev/zero of=/var/lib/ceph/osd/osd.1045/testtest

Re: Object Write Latency

2013-09-24 Thread Dan van der Ster
  512 bytes (512 B) copied, 0.00441663 s, 116 kB/s
  real 0m0.006s
  user 0m0.001s
  sys 0m0.001s
So it looks like I really should try those O_DIRECT journals on a separate partition. Cheers, Dan On Tue, Sep 24, 2013 at 11:13 AM, Dan van der Ster d...@vanderster.com wrote: On Mon, Sep 23, 2013 at 8
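
A small Python equivalent of the dd test, for measuring append+fsync latency on a filestore mount (the path is the one from the dd test above; the iteration count is arbitrary):

    import os, time

    def append_fsync_latency(path, size=512, iters=100):
        # Append small writes and fsync each one, returning mean latency in seconds.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
        buf = b'\x00' * size
        t0 = time.time()
        for _ in range(iters):
            os.write(fd, buf)
            os.fsync(fd)
        os.close(fd)
        return (time.time() - t0) / iters

    print(append_fsync_latency('/var/lib/ceph/osd/osd.1045/testtest'))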

Re: Object Write Latency

2013-09-24 Thread Dan van der Ster
Thanks Sebastien, we will try that. BTW, as you know we're using the puppet modules of enovance, which set up a file-based journal. Did you guys already puppetize a blockdev journal? I'll be doing that now if not... Cheers, Dan CERN IT On Tue, Sep 24, 2013 at 2:30 PM, Sébastien Han

Re: Object Write Latency

2013-09-23 Thread Dan van der Ster
On Fri, Sep 20, 2013 at 3:11 PM, Mark Nelson mark.nel...@inktank.com wrote: On 09/20/2013 07:27 AM, Andreas Joachim Peters wrote: Hi, Hi Andreas! we made some benchmarks about object read/write latencies on the CERN ceph installation. The cluster has 44 nodes and ~1k disks, all on

Re: Object Write Latency

2013-09-23 Thread Dan van der Ster
On Fri, Sep 20, 2013 at 4:47 PM, Gregory Farnum g...@inktank.com wrote: On Fri, Sep 20, 2013 at 5:27 AM, Andreas Joachim Peters andreas.joachim.pet...@cern.ch wrote: Hi, we made some benchmarks about object read/write latencies on the CERN ceph installation. The cluster has 44 nodes and

Re: Object Write Latency

2013-09-23 Thread Dan van der Ster
On Fri, Sep 20, 2013 at 5:34 PM, Sage Weil s...@inktank.com wrote: On Fri, 20 Sep 2013, Andreas Joachim Peters wrote: Hi, we made some benchmarks about object read/write latencies on the CERN ceph installation. The cluster has 44 nodes and ~1k disks, all on 10GE and the pool

Re: [PATCH] rbd: don't zero-fill non-image object requests

2013-03-27 Thread Dan van der Ster
On Wed, Mar 27, 2013 at 5:10 PM, Sage Weil s...@inktank.com wrote: Does this only affect the current -rc or is this problem older than that? The BUG I reported that triggered this was on 3.8.4. -- Dan