Re: [ceph-users] PG num calculator live on Ceph.com

2015-01-08 Thread Michael J. Kidd
across those two pools. I welcome anyone with more CephFS experience to weigh in on this! :) Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Jan 7, 2015 at 3:59 PM, Lindsay Mathieson lindsay.mathie...@gmail.com wrote: With cephfs we have

Re: [ceph-users] PG num calculator live on Ceph.com

2015-01-07 Thread Michael J. Kidd
. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Jan 7, 2015 at 4:34 PM, Sanders, Bill bill.sand...@teradata.com wrote: This is interesting. Kudos to you guys for getting the calculator up, I think this'll help some folks. I have 1 pool, 40 OSDs
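For a layout like the one mentioned above (1 pool, 40 OSDs), the calculator's sizing logic reduces to a short calculation. A minimal sketch, assuming a replica count of 3 and the commonly used target of ~100 PGs per OSD (neither value is stated in the snippet):

    # rough PG sizing, mirroring the pgcalc approach (assumed inputs)
    OSDS=40            # OSDs hosting the pool (from the thread)
    REPLICAS=3         # pool 'size' -- assumed
    TARGET_PER_OSD=100 # target PGs per OSD across all pools -- assumed
    echo $(( OSDS * TARGET_PER_OSD / REPLICAS ))  # ~1333; round to a nearby power of two

On releases of that era pg_num could be raised later but not lowered, so starting conservatively and growing pg_num as needed was the safer direction.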

[ceph-users] PG num calculator live on Ceph.com

2015-01-07 Thread Michael J. Kidd
... As an aside, we're also working to update the documentation to reflect the best practices. See Ceph.com tracker for this at: http://tracker.ceph.com/issues/9867 Thanks! Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat

Re: [ceph-users] PG num calculator live on Ceph.com

2015-01-07 Thread Michael J. Kidd
, 32 PGs total still gives very close to 1 PG per OSD. Being that it's such a low utilization pool, this is still sufficient. Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Jan 7, 2015 at 3:17 PM, Christopher O'Connell c...@sendfaster.com wrote

Re: [ceph-users] PG num calculator live on Ceph.com

2015-01-07 Thread Michael J. Kidd
Where is the source ? On the page.. :) It does link out to jquery and jquery-ui, but all the custom bits are embedded in the HTML. Glad it's helpful :) Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Jan 7, 2015 at 3:46 PM, Loic Dachary l

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Michael J. Kidd
to take down OSDs in multiple hosts. I'm also unsure about the cache tiering and how it could relate to the load being seen. Hope this helps... Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Thu, Oct 30, 2014 at 4:00 AM, Lukáš Kubín lukas.ku...@gmail.com wrote

Re: [ceph-users] OSD process exhausting server memory

2014-10-30 Thread Michael J. Kidd
) and may chip in... Wish I could be more help.. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Thu, Oct 30, 2014 at 11:00 AM, Lukáš Kubín lukas.ku...@gmail.com wrote: Thanks Michael, still no luck. Letting the problematic OSD.10 down has no effect. Within

Re: [ceph-users] OSD process exhausting server memory

2014-10-29 Thread Michael J. Kidd
nodeep-scrub ## For help identifying why memory usage was so high, please provide:
  * ceph osd dump | grep pool
  * ceph osd crush rule dump
Let us know if this helps... I know it looks extreme, but it's worked for me in the past.. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services
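For reference, flags such as nodeep-scrub are toggled cluster-wide with 'ceph osd set' / 'ceph osd unset'. A minimal sketch; the exact flag list from the original procedure is not shown in the snippet, so this selection is an assumption:

    # quiesce background work while investigating the memory usage
    ceph osd set noout
    ceph osd set norecover
    ceph osd set nobackfill
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # ...and re-enable once the cluster has settled
    ceph osd unset nodeep-scrub
    ceph osd unset noscrub
    ceph osd unset nobackfill
    ceph osd unset norecover
    ceph osd unset noout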

Re: [ceph-users] OSD process exhausting server memory

2014-10-29 Thread Michael J. Kidd
Ah, sorry... since they were set out manually, they'll need to be set in manually:
  for i in $(ceph osd tree | grep osd | awk '{print $3}'); do ceph osd in $i; done
Michael J. Kidd Sr. Storage Consultant Inktank Professional Services - by Red Hat On Wed, Oct 29, 2014 at 12:33 PM, Lukáš Kubín

Re: [ceph-users] RBD for ephemeral

2014-05-19 Thread Michael J. Kidd
Since the status is 'Abandoned', it would appear that the fix has not been merged into any release of OpenStack. Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Sun, May 18, 2014 at 5:13 PM, Yuming Ma (yumima) yum...@cisco.comwrote: Wondering what

Re: [ceph-users] RBD for ephemeral

2014-05-19 Thread Michael J. Kidd
After sending my earlier email, I found another commit that was merged in March: https://review.openstack.org/#/c/59149/ It seems to follow the newer image-handling technique that was being sought, which is what kept the first patch from being merged in... Michael J. Kidd Sr. Storage Consultant Inktank

Re: [ceph-users] Ceph not replicating

2014-04-19 Thread Michael J. Kidd
You may also want to check your 'min_size'... if it's 2, then you'll be incomplete even with 1 complete copy.
  ceph osd dump | grep pool
You can reduce the min size with the following syntax:
  ceph osd pool set poolname min_size 1
Thanks, Michael J. Kidd Sent from my mobile device. Please

Re: [ceph-users] Ceph not replicating

2014-04-19 Thread Michael J. Kidd
seen show BTRFS slows drastically after only a few hours with a high file count in the filesystem. Better to re-deploy now than after you have data being served in production. Thanks, Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Sat, Apr 19, 2014 at 5:51 PM, Gonzalo Aguilar

Re: [ceph-users] No more Journals ?

2014-03-14 Thread Michael J. Kidd
Journals will default to being on-disk with the OSD if there is nothing specified on the ceph-deploy line. If you have a separate journal device, then you should specify it per the original example syntax. Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Mar 14
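As a sketch of the ceph-deploy syntax being described (host and device names are placeholders, not from the thread):

    # journal co-located on the OSD disk (the default when no journal is given)
    ceph-deploy osd create node1:sdb
    # journal on a separate device, e.g. an SSD partition
    ceph-deploy osd create node1:sdb:/dev/sdc1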

Re: [ceph-users] Very high latency values

2014-03-07 Thread Michael J. Kidd
, aside from occasional mailing list posts about specific counters.. Hope this helps! Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Mar 7, 2014 at 11:39 AM, Dan Ryder (daryder) dary...@cisco.comwrote: Hello, I'm working with two different Ceph clusters
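The per-daemon counters being discussed can be read from each OSD's admin socket; a minimal sketch, with the socket path and OSD id as assumptions:

    # dump all perf counters for osd.0 and pick out the latency-related ones
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | python -m json.tool | grep -i latency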

Re: [ceph-users] pausing recovery when adding new machine

2014-03-07 Thread Michael J. Kidd
up osdid' to bring them up manually. Hope this helps! Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Mar 7, 2014 at 3:06 PM, Sidharta Mukerjee smukerje...@gmail.comwrote: When I use ceph-deploy to add a bunch of new OSDs (from a new machine), the ceph cluster
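A minimal sketch of one common way to pause data movement while adding OSDs; the snippet doesn't show which flags the original reply used, so these are assumptions:

    # keep newly added OSDs from triggering rebalancing as they come up
    ceph osd set noin
    ceph osd set nobackfill
    ceph osd set norecover
    # ...add the OSDs (e.g. via ceph-deploy), then bring them in and resume
    ceph osd in <osd-id>
    ceph osd unset norecover
    ceph osd unset nobackfill
    ceph osd unset noin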

Re: [ceph-users] CephFS: files never stored on OSDs

2014-02-28 Thread Michael J. Kidd
Seems that you may also need to tell CephFS to use the new pool instead of the default.. After CephFS is mounted, run: # cephfs /mnt/ceph set_layout -p 4 Michael J. Kidd Sr. Storage Consultant Inktank Professional Services On Fri, Feb 28, 2014 at 9:12 AM, Sage Weil s...@inktank.com wrote
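For context, a pool normally has to be registered as a CephFS data pool before the layout can point at it. A sketch using the commands of that era; pool id 4 is taken from the snippet, and the add_data_pool step is an assumption about the earlier part of the thread:

    # allow CephFS to store data in pool 4, then direct new files there
    ceph mds add_data_pool 4
    cephfs /mnt/ceph set_layout -p 4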

Re: [ceph-users] RedHat ceph boot question

2014-01-25 Thread Michael J. Kidd
While clearly not optimal for long-term flexibility, I've found that adding my OSDs to fstab lets their data partitions mount during boot, and the OSD daemons then start automatically since they're already mounted. Hope this helps until a permanent fix is available. Michael J. Kidd Sr. Storage Consultant
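A sketch of what such an fstab entry might look like (device, OSD id, filesystem, and options are placeholders, not from the original mail):

    # /etc/fstab
    /dev/sdb1   /var/lib/ceph/osd/ceph-0   xfs   noatime,inode64   0 0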

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Michael J. Kidd
It's also good to note that the M500 has built-in RAIN protection (basically, diagonal parity at the NAND level). Should be very good for journal consistency. Sent from my mobile device. Please excuse brevity and typographical errors. On Jan 15, 2014 9:07 AM, Stefan Priebe

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Michael J. Kidd
Actually, they're very inexpensive as far as SSDs go. The 960GB M500 can be had on Amazon for $499 US with Prime (as of yesterday, anyway). Sent from my mobile device. Please excuse brevity and typographical errors. On Jan 15, 2014 9:50 AM, Sebastien Han sebastien@enovance.com wrote: