across
those two pools.
I welcome anyone with more CephFS experience to weigh in on this! :)
Thanks,
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Jan 7, 2015 at 3:59 PM, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
With cephfs we have ...
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Jan 7, 2015 at 4:34 PM, Sanders, Bill bill.sand...@teradata.com
wrote:
This is interesting. Kudos to you guys for getting the calculator up, I
think this'll help some folks.
I have 1 pool, 40 OSDs
...
As an aside, we're also working to update the documentation to reflect the
best practices. See Ceph.com tracker for this at:
http://tracker.ceph.com/issues/9867
Thanks!
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
, 32 PGs total still
gives very close to 1 PG per OSD. Being that it's such a low utilization
pool, this is still sufficient.
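As a rough illustration of the guideline the calculator encodes (the numbers here are just illustrative, not output from the tool): a 40 OSD cluster with 3x replication, targeting ~100 PGs per OSD, works out to
(40 * 100) / 3 ≈ 1333 -> round up to the next power of two = 2048 PGs
and when there are multiple pools, that total is split across them in proportion to the data each is expected to hold, which is why a tiny, low-utilization pool can get away with something like 32 PGs.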
Thanks,
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Jan 7, 2015 at 3:17 PM, Christopher O'Connell c...@sendfaster.com
wrote:
Where is the source?
On the page.. :) It does link out to jquery and jquery-ui, but all the
custom bits are embedded in the HTML.
Glad it's helpful :)
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Jan 7, 2015 at 3:46 PM, Loic Dachary l
to take down OSDs in multiple hosts.
I'm also unsure about the cache tiering and how it could relate to the load
being seen.
Hope this helps...
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Thu, Oct 30, 2014 at 4:00 AM, Lukáš Kubín lukas.ku...@gmail.com wrote:
) and may chip
in...
Wish I could be more help..
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Thu, Oct 30, 2014 at 11:00 AM, Lukáš Kubín lukas.ku...@gmail.com wrote:
Thanks Michael, still no luck.
Letting the problematic OSD.10 down has no effect. Within
nodeep-scrub
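(For reference, and assuming the standard ceph CLI is what's being discussed here: these cluster-wide flags are toggled with 'ceph osd set noscrub' / 'ceph osd set nodeep-scrub', and cleared again afterwards with the matching 'ceph osd unset ...' commands.)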
## For help identifying why memory usage was so high, please provide:
* ceph osd dump | grep pool
* ceph osd crush rule dump
Let us know if this helps... I know it looks extreme, but it's worked for
me in the past..
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
Ah, sorry... since they were set out manually, they'll need to be set in
manually..
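(A note on the loop below: in that era's 'ceph osd tree' output, column 3 is the OSD name, e.g. osd.0, so the loop simply marks every listed OSD 'in' again.)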
for i in $(ceph osd tree | grep osd | awk '{print $3}'); do ceph osd in $i;
done
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Oct 29, 2014 at 12:33 PM, Lukáš Kubín
Since the status is 'Abandoned', it would appear that the fix has not been
merged into any release of OpenStack.
Thanks,
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Sun, May 18, 2014 at 5:13 PM, Yuming Ma (yumima) yum...@cisco.com wrote:
Wondering what
After sending my earlier email, I found another commit that was merged in
March:
https://review.openstack.org/#/c/59149/
It seems to follow the newer image-handling technique that reviewers were
looking for, the lack of which is what prevented the first patch from being merged in...
Michael J. Kidd
Sr. Storage Consultant
Inktank
You may also want to check your 'min_size'... if it's 2, then PGs will stay
incomplete even with 1 complete copy remaining.
ceph osd dump | grep pool
You can reduce the min size with the following syntax:
ceph osd pool set poolname min_size 1
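For example, on a pool named 'rbd' (the pool name here is just illustrative):
ceph osd pool set rbd min_size 1
and the same command raises it back to its previous value once the missing copies have recovered.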
Thanks,
Michael J. Kidd
Sent from my mobile device. Please excuse brevity and typographical errors.
seen show BTRFS slows drastically after only a
few hours with a high file count in the filesystem. Better to re-deploy now
than when you have data serving in production.
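If redeploying onto XFS (the usual production recommendation at the time), something along these lines should work; note that the --fs-type option and the node/disk names are my assumption here, so check your ceph-deploy version's help first:
ceph-deploy osd create --fs-type xfs node1:sdb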
Thanks,
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Sat, Apr 19, 2014 at 5:51 PM, Gonzalo Aguilar
Journals will default to being on-disk with the OSD if there is nothing
specified on the ceph-deploy line. If you have a separate journal device,
then you should specify it per the original example syntax.
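A minimal sketch of both forms (hostname and device names are placeholders, not from the original thread):
ceph-deploy osd create node1:sdb              # journal co-located on the OSD disk
ceph-deploy osd create node1:sdb:/dev/sdc1    # journal on a separate device/partition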
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Fri, Mar 14
, aside from occasional
mailing list posts about specific counters..
Hope this helps!
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Fri, Mar 7, 2014 at 11:39 AM, Dan Ryder (daryder) dary...@cisco.com wrote:
Hello,
I'm working with two different Ceph clusters
up
osdid' to bring them up manually.
Hope this helps!
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Fri, Mar 7, 2014 at 3:06 PM, Sidharta Mukerjee smukerje...@gmail.com wrote:
When I use ceph-deploy to add a bunch of new OSDs (from a new machine),
the ceph cluster
Seems that you may also need to tell CephFS to use the new pool instead of
the default..
After CephFS is mounted, run:
# cephfs /mnt/ceph set_layout -p 4
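To confirm the layout took effect, the same (old) 'cephfs' utility can read it back; a usage sketch, assuming the tool from that era:
# cephfs /mnt/ceph show_layout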
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Fri, Feb 28, 2014 at 9:12 AM, Sage Weil s...@inktank.com wrote:
While clearly not optimal for long-term flexibility, I've found that adding
my OSDs' data partitions to fstab gets them mounted during boot, and the OSD
daemons then start automatically since their directories are already mounted.
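A minimal sketch of such an entry (device UUID, filesystem and OSD id are placeholders, not from the original message):
UUID=<osd-data-partition-uuid>  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime  0  0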
Hope this helps until a permanent fix is available.
Michael J. Kidd
Sr. Storage Consultant
It's also good to note that the m500 has built-in RAIN protection
(basically, diagonal parity at the NAND level). Should be very good for
journal consistency.
Sent from my mobile device. Please excuse brevity and typographical errors.
On Jan 15, 2014 9:07 AM, Stefan Priebe
Actually, they're very inexpensive as far as SSDs go. The 960GB m500 can
be had on Amazon for $499 US on Prime (as of yesterday, anyway).
Sent from my mobile device. Please excuse brevity and typographical errors.
On Jan 15, 2014 9:50 AM, Sebastien Han sebastien@enovance.com wrote: