[ceph-users] Announcing: Embedded Ceph and Rook

2016-11-30 Thread Bassam Tabbara
Hello Cephers, I wanted to let you know about a new library that is now available in master. It's called “libcephd” and it enables the embedding of Ceph daemons like MON and OSD (and soon MDS and RGW) into other applications. Using libcephd it's possible to create new applications that closely

[ceph-users] test

2016-11-30 Thread Dan Mick
please discard

Re: [ceph-users] Adding second interface to storage network - issue

2016-11-30 Thread Mike Jacobacci
Ok, thanks... I have read that Ceph prefers jumbo frames, and I noticed that the switch ports two of the nodes were connected to showed high RX errors due to packet sizes over 1500, even though none of the nodes were configured for jumbo frames. But at this point I am happy with the

Re: [ceph-users] Ceph Ceilometer Integration

2016-11-30 Thread Shake Chen
Hi, if you use Ceph and want to monitor it with Ceilometer, you can try https://github.com/openstack/collectd-ceilometer-plugin, which monitors Ceph through collectd. On Thu, Dec 1, 2016 at 2:05 AM, Patrick McGarry wrote: > Hey Satheesh, > > Moving this over to ceph-user where

Re: [ceph-users] Ceph Ceilometer Integration

2016-11-30 Thread Sean Redmond
Hi Satheesh, Do you have anything in the ceilometer error logs? Thanks On Wed, Nov 30, 2016 at 6:05 PM, Patrick McGarry wrote: > Hey Satheesh, > > Moving this over to ceph-user where it'll get the appropriate > eyeballs. Might also be worth a visit to the #ceph irc

Re: [ceph-users] Adding second interface to storage network - issue

2016-11-30 Thread John Petrini
Yes, that should work. Though I'd be wary of increasing the MTU to 9000, as this could introduce other issues. Jumbo frames don't provide a very significant performance increase, so I wouldn't recommend it unless you have a very good reason to make the change. If you do want to go down that path I'd
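If you do go down the jumbo-frame path, one way to verify that large frames actually pass end-to-end before committing the change cluster-wide is a don't-fragment ping. This is a minimal sketch; the hostname is a placeholder for a peer on the storage network:

```shell
# 8972 bytes of payload = 9000 MTU - 20 (IP header) - 8 (ICMP header).
# -M do sets the don't-fragment bit, so the ping fails if any hop
# along the path cannot carry a 9000-byte frame.
ping -M do -s 8972 -c 3 osd-node-1
```

If this fails while a normal ping succeeds, some device on the path (NIC, switch port, or peer) is still at MTU 1500.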

Re: [ceph-users] Ceph Ceilometer Integration

2016-11-30 Thread Patrick McGarry
Hey Satheesh, Moving this over to ceph-user where it'll get the appropriate eyeballs. Might also be worth a visit to the #ceph irc channel on oftc.net. Thanks. On Wed, Nov 30, 2016 at 12:27 PM, satheesh prabhakaran wrote: > Hi Team, > > I have installed openstack using

Re: [ceph-users] Adding second interface to storage network - issue

2016-11-30 Thread Mike Jacobacci
Hi John, Thanks that makes sense... So I take it if I use the same IP for the bond, I shouldn't run into the issues I ran into last night? Cheers, Mike On Wed, Nov 30, 2016 at 9:55 AM, John Petrini wrote: > For redundancy I would suggest bonding the interfaces using

Re: [ceph-users] Adding second interface to storage network - issue

2016-11-30 Thread John Petrini
For redundancy I would suggest bonding the interfaces using LACP that way both ports are combined under the same interface with the same IP. They will both send and receive traffic and if one link goes down the other continues to work. The ports will need to be configured for LACP on the switch as
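As a rough illustration of the bonding approach described above, here is a minimal LACP (802.3ad) bond sketch in Debian/Ubuntu `/etc/network/interfaces` style. The interface names and IP address are placeholders, and the switch ports must be configured as a matching LACP port-channel:

```
auto bond0
iface bond0 inet static
    address 192.168.10.11
    netmask 255.255.255.0
    bond-mode 802.3ad          # LACP: both links carry traffic
    bond-miimon 100            # link monitoring interval in ms
    bond-lacp-rate fast
    bond-xmit-hash-policy layer3+4
    bond-slaves enp3s0f0 enp3s0f1
```

With this setup both NICs sit under one logical interface with one IP, and the bond keeps working if either physical link fails.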

Re: [ceph-users] Is there a setting on Ceph that we can use to fix the minimum read size?

2016-11-30 Thread Steve Taylor
I also should have mentioned that you’ll naturally have to remount your OSD filestores once you’ve made the change to ceph.conf. You can either restart each OSD after making the config file change or simply use the mount command yourself with the remount option to add the allocsize option live
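The live-remount option mentioned above can be sketched as a single mount command per OSD. The mount point below is a placeholder for your actual OSD filestore path:

```shell
# Apply the new allocsize to a running OSD filestore without
# restarting the daemon; repeat for each OSD mount point.
mount -o remount,allocsize=1m /var/lib/ceph/osd/ceph-0
```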

[ceph-users] Adding second interface to storage network - issue

2016-11-30 Thread Mike Jacobacci
I ran into an interesting issue last night when I tried to add a second storage interface. The original 10gb storage interface on the OSD node was only set at 1500 MTU, so the plan was to bump it to 9000 and configure the second interface the same way with a diff IP and reboot. Once I did that,

Re: [ceph-users] osd down detection broken in jewel?

2016-11-30 Thread Manuel Lausch
Yes. This parameter is used in the condition described here: http://docs.ceph.com/docs/jewel/rados/configuration/mon-osd-interaction/#osds-report-their-status and works. I think the default timeout of 900s is quite large. Also in the documentation there is another function which checks the

[ceph-users] CDM Next Week

2016-11-30 Thread Patrick McGarry
Hey cephers, This is just a friendly reminder that we're 1 week out from our next developer monthly call. This month the call will be at 12:30p EST. Please get your projects entered in the wiki so people have a chance to review and be up to speed before the call. Thanks!

Re: [ceph-users] osd down detection broken in jewel?

2016-11-30 Thread Warren Wang - ISD
FYI - Setting min down reports to 10 is somewhat risky. Unless you have a really large cluster, I would advise turning that down to 5 or lower. In a past life, we used to run that number higher on super dense nodes, but we found that it would result in some instances where legitimately down
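The setting being discussed can be adjusted in ceph.conf; a minimal fragment, assuming the value of 5 suggested above:

```
[mon]
# Number of distinct OSDs that must report a peer as down before
# the monitors mark it down. Values much above 5 can delay or
# suppress legitimate down reports on small or mid-size clusters.
mon osd min down reporters = 5
```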

Re: [ceph-users] Mount of CephFS hangs

2016-11-30 Thread Jens Offenbach
Thanks a lot... "ceph daemon mds. session ls" was a good starting point. What is happening: I am in an OpenStack environment and start a VM. Afterwards, I mount a Manila share via ceph-fuse. I get a new client session in state "open" on the MDS node. Everything looks fine so far. The problem
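For reference, the session-listing command mentioned above is run through the daemon admin socket on the MDS host. The daemon name `mds.a` is a placeholder for your actual MDS id:

```shell
# List client sessions on the active MDS, including each session's
# state (e.g. "open"); run on the host where the MDS daemon lives.
ceph daemon mds.a session ls
```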

Re: [ceph-users] Is there a setting on Ceph that we can use to fix the minimum read size?

2016-11-30 Thread Steve Taylor
We’re using Ubuntu 14.04 on x86_64. We just added ‘osd mount options xfs = rw,noatime,inode64,allocsize=1m’ to the [osd] section of our ceph.conf so XFS allocates 1M blocks for new files. That only affected new files, so manual defragmentation was still necessary to clean up older data, but
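The ceph.conf change described above looks like this as a config fragment:

```
[osd]
# Have XFS allocate 1M extents for new files to reduce fragmentation.
# Takes effect only for newly written files, and only after the
# filestore is remounted (restart the OSD or remount manually).
osd mount options xfs = rw,noatime,inode64,allocsize=1m
```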

Re: [ceph-users] osd down detection broken in jewel?

2016-11-30 Thread John Petrini
It's right there in your config. mon osd report timeout = 900 See: http://docs.ceph.com/docs/jewel/rados/configuration/mon-osd-interaction/
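The option in question, shown here as a ceph.conf fragment with its Jewel default:

```
[mon]
# If the monitors receive no status report from an OSD for this many
# seconds, they mark it down regardless of peer failure reports.
# 900 s is the Jewel default; lowering it speeds up down detection
# at the cost of more sensitivity to slow or flapping OSDs.
mon osd report timeout = 900
```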

Re: [ceph-users] export-diff behavior if an initial snapshot is NOT specified

2016-11-30 Thread Jason Dillaman
The underlying issue has existed forever AFAIK, but the issue has been masked since Infernalis due to the deep-flatten changes. If you were to use krbd or an older librbd client to write to the image, the same issue would appear (until the PR is merged and backported). On Nov 30, 2016, at 1:00

Re: [ceph-users] Is there a setting on Ceph that we can use to fix the minimum read size?

2016-11-30 Thread Thomas Bennett
Hi Kate and Steve, Thanks for the replies. Always good to hear back from a community :) I'm using Linux on x86_64 architecture and the block size is limited to the page size, which is 4k. So it looks like I'm hitting hard limits in any attempt to increase the block size. I found this out by
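The page-size limit mentioned above is easy to check directly. On Linux the filesystem block size cannot exceed the kernel page size, which on x86_64 is 4096 bytes:

```shell
# Print the kernel page size in bytes; 4096 on x86_64.
getconf PAGESIZE
```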

[ceph-users] osd down detection broken in jewel?

2016-11-30 Thread Manuel Lausch
Hi, In a test with Ceph Jewel we measured how long the cluster needs to detect and mark down OSDs after they are killed (with kill -9). The result -> 900 seconds. In Hammer this took about 20 - 30 seconds. In the logfile of the leader monitor there are a lot of messages like 2016-11-30

Re: [ceph-users] Mount of CephFS hangs

2016-11-30 Thread John Spray
On Wed, Nov 30, 2016 at 6:39 AM, Jens Offenbach wrote: > Hi, > I am confronted with a persistent problem during mounting of the CephFS. I am > using Ubuntu 16.04 and solely ceph-fuse. The CephFS gets mounted by multiple > machines and very often (not always, but in most cases)

Re: [ceph-users] Introducing DeepSea: A tool for deploying Ceph using Salt

2016-11-30 Thread Lenz Grimmer
Hi all, (replying to the root of this thread, as the discussions between ceph-users and ceph-devel have somewhat diverged): On 11/03/2016 06:52 AM, Tim Serong wrote: > I thought I should make a little noise about a project some of us at > SUSE have been working on, called DeepSea. It's a

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-30 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Piotr Dzionek > Sent: 30 November 2016 11:04 > To: Brad Hubbard > Cc: Ceph Users > Subject: Re: [ceph-users] - cluster stuck and undersized

Re: [ceph-users] - cluster stuck and undersized if at least one osd is down

2016-11-30 Thread Piotr Dzionek
Hi, Ok, but I still don't get what advantage I would get from blocked IOs. If I set size=2 and min_size=2 and during rebuild another disk dies on the other node, I will lose data. I know that I should set size=3; it is much safer. But I don't see what the advantage of blocked IO is.
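The size=3/min_size=2 configuration recommended in this thread can be applied per pool. The pool name `rbd` below is a placeholder for the actual pool:

```shell
# Keep 3 replicas, but allow I/O as long as 2 are available: a single
# failed disk then neither blocks I/O nor leaves only one surviving
# copy of the data during recovery.
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
```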