Re: [ceph-users] Cache mode readforward mode will eat your babies?

2017-06-12 Thread Christian Balzer
Hello, just to throw some hard numbers into the ring, I've (very much STRESS) tested readproxy vs. readforward with more or less expected results. New Jewel cluster, 3 cache-tier nodes (5 OSD SSDs each), 3 HDD nodes, IPoIB network. Notably 2x E5-2623 v3 @ 3.00GHz in the cache-tiers. 2 VMs (on
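For anyone wanting to reproduce the comparison, switching an existing cache tier between those modes is a single command per pool; a minimal sketch, assuming a cache pool named 'cache' (the pool name is a placeholder, not from the post):

    # proxy reads through the cache tier to the backing pool
    ceph osd tier cache-mode cache readproxy
    # or redirect clients straight to the backing pool for reads
    ceph osd tier cache-mode cache readforward
    # (newer releases guard readforward/forward behind --yes-i-really-mean-it)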

[ceph-users] cache tier use cases

2017-06-12 Thread Roos'lan
Hi community! I'm wondering what the actual use cases for cache tiering are. Can I expect a performance improvement in a scenario where I use Ceph with RBD for VM hosting? The current pool includes 15 OSDs on 10K SAS drives, with an SSD journal for every 5 OSDs. Thanks, Ruslan

[ceph-users] osd_op_tp timeouts

2017-06-12 Thread Tyler Bischel
Hi, We've been having this ongoing problem with threads timing out on the OSDs. Typically we'll see the OSD become unresponsive for about a minute, as threads from other OSDs time out. The timeouts don't seem to be correlated to high load. We turned up the logs to 10/10 for part of a day to
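For anyone wanting to gather the same data, a minimal sketch of bumping OSD debug logging at runtime (which subsystems were actually raised to 10/10 isn't stated in the post, so debug_osd here is an assumption):

    # raise OSD debug logging on all OSDs without a restart
    ceph tell osd.* injectargs '--debug-osd 10/10'
    # ...and turn it back down once enough has been captured
    ceph tell osd.* injectargs '--debug-osd 0/5'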

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread Mazzystr
Since your app is an Apache/PHP app, is it possible for you to reconfigure it to use an S3 module rather than POSIX open/file() calls? Then with Ceph, drop CephFS and configure the Civetweb S3 gateway? You can have "active-active" endpoints with round-robin DNS or an F5 or something. You would also have
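If anyone goes that route, a minimal ceph.conf sketch of a civetweb-fronted radosgw instance (the section name, host, port, and DNS name are placeholder assumptions, not from the original mail):

    [client.rgw.gw1]
        host = gw1
        # civetweb is the embedded HTTP frontend bundled with radosgw
        rgw frontends = civetweb port=80
        # hostname the S3 clients will use; pair with round-robin DNS across gateways
        rgw dns name = s3.example.com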

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread David Turner
Then you want separate partitions for each OSD journal. If you have 4 HDD OSDs using this as their journal, you should have 4x 5GB partitions on the SSD. On Mon, Jun 12, 2017 at 12:07 PM Deepak Naidu wrote: > Thanks for the note, yes I know them all. It will be shared
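A minimal sketch of carving those four journal partitions out of the shared SSD with sgdisk (the device name is a placeholder, and the GPT type code shown is the conventional Ceph journal one; double-check both before running anything):

    # create 4 x 5GB journal partitions on the shared SSD (/dev/sdX is a placeholder)
    for i in 1 2 3 4; do
        sgdisk --new=${i}:0:+5G \
               --typecode=${i}:45b0969e-9b03-4f30-b4c6-b4b80ceff106 \
               --change-name=${i}:"ceph journal" /dev/sdX
    done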

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread Deepak Naidu
Thanks for the note; yes, I know them all. It will be shared among 3-4 HDD OSD disks. -- Deepak On Jun 12, 2017, at 7:07 AM, David Turner > wrote: Why do you want a 70GB journal? You linked to the documentation, so I'm assuming

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread Daniel Carrasco
2017-06-12 16:10 GMT+02:00 David Turner : > I have an incredibly light-weight cephfs configuration. I set up an MDS > on each mon (3 total), and have 9TB of data in cephfs. This data only has > 1 client that reads a few files at a time. I haven't noticed any downtime >

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread David Turner
I have an incredibly light-weight cephfs configuration. I set up an MDS on each mon (3 total), and have 9TB of data in cephfs. This data only has 1 client that reads a few files at a time. I haven't noticed any downtime when it fails over to a standby MDS. So it definitely depends on your

Re: [ceph-users] ceph-deploy , osd_journal_size and entire disk partiton for journal

2017-06-12 Thread David Turner
Why do you want a 70GB journal? You linked to the documentation, so I'm assuming that you followed the formula stated to figure out how big your journal should be... "osd journal size = {2 * (expected throughput * filestore max sync interval)}". I've never heard of a cluster that requires such a
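For what it's worth, plugging typical numbers into that formula shows how small the result usually is; a rough worked example assuming ~100 MB/s of sustained write throughput per OSD and the default filestore max sync interval of 5 seconds (both figures are illustrative assumptions):

    # osd journal size = 2 * (expected throughput * filestore max sync interval)
    #   2 * (100 MB/s * 5 s) = 1000 MB, roughly 1 GB per journal
    #   2 * (500 MB/s * 5 s) = 5000 MB, roughly 5 GB even for a very fast device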

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread John Petrini
We use the following in our ceph.conf for MDS failover. We're running one active and one standby. Last time it failed over, there was about 2 minutes of downtime before the mounts started responding again, but it did recover gracefully. [mds] max_mds = 1 mds_standby_for_rank = 0 mds_standby_replay
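Written out as a ceph.conf fragment, that looks roughly like the following (the mds_standby_replay value is truncated above, so 'true' here is an assumption):

    [mds]
        # a single active MDS rank
        max_mds = 1
        # the standby daemon shadows rank 0 ...
        mds_standby_for_rank = 0
        # ... and keeps replaying its journal so failover is faster
        mds_standby_replay = true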

Re: [ceph-users] RGW: Truncated objects and bad error handling

2017-06-12 Thread Jens Rosenboom
Adding ceph-devel as this now involves two bugs that are IMO critical, one resulting in data loss, the other in data not getting removed properly. 2017-06-07 9:23 GMT+00:00 Jens Rosenboom : > 2017-06-01 18:52 GMT+00:00 Gregory Farnum : >> >> >> On Thu,

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread Daniel Carrasco
2017-06-12 10:49 GMT+02:00 Burkhard Linke < burkhard.li...@computational.bio.uni-giessen.de>: > Hi, > > > On 06/12/2017 10:31 AM, Daniel Carrasco wrote: > >> Hello, >> >> I'm very new on Ceph, so maybe this question is a noob question. >> >> We have an architecture that have some web servers

Re: [ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread Burkhard Linke
Hi, On 06/12/2017 10:31 AM, Daniel Carrasco wrote: Hello, I'm very new on Ceph, so maybe this question is a noob question. We have an architecture that have some web servers (nginx, php...) with a common File Server through NFS. Of course that is a SPOF, so we want to create a multi FS to

[ceph-users] HA Filesystem mode (MON, OSD, MDS) with Ceph and HA of MDS daemon.

2017-06-12 Thread Daniel Carrasco
Hello, I'm very new to Ceph, so maybe this is a noob question. We have an architecture with some web servers (nginx, php...) sharing a common file server through NFS. Of course that is a SPOF, so we want to create a multi-FS setup to avoid future problems. We've already tested GlusterFS,

Re: [ceph-users] RGW: Auth error with hostname instead of IP

2017-06-12 Thread Ben Morrice
Hello Eric, You are probably hitting the git commits listed in this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017731.html If this is the same behaviour, your options are: a) set all FQDNs inside the hostnames array of your zonegroup(s) or b) remove 'rgw dns
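For option a), a minimal sketch of adding the names to the zonegroup (the zonegroup name 'default' and the hostnames are placeholder assumptions):

    # dump the zonegroup, add your FQDNs to its "hostnames" array, push it back
    radosgw-admin zonegroup get --rgw-zonegroup=default > zg.json
    #   ... edit zg.json: "hostnames": ["rgw.example.com", "s3.example.com"] ...
    radosgw-admin zonegroup set --rgw-zonegroup=default < zg.json
    radosgw-admin period update --commit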