[ceph-users] Build Raw Volume from Recovered RBD Objects

2016-04-19 Thread Mike Dawson
the default 4MB chunk size be handled? Should they be padded somehow? 3) If any objects were completely missing and therefore unavailable to this process, how should they be handled? I assume we need to offset/pad to compensate. -- Thanks, Mike Dawson Co-Founder & Director of Cloud Architec
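One way such a reassembly is often approached, shown here only as a rough sketch under stated assumptions (format-2 object names like rbd_data.<prefix>.<hex-index>, the default 4MB object size, GNU coreutils). Missing objects simply leave sparse zero-filled holes, and short objects are implicitly zero-padded out to their 4MB slot:

  # run in the directory holding the recovered object files
  truncate -s 20G recovered.raw                 # example: size of the original image
  for obj in rbd_data.*; do
      idx=$((16#${obj##*.}))                    # hex index taken from the object name
      dd if="$obj" of=recovered.raw bs=4M seek="$idx" conv=notrunc,sparse
  done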

Re: [ceph-users] Discuss: New default recovery config settings

2015-06-04 Thread Mike Dawson
--osd-recovery-max-active 3' If I see slow requests, I drop them down. The biggest downside to setting either to 1 seems to be the long tail issue detailed in: http://tracker.ceph.com/issues/9566 Thanks, Mike Dawson On 6/3/2015 6:44 PM, Sage Weil wrote: On Mon, 1 Jun 2015, Gregory Farnum
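For reference, a sketch of how these values are typically adjusted at runtime (the numbers are illustrative, not a recommendation):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  # and back up again once slow requests clear:
  ceph tell osd.* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 3'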

Re: [ceph-users] Negative amount of objects degraded

2014-10-30 Thread Mike Dawson
as yours. Your results may vary. - Mike Dawson On 10/30/2014 4:50 PM, Erik Logtenberg wrote: Thanks for pointing that out. Unfortunately, those tickets contain only a description of the problem, but no solution or workaround. One was opened 8 months ago and the other more than a year ago. No love

Re: [ceph-users] converting legacy puppet-ceph configured OSDs to look like ceph-deployed OSDs

2014-10-15 Thread Mike Dawson
cluster deployed with mkcephfs out of the stone ages, so your work will be very useful to me. Thanks again, Mike Dawson

Re: [ceph-users] v0.67.11 dumpling released

2014-09-25 Thread Mike Dawson
not include the proposed changes to address #9487 or #9503, right? Thanks, Mike Dawson * osd: fix mount/remount sync race (#9144 Sage Weil) Getting Ceph * Git at git://github.com/ceph/ceph.git * Tarball at http://ceph.com/download/ceph-0.67.11.tar.gz * For packages, see http

Re: [ceph-users] v0.67.11 dumpling released

2014-09-25 Thread Mike Dawson
://ceph.com/debian-dumpling/pool/main/c/ceph/libcephfs1_0.67.11-1precise_amd64.deb 404 Not Found Based on the timestamps of the files that made it, it looks like the process to publish the packages isn't still in progress, but rather failed yesterday. Thanks, Mike Dawson On 9/25/2014 11:09 AM

Re: [ceph-users] Best practice K/M-parameters EC pool

2014-08-28 Thread Mike Dawson
On 8/28/2014 11:17 AM, Loic Dachary wrote: On 28/08/2014 16:29, Mike Dawson wrote: On 8/28/2014 12:23 AM, Christian Balzer wrote: On Wed, 27 Aug 2014 13:04:48 +0200 Loic Dachary wrote: On 27/08/2014 04:34, Christian Balzer wrote: Hello, On Tue, 26 Aug 2014 20:21:39 +0200 Loic Dachary

Re: [ceph-users] Best practice K/M-parameters EC pool

2014-08-28 Thread Mike Dawson
. On Thu, Aug 28, 2014 at 10:38 AM, Mike Dawson mike.daw...@cloudapt.com mailto:mike.daw...@cloudapt.com wrote: We use 3x replication and have drives that have relatively high steady-state IOPS. Therefore, we tend to prioritize client-side IO more than a reduction from 3 copies to 2

Re: [ceph-users] How to avoid deep-scrubbing performance hit?

2014-06-09 Thread Mike Dawson
locked as it is processed? Some of my PGs will be in deep-scrub for minutes at a time. 0: http://ceph.com/docs/master/dev/osd_internals/scrub/ Thanks, Mike Dawson On 6/9/2014 6:22 PM, Craig Lewis wrote: I've correlated a large deep scrubbing operation to cluster stability problems. My primary

Re: [ceph-users] Calamari Goes Open Source

2014-05-30 Thread Mike Dawson
Great work Inktank / Red Hat! An open source Calamari will be a great benefit to the community! Cheers, Mike Dawson On 5/30/2014 6:04 PM, Patrick McGarry wrote: Hey cephers, Sorry to push this announcement so late on a Friday but... Calamari has arrived! The source code bits have been

Re: [ceph-users] Multiple L2 LAN segments with Ceph

2014-05-28 Thread Mike Dawson
: 10.2.1.1/24 - node2: 10.2.1.2/24 - public-leaf2: 10.2.2.0/24 ceph.conf would be: cluster_network: 10.1.0.0/255.255.0.0 public_network: 10.2.0.0/255.255.0.0 - Mike Dawson On 5/28/2014 1:01 PM, Travis Rhoden wrote: Hi folks, Does anybody know if there are any issues running Ceph
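In standard ceph.conf INI syntax that would look roughly like the following (CIDR form; the netmask form quoted above is equivalent):

  [global]
      cluster network = 10.1.0.0/16
      public network  = 10.2.0.0/16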

Re: [ceph-users] How to find the disk partitions attached to a OSD

2014-05-21 Thread Mike Dawson
Perhaps: # mount | grep ceph - Mike Dawson On 5/21/2014 11:00 AM, Sharmila Govind wrote: Hi, I am new to Ceph. I have a storage node with 2 OSDs. Iam trying to figure out to which pyhsical device/partition each of the OSDs are attached to. Is there are command that can be executed
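Run on the storage node itself; the output below is purely illustrative:

  # mount | grep ceph
  /dev/sdb1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime)
  /dev/sdc1 on /var/lib/ceph/osd/ceph-1 type xfs (rw,noatime)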

Re: [ceph-users] How to find the disk partitions attached to a OSD

2014-05-21 Thread Mike Dawson
any 'ceph' related mounts. Thanks, Sharmila On Wed, May 21, 2014 at 8:34 PM, Mike Dawson mike.daw...@cloudapt.com mailto:mike.daw...@cloudapt.com wrote: Perhaps: # mount | grep ceph - Mike Dawson On 5/21/2014 11:00 AM, Sharmila Govind wrote: Hi, I am new

[ceph-users] PG Selection Criteria for Deep-Scrub

2014-05-20 Thread Mike Dawson
the longest. Thanks, Mike Dawson

Re: [ceph-users] PG Selection Criteria for Deep-Scrub

2014-05-20 Thread Mike Dawson
have set noscrub and nodeep-scrub, as well as noout and nodown off and on while I performed various maintenance, but that hasn't (apparently) impeded the regular schedule. With what frequency are you setting the nodeep-scrub flag? -Aaron On Tue, May 20, 2014 at 5:21 PM, Mike Dawson mike.daw

[ceph-users] Occasional Missing Admin Sockets

2014-05-13 Thread Mike Dawson
Upstart to control daemons. I never see this issue on Ubuntu / Dumpling / sysvinit. Has anyone else seen this issue or know the likely cause? -- Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th Street, Suite 170 Indianapolis, IN 46250

Re: [ceph-users] Monitoring ceph statistics using rados python module

2014-05-13 Thread Mike Dawson
sourced at some point in the future. Cheers, Mike Dawson On 5/13/2014 12:33 PM, Adrian Banasiak wrote: Thanks for the suggestion about the admin daemon, but it looks single-OSD oriented. I have used perf dump on the mon socket and it outputs some interesting data for monitoring the whole cluster: { cluster
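The monitor admin-socket query referred to above looks like this (the monitor name is illustrative):

  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok perf dump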

Re: [ceph-users] Occasional Missing Admin Sockets

2014-05-13 Thread Mike Dawson
Greg/Loic, I can confirm that logrotate --force /etc/logrotate.d/ceph removes the monitor admin socket on my boxes running 0.80.1 just like the description in Issue 7188 [0]. 0: http://tracker.ceph.com/issues/7188 Should that bug be reopened? Thanks, Mike Dawson On 5/13/2014 2:10 PM

Re: [ceph-users] v0.80 Firefly released

2014-05-09 Thread Mike Dawson
the potential to work well in preventing unnecessary read starvation in certain situations. 0: http://tracker.ceph.com/issues/8323#note-1 Cheers, Mike Dawson On 5/8/2014 8:20 AM, Andrey Korolyov wrote: Mike, would you mind to write your experience if you`ll manage to get this flow through

Re: [ceph-users] v0.80 Firefly released

2014-05-07 Thread Mike Dawson
of setting primary affinity is low enough, perhaps this strategy could be automated by the ceph daemons. Thanks, Mike Dawson -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-07 Thread Mike Dawson
cause), but that tends to cause me more trouble than it's worth. Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th Street, Suite 170 Indianapolis, IN 46250 On 5/7/2014 1:28 PM, Craig Lewis wrote: The 5 OSDs that are down have all been kicked out for being

[ceph-users] Deep-Scrub Scheduling

2014-05-07 Thread Mike Dawson
://www.mikedawson.com/deep-scrub-issue1.jpg 1: http://www.mikedawson.com/deep-scrub-issue2.jpg Thanks, Mike Dawson

Re: [ceph-users] Deep-Scrub Scheduling

2014-05-07 Thread Mike Dawson
seemingly for days at a time, until the next onslaught. If driven by the max scrub interval, shouldn't it jump quickly back up? Is there a way to find the last scrub time for a given PG via the CLI to know for sure? Thanks, Mike Dawson On 5/7/2014 10:59 PM, Gregory Farnum wrote

Re: [ceph-users] ceph-deploy osd activate error: AttributeError: 'module' object has no attribute 'logger' exception

2014-04-30 Thread Mike Dawson
Victor, This is a verified issue reported earlier today: http://tracker.ceph.com/issues/8260 Cheers, Mike On 4/30/2014 3:10 PM, Victor Bayon wrote: Hi all, I am following the quick-ceph-deploy tutorial [1] and I am getting a error when running the ceph-deploy osd activate and I am getting

Re: [ceph-users] Backfill and Recovery traffic shaping

2014-04-19 Thread Mike Dawson
Hi Greg, On 4/19/2014 2:20 PM, Greg Poirier wrote: We have a cluster in a sub-optimal configuration with data and journal colocated on OSDs (that coincidentally are spinning disks). During recovery/backfill, the entire cluster suffers degraded performance because of the IO storm that backfills
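For reference (not necessarily the reply given in the thread), a sketch of the usual knobs for reining in backfill/recovery I/O; the values are illustrative only:

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'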

Re: [ceph-users] RBD write access patterns and atime

2014-04-17 Thread Mike Dawson
Thanks Dan! Thanks, Mike Dawson On 4/17/2014 4:06 AM, Dan van der Ster wrote: Mike Dawson wrote: Dan, Could you describe how you harvested and analyzed this data? Even better, could you share the code? Cheers, Mike First enable debug_filestore=10, then you'll see logs like this: 2014-04

Re: [ceph-users] RBD write access patterns and atime

2014-04-16 Thread Mike Dawson
Dan, Could you describe how you harvested and analyzed this data? Even better, could you share the code? Cheers, Mike On 4/16/2014 11:08 AM, Dan van der Ster wrote: Dear ceph-users, I've recently started looking through our FileStore logs to better understand the VM/RBD IO patterns, and

[ceph-users] Migrate from mkcephfs to ceph-deploy

2014-04-14 Thread Mike Dawson
Hello, I have a production cluster that was deployed with mkcephfs around the Bobtail release. Quite a bit has changed in regards to ceph.conf conventions, ceph-deploy, symlinks to journal partitions, udev magic, and upstart. Is there any path to migrate these OSDs up to the new style

Re: [ceph-users] Error while provisioning my first OSD

2014-04-05 Thread Mike Dawson
Adam, I believe you need the command 'ceph osd create' prior to 'ceph-osd -i X --mkfs --mkkey' for each OSD you add. http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual Cheers, Mike On 4/5/2014 7:37 PM, Adam Clark wrote: HI all, I am trying to setup a Ceph
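Condensed from the linked documentation, the manual sequence looks roughly like this (X is the id returned by 'ceph osd create'; the caps and crush weight are illustrative and vary by release):

  ceph osd create                      # allocates and prints the new osd id (X)
  mkdir -p /var/lib/ceph/osd/ceph-X    # then mkfs/mount the data disk here
  ceph-osd -i X --mkfs --mkkey
  ceph auth add osd.X osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-X/keyring
  ceph osd crush add osd.X 1.0 host=$(hostname -s)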

Re: [ceph-users] Pause i/o from time to time

2013-12-29 Thread Mike Dawson
config without any tuning or big configurations. Kind regards / Best Regards, Uwe Grohnwaldt - Original Message - From: Timofey timo...@koolin.ru To: Mike Dawson mike.daw...@cloudapt.com Cc: ceph-users@lists.ceph.com Sent: Tuesday, 17 September 2013 22:37:44

Re: [ceph-users] rebooting nodes in a ceph cluster

2013-12-21 Thread Mike Dawson
It is also useful to mention that you can set the noout flag when maintenance of any length needs to exceed the 'mon osd down out interval'.
$ ceph osd set noout ** no re-balancing will happen **
$ ceph osd unset noout ** normal re-balancing rules will resume **
- Mike Dawson
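A sketch of the full maintenance workflow implied above:

  $ ceph osd set noout
  $ ceph osd dump | grep flags         # confirm 'noout' is listed
    ... reboot / service the node ...
  $ ceph osd unset noout               # once its OSDs are back up and in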

Re: [ceph-users] rebooting nodes in a ceph cluster

2013-12-21 Thread Mike Dawson
I think my wording was a bit misleading in my last message. Instead of no re-balancing will happen, I should have said that no OSDs will be marked out of the cluster with the noout flag set. - Mike On 12/21/2013 2:06 PM, Mike Dawson wrote: It is also useful to mention that you can set

Re: [ceph-users] Sanity check of deploying Ceph very unconventionally (on top of RAID6, with very few nodes and OSDs)

2013-12-17 Thread Mike Dawson
Christian, I think you are going to suffer the effects of spindle contention with this type of setup. Based on your email and my assumptions, I will use the following inputs:
- 4 OSDs, each backed by a 12-disk RAID 6 set
- 75 IOPS for each 7200rpm 3TB drive
- RAID 6 write penalty of 6
- OSD
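A rough back-of-the-envelope using those inputs (illustrative only):

  per RAID 6 set : 12 drives x 75 IOPS = 900 raw IOPS
  write penalty 6: 900 / 6             = ~150 sustained write IOPS per OSD
  4 OSDs total   : 4 x 150             = ~600 write IOPS cluster-wide,
                   before replication and journal overhead reduce it further.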

Re: [ceph-users] Adding new OSDs, need to increase PGs?

2013-12-03 Thread Mike Dawson
have? Any RAID involved under your OSDs? Thanks, Mike Dawson On 12/3/2013 1:31 AM, Robert van Leeuwen wrote: On 2 dec. 2013, at 18:26, Brian Andrus brian.and...@inktank.com wrote: Setting your pg_num and pgp_num to say... 1024 would A) increase data granularity, B) likely lend

Re: [ceph-users] Adding new OSDs, need to increase PGs?

2013-12-03 Thread Mike Dawson
Robert, Do you have rbd writeback cache enabled on these volumes? That could certainly explain the higher than expected write performance. Any chance you could re-test with rbd writeback on vs. off? Thanks, Mike Dawson On 12/3/2013 10:37 AM, Robert van Leeuwen wrote: Hi Mike, I am using

Re: [ceph-users] how to enable rbd cache

2013-11-25 Thread Mike Dawson
-devel@vger.kernel.org/msg16168.html 4) Once you get an RBD admin socket, query it like: ceph --admin-daemon /var/run/ceph/rbd-29050.asok config show | grep rbd Cheers, Mike Dawson On 11/25/2013 11:12 AM, Gregory Farnum wrote: On Mon, Nov 25, 2013 at 5:58 AM, Mark Nelson mark.nel
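A sketch of the client-side ceph.conf settings this refers to; the admin socket path pattern is an assumption chosen to match the example socket name above:

  [client]
      rbd cache = true
      rbd cache writethrough until flush = true
      admin socket = /var/run/ceph/rbd-$pid.asok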

Re: [ceph-users] Running on disks that lose their head

2013-11-07 Thread Mike Dawson
Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th Street, Suite 170 Indianapolis, IN 46250 On 11/7/2013 2:12 PM, Kyle Bader wrote: Once I know a drive has had a head failure, do I trust that the rest of the drive isn't going to go at an inconvenient

Re: [ceph-users] Ceph User Committee

2013-11-06 Thread Mike Dawson
I also have time I could spend. Thanks for getting this started Loic! Thanks, Mike Dawson On 11/6/2013 12:35 PM, Loic Dachary wrote: Hi Ceph, I would like to open a discussion about organizing a Ceph User Committee. We briefly discussed the idea with Ross Turk, Patrick McGarry and Sage Weil

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread Mike Dawson
We just fixed a performance issue on our cluster related to spikes of high latency on some of our SSDs used for osd journals. In our case, the slow SSDs showed spikes of 100x higher latency than expected. What SSDs were you using that were so slow? Cheers, Mike On 11/6/2013 12:39 PM, Dinu

Re: [ceph-users] ceph cluster performance

2013-11-06 Thread Mike Dawson
://github.com/gregsfortytwo/fsync-tester Thanks, Mike Dawson On 11/6/2013 4:18 PM, Dinu Vlad wrote: ST240FN0021 connected via a SAS2x36 to a LSI 9207-8i. By fixed - you mean replaced the SSDs? Thanks, Dinu On Nov 6, 2013, at 10:25 PM, Mike Dawson mike.daw...@cloudapt.com wrote: We just

Re: [ceph-users] Ceph health checkup

2013-10-31 Thread Mike Dawson
Narendra, This is an issue. You really want your cluster to be HEALTH_OK with all PGs active+clean. Some exceptions apply (like scrub / deep-scrub). What do 'ceph health detail' and 'ceph osd tree' show? Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East

Re: [ceph-users] How can I check the image's IO ?

2013-10-30 Thread Mike Dawson
Vernon, You can use the rbd command bench-write documented here: http://ceph.com/docs/next/man/8/rbd/#commands The command might look something like:
rbd --pool test-pool bench-write --io-size 4096 --io-threads 16 --io-total 1GB test-image
Some other interesting flags are --rbd-cache,
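For example, to compare runs with the cache flag mentioned above toggled (option placement and spelling are an assumption; adjust as needed):

  rbd --rbd-cache=true  --pool test-pool bench-write --io-size 4096 --io-threads 16 --io-total 1GB test-image
  rbd --rbd-cache=false --pool test-pool bench-write --io-size 4096 --io-threads 16 --io-total 1GB test-image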

Re: [ceph-users] Ceph monitor problems

2013-10-30 Thread Mike Dawson
Aaron, Don't mistake valid for advisable. For documentation purposes, three monitors is the advisable initial configuration for multi-node ceph clusters. If there is a valid need for more than three monitors, it is advisable to add them two at a time to maintain an odd number of total

Re: [ceph-users] About use same SSD for OS and Journal

2013-10-25 Thread Mike Dawson
were you seeing on the cluster during the periods where things got laggy due to backfills, etc? Last, did you attempt to throttle using ceph config setting in the old setup? Do you need to throttle in your current setup? Thanks, Mike Dawson On 10/24/2013 10:40 AM, Kurt Bauer wrote: Hi, we

Re: [ceph-users] saucy salamander support?

2013-10-22 Thread Mike Dawson
For the time being, you can install the Raring debs on Saucy without issue. echo deb http://ceph.com/debian-dumpling/ raring main | sudo tee /etc/apt/sources.list.d/ceph.list I'd also like to register a +1 request for official builds targeted at Saucy. Cheers, Mike On 10/22/2013 11:42
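Followed by the usual install steps (a sketch):

  sudo apt-get update && sudo apt-get install ceph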

Re: [ceph-users] Multiply OSDs per host strategy ?

2013-10-16 Thread Mike Dawson
Andrija, You can use a single pool and the proper CRUSH rule step chooseleaf firstn 0 type host to accomplish your goal. http://ceph.com/docs/master/rados/operations/crush-map/ Cheers, Mike Dawson On 10/16/2013 5:16 PM, Andrija Panic wrote: Hi, I have 2 x 2TB disks, in 3 servers, so
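For illustration, a rule containing that step might look like the following (names and numbers are placeholders):

  rule replicated_by_host {
      ruleset 1
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type host
      step emit
  }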

Re: [ceph-users] Ceph and RAID

2013-10-03 Thread Mike Dawson
/01Planning/02Blueprints/Emperor/Erasure_coded_storage_backend_%28step_2%29 Initial release is scheduled for Ceph's Firefly release in February 2014. Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC On 10/3/2013 2:44 PM, Aronesty, Erik wrote: Does Ceph really halve your

Re: [ceph-users] RBD Snap removal priority

2013-09-27 Thread Mike Dawson
/issues/6333 I think this family of issues speaks to the need for Ceph to have more visibility into the underlying storage's limitations (especially spindle contention) when performing known expensive maintenance operations. Thanks, Mike Dawson On 9/27/2013 12:25 PM, Travis Rhoden wrote: Hello

Re: [ceph-users] Pause i/o from time to time

2013-09-17 Thread Mike Dawson
the cause. To re-enable scrub and deep-scrub: # ceph osd unset noscrub # ceph osd unset nodeep-scrub Because you seem to only have two OSDs, you may also be saturating your disks even without scrub or deep-scrub. http://tracker.ceph.com/issues/6278 Cheers, Mike Dawson On 9/16/2013 12:30 PM
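The matching commands to disable scrubbing while testing, i.e. the other half of the cycle (a sketch):

  # ceph osd set noscrub
  # ceph osd set nodeep-scrub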

Re: [ceph-users] status of glance/cinder/nova integration in openstack grizzly

2013-09-10 Thread Mike Dawson
Darren, I can confirm Copy on Write (show_image_direct_url = True) does work in Grizzly. It sounds like you are close. To check permissions, run 'ceph auth list', and reply with client.images and client.volumes (or whatever keys you use in Glance and Cinder). Cheers, Mike Dawson On 9/10
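For reference, a sketch of the pieces being checked (file path and key names follow the defaults mentioned in the thread):

  # /etc/glance/glance-api.conf
  show_image_direct_url = True
  # verify the client capabilities:
  ceph auth get client.images
  ceph auth get client.volumes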

Re: [ceph-users] status of glance/cinder/nova integration in openstack grizzly

2013-09-10 Thread Mike Dawson
rbd_children
client.volumes
    key: AQAnAy9ScPB4IRAAtxD/V1rDciqFiT9AMPPr+A==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes
Thanks Darren On 10 September 2013 20:08, Mike Dawson mike.daw...@cloudapt.com

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-29 Thread Mike Dawson
appear very promising. Thanks for your work! I'll report back tomorrow if I have any new results. Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th Street, Suite 170 Indianapolis, IN 46250 On 8/29/2013 2:52 PM, Oliver Daudey wrote: Hey Mark and list, FYI

Re: [ceph-users] Openstack glance ceph rbd_store_user authentification problem

2013-08-08 Thread Mike Dawson
Steffan, It works for me. I have:
user@node:/etc/ceph# cat /etc/glance/glance-api.conf | grep rbd
default_store = rbd # glance.store.rbd.Store,
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = images
rbd_store_pool = images
rbd_store_chunk_size = 4
Thanks, Mike Dawson

Re: [ceph-users] how to recover the osd.

2013-08-08 Thread Mike Dawson
Looks like you didn't get osd.0 deployed properly. Can you show: - ls /var/lib/ceph/osd/ceph-0 - cat /etc/ceph/ceph.conf Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th Street, Suite 170 Indianapolis, IN 46250 On 8/8/2013 9:13 AM, Suresh Sadhu wrote

Re: [ceph-users] Large storage nodes - best practices

2013-08-05 Thread Mike Dawson
On 8/5/2013 12:51 PM, Brian Candler wrote: On 05/08/2013 17:15, Mike Dawson wrote: Short answer: Ceph generally is used with multiple OSDs per node. One OSD per storage drive with no RAID is the most common setup. At 24- or 36-drives per chassis, there are several potential bottlenecks

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process, [Qemu-devel] [Bug 1207686]

2013-08-05 Thread Mike Dawson
cache = true and cache=writeback - qemu 1.4.0 1.4.0+dfsg-1expubuntu4 - Ubuntu Raring with 3.8.0-25-generic This issue is reproducible in my environment, and I'm willing to run any wip branch you need. What else can I provide to help? Thanks, Mike Dawson On 8/5/2013 3:48 AM, Stefan Hajnoczi wrote

Re: [ceph-users] qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to kernel_hung_tasks_timout_secs message and unresponsive qemu-process

2013-08-02 Thread Mike Dawson
We'll do that over the weekend. If you could as well, we'd love the help! [1] http://www.gammacode.com/kvm/wedged-with-timestamps.txt [2] http://www.gammacode.com/kvm/not-wedged.txt Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th Street, Suite 170

Re: [ceph-users] Why is my mon store.db is 220GB?

2013-08-01 Thread Mike Dawson
though. See some history here: http://tracker.ceph.com/issues/4895 Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th Street, Suite 170 Indianapolis, IN 46250 On 8/1/2013 6:52 PM, Jeppesen, Nelson wrote: My Mon store.db has been at 220GB for a few months now

Re: [ceph-users] Defective ceph startup script

2013-07-31 Thread Mike Dawson
-daemon /var/run/ceph/ceph-osd.0.asok version {"version":"0.61.7"} Also, I use 'service ceph restart' on Ubuntu 13.04 running a mkcephfs deployment. It may be different when using ceph-deploy. Thanks, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th Street, Suite 170

Re: [ceph-users] Production/Non-production segmentation

2013-07-31 Thread Mike Dawson
production services. A separate non-production cluster will allow you to test and validate new versions (including point releases within a stable series) before you attempt to upgrade your production cluster. Cheers, Mike Dawson Co-Founder Director of Cloud Architecture Cloudapt LLC 6330 East 75th

Re: [ceph-users] Production/Non-production segmentation

2013-07-31 Thread Mike Dawson
On 7/31/2013 3:34 PM, Greg Poirier wrote: On Wed, Jul 31, 2013 at 12:19 PM, Mike Dawson mike.daw...@cloudapt.com mailto:mike.daw...@cloudapt.com wrote: Due to the speed of releases in the Ceph project, I feel having separate physical hardware is the safer way to go, especially

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread Mike Dawson
You can specify the uuid in the secret.xml file like:
<secret ephemeral='no' private='no'>
  <uuid>bdf77f5d-bf0b-1053-5f56-cd76b32520dc</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
Then use that same uuid on all machines in cinder.conf:
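A sketch of how that secret is typically registered and referenced (the key lookup assumes a client.volumes cephx user, as in the XML above):

  virsh secret-define --file secret.xml
  virsh secret-set-value --secret bdf77f5d-bf0b-1053-5f56-cd76b32520dc --base64 $(ceph auth get-key client.volumes)
  # cinder.conf on each volume/compute node:
  rbd_secret_uuid = bdf77f5d-bf0b-1053-5f56-cd76b32520dc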

Re: [ceph-users] One monitor won't start after upgrade from 6.1.3 to 6.1.4

2013-06-25 Thread Mike Dawson
Darryl, I've seen this issue a few times recently. I believe Joao was looking into it at one point, but I don't know if it has been resolved (Any news Joao?). Others have run into it too. Look closely at: http://tracker.ceph.com/issues/4999

Re: [ceph-users] One monitor won't start after upgrade from 6.1.3 to 6.1.4

2013-06-25 Thread Mike Dawson
: Thanks for your prompt response. Given that my mon.c /var/lib/ceph/mon/ceph-c is currently populated, should I delete its contents after removing the monitor and before re-adding it? Darryl On 06/26/13 12:50, Mike Dawson wrote: Darryl, I've seen this issue a few times recently. I believe

Re: [ceph-users] Multi Rack Reference architecture

2013-06-04 Thread Mike Dawson
Behind a registration form, but iirc, this is likely what you are looking for: http://www.inktank.com/resource/dreamcompute-architecture-blueprint/ - Mike On 5/31/2013 3:26 AM, Gandalf Corvotempesta wrote: In reference architecture PDF, downloadable from your website, there was some

Re: [ceph-users] mon IO usage

2013-05-21 Thread Mike Dawson
Sylvain, I can confirm I see a similar traffic pattern. Any time I have lots of writes going to my cluster (like heavy writes from RBD or remapping/backfilling after losing an OSD), I see all sorts of monitor issues. If my monitor leveldb store.db directories grow past some unknown point

Re: [ceph-users] Running Ceph issues: HEALTH_WARN, unknown auth protocol, others

2013-05-01 Thread Mike Dawson
= ceph On Wed, May 1, 2013 at 12:14 PM, Mike Dawson mike.daw...@scholarstack.com mailto:mike.daw...@scholarstack.com wrote: Wyatt, Please post your ceph.conf. - mike On 5/1/2013 12:06 PM, Wyatt Gorman wrote: Hi everyone, I'm setting up a test ceph cluster

Re: [ceph-users] cuttlefish countdown -- OSD doesn't get marked out

2013-04-26 Thread Mike Dawson
Sage, I confirm this issue. The requested info is listed below. *Note that due to the pre-Cuttlefish monitor sync issues, this deployment has been running three monitors (mon.b and mon.c working properly in quorum. mon.a stuck forever synchronizing). For the past two hours, no OSD processes

Re: [ceph-users] Crushmap doesn't match osd tree

2013-04-25 Thread Mike Dawson
Mike, I use a process like:
crushtool -c new-crushmap.txt -o new-crushmap
ceph osd setcrushmap -i new-crushmap
I did not attempt to validate your crush map. If that command fails, I would scrutinize your crushmap for validity/correctness. Once you have the new crushmap injected, you can
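The full round-trip, including extracting and decompiling the current map first (filenames are illustrative):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o new-crushmap.txt
  # ...edit new-crushmap.txt...
  crushtool -c new-crushmap.txt -o new-crushmap
  ceph osd setcrushmap -i new-crushmap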

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-18 Thread Mike Dawson
Greg, Looks like Sage has a fix for this problem. In case it matters, I have seen a few cases that conflict with your notes in this thread and the bug report. I have seen the bug exclusively on new Ceph installs (without upgrading from bobtail), so it is not isolated to upgrades. Further,

Re: [ceph-users] Monitor Access Denied message to itself?

2013-04-08 Thread Mike Dawson
Matthew, I have seen the same behavior on 0.59. Ran through some troubleshooting with Dan and Joao on March 21st and 22nd, but I haven't looked at it since then. If you look at running processes, I believe you'll see an instance of ceph-create-keys start each time you start a Monitor. So,