Re: [ceph-users] help me turn off "many more objects than average"

2018-09-12 Thread Chad William Seys
Hi Paul, Yes, all monitors have been restarted. Chad.

[ceph-users] help me turn off "many more objects than average"

2018-09-12 Thread Chad William Seys
Hi all, I'm having trouble turning off the warning "1 pools have many more objects per pg than average". I've tried a lot of variations on the below; my current ceph.conf: #... [mon] #... mon_pg_warn_max_object_skew = 0 All of my monitors have been restarted. Seems like I'm missing
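
A likely explanation, offered as an assumption rather than a confirmed diagnosis: from Luminous on, this option is consumed by ceph-mgr rather than the monitors, so setting it under [mon] and restarting the mons has no effect. A minimal sketch of the fix:

    # ceph.conf -- place the option where the mgr reads it
    [mgr]
    mon_pg_warn_max_object_skew = 0

    # or, on Mimic and later, via the central config database:
    ceph config set mgr mon_pg_warn_max_object_skew 0
    # then restart the active mgr (e.g. systemctl restart ceph-mgr.target on its host)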

Re: [ceph-users] how can time machine know difference between cephfs fuse and kernel client?

2018-08-23 Thread Chad William Seys
Hi All, I think my problem was that I had quotas set at multiple levels of a subtree, and maybe some were conflicting. (E.g. Parent said quota=1GB, child said quota=200GB.) I could not reproduce the problem, but setting quotas only on the user's subdirectory and not elsewhere along the way

Re: [ceph-users] luminous ceph-fuse with quotas breaks 'mount' and 'df'

2018-08-17 Thread Chad William Seys
-08-17 14:34:55.967114 7f0e298a6700 10 client.18814183 handle_client_session client_session(renewcaps seq 2) v1 from mds.0 ceph-fuse[30502]: fuse finished with error 0 and tester_r 0 *** Caught signal (Segmentation fault) ** On 07/09/2018 08:48 AM, John Spray wrote: On Fri, Jul 6, 2018 at 6:30 PM Chad W

Re: [ceph-users] how can time machine know difference between cephfs fuse and kernel client?

2018-08-17 Thread Chad William Seys
Also, when using the cephfs fuse client, Windows File History reports no space free. Free Space: 0 bytes, Total Space: 186 GB. C.

[ceph-users] how can time machine know difference between cephfs fuse and kernel client?

2018-08-17 Thread Chad William Seys
Hello all, I have used cephfs served over Samba to set up a "time capsule" server. However, I could only get this to work using the cephfs kernel module. Time machine would give errors if cephfs were mounted with fuse. (Sorry, I didn't write down the error messages!) Anyone have an idea

[ceph-users] cephfs fuse versus kernel performance

2018-08-15 Thread Chad William Seys
Hi all, Anyone know of benchmarks of cephfs through fuse versus kernel? Thanks! Chad.

Re: [ceph-users] luminous ceph-fuse with quotas breaks 'mount' and 'df'

2018-07-09 Thread Chad William Seys
Hi Greg, Am I reading this right that you've got a 1-*byte* quota but have gigabytes of data in the tree? I have no idea what that might do to the system, but it wouldn't totally surprise me if that was messing something up. Since <10KB definitely rounds towards 0... Yeah, that

[ceph-users] luminous ceph-fuse with quotas breaks 'mount' and 'df'

2018-07-06 Thread Chad William Seys
Hi all, I'm having a problem that when I mount cephfs with a quota on the root mount point, no ceph-fuse appears in 'mount' and df reports: Filesystem 1K-blocks Used Available Use% Mounted on ceph-fuse 0 0 0 - /srv/smb If I 'ls' I see the expected files: #

[ceph-users] osds with different disk sizes may killing performance

2018-04-18 Thread Chad William Seys
You'll find it said time and time again on the ML... avoid disks of different sizes in the same cluster. It's a headache that sucks. It's not impossible, it's not even overly hard to pull off... but it's very easy to cause a mess and a lot of headaches. It will also make it harder to diagnose

Re: [ceph-users] osds with different disk sizes may killing performance

2018-04-12 Thread Chad William Seys
Hello, I think your observations suggest that, to a first approximation, filling drives with bytes to the same absolute level is better for performance than filling drives to the same percentage full. Assuming random distribution of PGs, this would cause the smallest drives to be as active
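
One knob that acts on this observation, sketched with an invented OSD id and weight (a CRUSH weight defaults to the drive's capacity in TiB, so capping a large drive's weight makes it fill to roughly the same absolute level as its smaller peers):

    # treat a 4 TiB OSD as if it were 2 TiB, halving the PGs (and I/O) it receives
    ceph osd crush reweight osd.12 2.0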

[ceph-users] rbd and cephfs (data) in one pool?

2017-12-27 Thread Chad William Seys
Hello, Is it possible to place rbd and cephfs data in the same pool? Thanks! Chad.

Re: [ceph-users] what does associating ceph pool to application do?

2017-10-06 Thread Chad William Seys
Thanks John! I see that a pool can have more than one "application". Should I feel free to combine uses (e.g. cephfs,rbd) or is this counterindicated? Thanks! Chad. Just to stern this up a bit... In the future, you may find that things stop working if you remove the application tags. For
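
For reference, a sketch of the Luminous-era commands involved (pool name is a placeholder; enabling a second application on a pool that already has one appears to require an explicit override):

    ceph osd pool application enable mypool cephfs
    ceph osd pool application enable mypool rbd --yes-i-really-mean-it
    ceph osd pool application get mypool    # show the tags now on the pool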

Re: [ceph-users] what does associating ceph pool to application do?

2017-10-06 Thread Chad William Seys
Scrolled down a bit and found this blog post: https://ceph.com/community/new-luminous-pool-tags/ If things haven't changed: Could someone tell me / link to what associating a ceph pool to an application does? ATM it's a tag and does nothing to the pool/PG/etc structure. I hope this

[ceph-users] what does associating ceph pool to application do?

2017-10-06 Thread Chad William Seys
Hi All, Could someone tell me / link to what associating a ceph pool to an application does? I hope this info includes why "Disabling an application within a pool might result in loss of application functionality" when running 'ceph osd application disable ' Thanks! Chad.

Re: [ceph-users] erasure-coded with overwrites versus erasure-coded with cache tiering

2017-10-05 Thread Chad William Seys
are. On Sat, Sep 30, 2017, 8:10 PM Chad William Seys <cws...@physics.wisc.edu> wrote: Hi David, Thanks for the clarification. Reminded me of some details I forgot to mention. In my case, the replica-3 and k2m2 are stored on th

Re: [ceph-users] erasure-coded with overwrites versus erasure-coded with cache tiering

2017-09-30 Thread Chad William Seys
utilize that faster storage in the rest of the osd stack either as journals for filestore or WAL/DB partitions for bluestore. On Sat, Sep 30, 2017, 12:56 PM Chad William Seys <cws...@physics.wisc.edu> wrote: Hi all, Now that Lumino

[ceph-users] erasure-coded with overwrites versus erasure-coded with cache tiering

2017-09-30 Thread Chad William Seys
Hi all, Now that Luminous supports direct writing to EC pools I was wondering if one can get more performance out of an erasure-coded pool with overwrites or an erasure-coded pool with a cache tier? I currently have a 3 replica pool in front of a k2m2 erasure coded pool. Luminous
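
A sketch of the two setups being compared, with hypothetical pool names (EC overwrites require BlueStore OSDs, and rbd still keeps image metadata in a replicated pool):

    # direct EC writes, no cache tier:
    ceph osd pool set ecpool allow_ec_overwrites true
    rbd create --size 10G --data-pool ecpool replpool/myimage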

[ceph-users] mds fails to start after upgrading to 10.2.6

2017-03-16 Thread Chad William Seys
Hi All, After upgrading to 10.2.6 on Debian Jessie, the MDS server fails to start. Below is what is written to the log file from attempted start to failure: Any ideas? I'll probably try rolling back to 10.2.5 in the meantime. Thanks! C. On 03/16/2017 12:48 PM, r...@mds01.hep.wisc.edu

Re: [ceph-users] removing ceph.quota.max_bytes

2017-02-20 Thread Chad William Seys
Thanks! Seems non-standard, but it works. :) C. Anyone know what's wrong? You can clear these by setting them to zero. John Everything is Jewel 10.2.5. Thanks! Chad.

[ceph-users] removing ceph.quota.max_bytes

2017-02-16 Thread Chad William Seys
Hi All, I'm trying to remove the extended attribute "ceph.quota.max_bytes" on a cephfs directory. I've fuse mounted a subdirectory of a cephfs filesystem under /ceph/cephfs . Next I set "ceph.quota.max_bytes" setfattr -n ceph.quota.max_bytes -v 123456 /ceph/cephfs And check the
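
A recap of the commands in play here (path and value are from the message above; per the reply in this thread, the quota is "removed" by setting it to zero rather than deleting the xattr):

    setfattr -n ceph.quota.max_bytes -v 123456 /ceph/cephfs   # set the quota
    getfattr -n ceph.quota.max_bytes /ceph/cephfs             # check it
    setfattr -n ceph.quota.max_bytes -v 0 /ceph/cephfs        # clear it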

Re: [ceph-users] 10.2.5 on Jessie?

2016-12-21 Thread Chad William Seys
Thanks ceph@jack and Alexandre for the reassurance! C. On 12/20/2016 08:37 PM, Alexandre DERUMIER wrote: I have upgraded 3 jewel clusters on jessie to the latest 10.2.5, works fine. - Original Message - From: "Chad William Seys" <cws...@physics.wisc.edu> To: "ceph-users&

[ceph-users] 10.2.5 on Jessie?

2016-12-20 Thread Chad William Seys
Hi all, Has anyone had success/problems with 10.2.5 on Jessie? I'm being a little cautious before updating. ;) Thanks! Chad.

[ceph-users] new feature: auto removal of osds causing "stuck inactive"

2016-10-28 Thread Chad William Seys
Hi all, I recently encountered a situation where some partially removed OSDs caused my cluster to enter a "stuck inactive" state. The eventual solution was to tell ceph the OSDs were "lost". Because all the PGs were replicated elsewhere on the cluster, no data was lost. Would it make

Re: [ceph-users] Blocked ops, OSD consuming memory, hammer

2016-05-27 Thread Chad William Seys
Hi Heath, My OSDs do the exact same thing - consume lots of RAM when the cluster is reshuffling OSDs. Try ceph tell osd.* heap release as a cron job. Here's a bug: http://tracker.ceph.com/issues/12681 Chad
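
A minimal sketch of such a cron job (file path and hourly schedule are arbitrary choices; the quotes keep the shell from globbing osd.*):

    # /etc/cron.d/ceph-heap-release
    0 * * * * root ceph tell 'osd.*' heap release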

Re: [ceph-users] is 0.94.7 packaged well for Debian Jessie

2016-05-24 Thread Chad William Seys
Thanks! Hammer doesn't use systemd unit files, so it's working fine. (jewel/infernalis still missing systemd .target files)

[ceph-users] is 0.94.7 packaged well for Debian Jessie

2016-05-24 Thread Chad William Seys
Hi All, Has anyone tested 0.94.7 on Debian Jessie? I've heard that the most recent Jewel releases for Jessie were missing pieces (systemd files) so I am a little more hesitant than usual. Thanks! Chad.

[ceph-users] RE: upgraded to Ubuntu 16.04, getting assert failure

2016-04-11 Thread Chad William Seys
Hi Don, I had a similar problem starting a mon. In my case a computer failed and I removed and recreated the 3rd mon on a new computer. It would start but never get added to the other mons' lists. Restarting the other two mons caused them to add the third to their monmap.

[ceph-users] ceph-deploy not in debian repo?

2015-11-09 Thread Chad William Seys
Hi all, I cannot find ceph-deploy in the debian catalogs. I have these in my sources: deb http://ceph.com/debian-hammer/ jessie main # ceph-deploy not yet in jessie repo deb http://ceph.com/debian-hammer wheezy main I also see ceph-deploy in the repo.

[ceph-users] copying files from one pool to another results in more free space?

2015-10-26 Thread Chad William Seys
Hi All, I'm observing some weird behavior in the amount of space ceph reports while copying files from an rbd image in one pool to an rbd image in another. The AVAIL number reported by 'ceph df' goes up as the copy proceeds rather than down! The output of 'ceph df' shows

Re: [ceph-users] Correct method to deploy on jessie

2015-10-06 Thread Chad William Seys
> Most users in the apt family have deployed on Ubuntu > though, and that's what our tests run on, fyi. That is good to know - I wouldn't be surprised if the same packages could be used in Ubuntu and Debian. Especially if the release dates of the Ubuntu and Debian versions were similar.

Re: [ceph-users] Correct method to deploy on jessie

2015-10-01 Thread Chad William Seys
Hi Dmitry, You might try using the wheezy repos on jessie. Often this will work. (I'm using wheezy for most of my ceph nodes, but not two of the three monitor nodes, which are jessie with wheezy repos.) # Wheezy repos on Jessie deb http://ceph.com/debian-hammer/ wheezy main

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-11 Thread Chad William Seys
> note that I've only did it after most of pg were recovered My guess / hope is that heap free would also help during the recovery process. Recovery causing failures does not seem like the best outcome. :) C.

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-09 Thread Chad William Seys
> Going from 2GB to 8GB is not normal, although some slight bloating is > expected. If I recall correctly, Mariusz's cluster had a period of flapping OSDs? I experienced a similar situation using hammer. My OSDs went from 10GB in RAM in a Healthy state to 24GB RAM + 10GB swap in a

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-09 Thread Chad William Seys
On Tuesday, September 08, 2015 18:28:48 Shinobu Kinjo wrote: > Have you ever? > > http://ceph.com/docs/master/rados/troubleshooting/memory-profiling/ No. But the command 'ceph tell osd.* heap release' did cause my OSDs to consume the "normal" amount of RAM. ("normal" in this case means the

Re: [ceph-users] RAM usage only very slowly decreases after cluster recovery

2015-09-09 Thread Chad William Seys
Thanks Somnath! I found a bug in the tracker to follow: http://tracker.ceph.com/issues/12681 Chad.

Re: [ceph-users] Huge memory usage spike in OSD on hammer/giant

2015-09-08 Thread Chad William Seys
Does 'ceph tell osd.* heap release' help with OSD RAM usage? From http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003932.html Chad.

Re: [ceph-users] RAM usage only very slowly decreases after cluster recovery

2015-08-28 Thread Chad William Seys
Thanks! 'ceph tell osd.* heap release' seems to have worked! Guess I'll sprinkle it around my maintenance scripts. Somnath, is there a plan to make jemalloc standard in Ceph in the future? Thanks! Chad.

[ceph-users] RAM usage only very slowly decreases after cluster recovery

2015-08-27 Thread Chad William Seys
Hi all, It appears that OSD daemons only very slowly free RAM after an extended period of an unhealthy cluster (shuffling PGs around). Prior to a power outage (and recovery) around July 25th, the amount of RAM used was fairly constant, at most 10GB (out of 24GB). You can see in the attached

Re: [ceph-users] TRIM / DISCARD run at low priority by the OSDs?

2015-08-24 Thread Chad William Seys
- From: Chad William Seys cws...@physics.wisc.edu To: ceph-users ceph-us...@ceph.com Sent: Saturday, August 22, 2015 04:26:38 Subject: [ceph-users] TRIM / DISCARD run at low priority by the OSDs? Hi All, Is it possible to give TRIM / DISCARD initiated by krbd low priority on the OSDs? I

[ceph-users] TRIM / DISCARD run at low priority by the OSDs?

2015-08-21 Thread Chad William Seys
Hi All, Is it possible to give TRIM / DISCARD initiated by krbd low priority on the OSDs? I know it is possible to run fstrim at Idle priority on the rbd mount point, e.g. ionice -c Idle fstrim -v $MOUNT. But this Idle priority (it appears) applies only within the context of the node executing
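
For reference, the invocation in question with the numeric class (-c 3 is ionice's idle class); as the message notes, this only deprioritizes I/O on the client running fstrim, not the resulting delete ops on the OSDs:

    ionice -c 3 fstrim -v /mnt/rbd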

Re: [ceph-users] why are there degraded PGs when adding OSDs?

2015-07-27 Thread Chad William Seys
6726 active+recovery_wait+degraded 2081 active+remapped+wait_backfill 17 active+recovering+degraded 2 active+recovery_wait+degraded+remapped recovery io 24861 kB/s, 6 objects/s Chad. - Original Message - From: Chad William Seys

[ceph-users] why are there degraded PGs when adding OSDs?

2015-07-27 Thread Chad William Seys
Hi All, I recently added some OSDs to the Ceph cluster (0.94.2). I noticed that 'ceph -s' reported both misplaced AND degraded PGs. Why should any PGs become degraded? Seems as though Ceph should only be reporting misplaced PGs? From the Giant release notes: Degraded vs misplaced: the Ceph

Re: [ceph-users] why are there degraded PGs when adding OSDs?

2015-07-27 Thread Chad William Seys
), then the osdmap and ceph pg dump afterwards? -Sam - Original Message - From: Chad William Seys cws...@physics.wisc.edu To: Samuel Just sj...@redhat.com, ceph-users ceph-us...@ceph.com Sent: Monday, July 27, 2015 12:57:23 PM Subject: Re: [ceph-users] why are there degraded PGs when adding

Re: [ceph-users] why are there degraded PGs when adding OSDs?

2015-07-27 Thread Chad William Seys
pgs active+clean), then the osdmap and ceph pg dump afterwards? -Sam - Original Message - From: Chad William Seys cws...@physics.wisc.edu To: Samuel Just sj...@redhat.com, ceph-users ceph-us...@ceph.com Sent: Monday, July 27, 2015 12:57:23 PM Subject: Re: [ceph-users] why

Re: [ceph-users] kernel version for rbd client and hammer tunables

2015-05-12 Thread Chad William Seys
Hi Ilya and all, Thanks for explaining. I'm confused about what building a crushmap means. After running 'ceph osd crush tunables hammer', data migrated around the cluster, so something changed. I was expecting that 'straw' would be replaced by 'straw2'.

Re: [ceph-users] kernel version for rbd client and hammer tunables

2015-05-12 Thread Chad William Seys
No, pools use crush rulesets. straw and straw2 are bucket types (or algorithms). As an example, if you do ceph osd crush add-bucket foo rack on a cluster with firefly tunables, you will get a new straw bucket. The same after doing ceph osd crush tunables hammer will get you a new straw2
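
One way to check which algorithm each bucket actually uses, sketched with temporary paths chosen here:

    ceph osd getcrushmap -o /tmp/crushmap          # dump the compiled map
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
    grep 'alg ' /tmp/crushmap.txt                  # "alg straw" vs "alg straw2" per bucket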

[ceph-users] kernel version for rbd client and hammer tunables

2015-05-12 Thread Chad William Seys
Hi Ilya and all, Is it safe to use kernel 3.16.7 rbd with Hammer tunables? I've tried this on a test Hammer cluster and the client seems to work fine. I've also mounted cephfs on a Hammer cluster (and Hammer tunables) using kernel 3.16. It seems to work fine (but not much

[ceph-users] how to display client io in hammer

2015-05-04 Thread Chad William Seys
Hi all, Looks like in Hammer 'ceph -s' no longer displays client IO and ops. How does one display that these days? Thanks, C.

Re: [ceph-users] how to display client io in hammer

2015-05-04 Thread Chad William Seys
Oops! Turns out I forgot to mount the ceph rbd, so no client IO displayed! C.

Re: [ceph-users] Kernel version for CephFS client ?

2015-05-04 Thread Chad William Seys
Hi Florent, Most likely Debian will release backported kernels for Jessie, as they have for Wheezy. E.g. Wheezy has had kernel 3.16 backported to it: https://packages.debian.org/search?suite=wheezy-backports&searchon=names&keywords=linux-image-amd64 C.

[ceph-users] ceph.com documentation suggestions

2015-04-21 Thread Chad William Seys
Hi, I've recently seen some confusion over the number of PGs per pool versus per cluster on the mailing list. I also set too many PGs per pool b/c of this confusion. IMO, it is fairly confusing to talk about PGs on the Pool page, but only vaguely talk about the number of PGs for the

[ceph-users] advantages of multiple pools?

2015-04-17 Thread Chad William Seys
Hi All, What are the advantages of having multiple ceph pools (if they use the whole cluster)? Thanks! C.

Re: [ceph-users] ceph on Debian Jessie stopped working

2015-04-17 Thread Chad William Seys
Hi Greg, Thanks for the reply. After looking more closely at /etc/ceph/rbdmap I discovered it was corrupted. That was the only problem. I think the dmesg line 'rbd: no image name provided' is also a clue to this! Hope that helps any other newbies! :) Thanks again, Chad.
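
For anyone hitting the same symptom, a sketch of a well-formed /etc/ceph/rbdmap (image and keyring names are placeholders):

    # one image per line: pool/image followed by map options
    rbd/myimage id=admin,keyring=/etc/ceph/ceph.client.admin.keyring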

Re: [ceph-users] Upgrade from Giant 0.87-1 to Hammer 0.94-1

2015-04-17 Thread Chad William Seys
Now I also know I have too many PGs! It is fairly confusing to talk about PGs on the Pool page, but only vaguely talk about the number of PGs for the cluster. Here are some examples of confusing statements with suggested alternatives from the online docs:

[ceph-users] ceph on Debian Jessie stopped working

2015-04-15 Thread Chad William Seys
Hi All, Earlier ceph on Debian Jessie was working. Jessie is running 3.16.7. Now when I modprobe rbd, no /dev/rbd appears. # dmesg | grep -e rbd -e ceph [ 15.814423] Key type ceph registered [ 15.814461] libceph: loaded (mon/osd proto 15/24) [ 15.831092] rbd: loaded [ 22.084573] rbd:

Re: [ceph-users] adding a new pool causes old pool warning pool x has too few pgs

2015-03-27 Thread Chad William Seys
Weird: After a few hours, the health check comes back OK without changing the number of PGs for any pools! Hi All, To a Healthy cluster I recently added two pools to ceph, 1 replicated and 1 ecpool. Then I made the replicated pool into a cache for the ecpool. Afterwards ceph

[ceph-users] PG stuck unclean for long time

2015-02-05 Thread Chad William Seys
Anyone know what is going on with this PG? # ceph health detail HEALTH_WARN 1 pgs stuck unclean; recovery 735/4844641 objects degraded (0.015%); 245/1296706 unfound (0.019%) pg 21.fd is stuck unclean for 349777.229468, current state active, last acting [19,5,15,25] recovery 735/4844641 objects
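
The usual next steps for a PG in this state, using the PG id from the health output above:

    ceph pg 21.fd query          # full per-PG state, acting set, recovery info
    ceph pg 21.fd list_unfound   # enumerate the unfound objects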

[ceph-users] PG to pool mapping?

2015-02-04 Thread Chad William Seys
Hi all, How do I determine which pool a PG belongs to? (Also, is it the case that all objects in a PG belong to one pool?) Thanks! C.
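
Short answer, for the archives: yes, every object in a PG belongs to one pool, and the pool id is the part of the PG id before the dot (pg 21.fd lives in pool 21). A quick sketch:

    ceph osd lspools        # map pool ids to names
    ceph pg map 21.fd       # osdmap epoch and the OSDs serving that PG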

Re: [ceph-users] verifying tiered pool functioning

2015-01-27 Thread Chad William Seys
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Chad William Seys Sent: Thursday, January 22, 2015 5:40 AM To: ceph-users@lists.ceph.com Subject: [ceph-users] verifying tiered pool functioning Hello, Could anyone provide a howto verify that a tiered pool is working correctly? E.g

[ceph-users] cache pool and storage pool: possible to remove storage pool?

2015-01-27 Thread Chad William Seys
Hi all, Documentation explains how to remove the cache pool: http://ceph.com/docs/master/rados/operations/cache-tiering/ Anyone know how to remove the storage pool instead? (E.g. the storage pool has wrong parameters.) I was hoping to push all the objects into the cache pool and then

Re: [ceph-users] erasure coded pool why ever k>1?

2015-01-22 Thread Chad William Seys
Hi Loic, The size of each chunk is object size / K. If you have K=1 and M=2 it will be the same as 3 replicas with none of the advantages ;-) Interesting! I did not see this explained so explicitly. So is the general explanation of k and m something like: k, m: fault tolerance of m+1
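
A concrete instance of the k/m arithmetic above, with invented profile and pool names: k=2, m=2 writes four chunks of size object/2, survives any two OSD failures, and costs 2x raw space versus 3x for three replicas.

    ceph osd erasure-code-profile set k2m2 k=2 m=2
    ceph osd erasure-code-profile get k2m2
    ceph osd pool create ecpool 128 128 erasure k2m2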

[ceph-users] how to remove storage tier

2015-01-22 Thread Chad William Seys
Hi all, I've got a tiered pool arrangement with a replicated pool and an erasure pool. I set it up such that the replicated pool is in front of the erasure coded pool. I now want to change the properties of the erasure coded pool. Is there a way of switching which erasure profile

[ceph-users] erasure coded pool why ever k>1?

2015-01-21 Thread Chad William Seys
Hello all, What reasons would one want k>1? I read that m determines the number of OSDs which can fail before loss. But I don't see explained how to choose k. Any benefits for choosing k>1? Thanks! Chad.

[ceph-users] verifying tiered pool functioning

2015-01-21 Thread Chad William Seys
Hello, Could anyone provide a howto to verify that a tiered pool is working correctly? E.g. Command to watch as PGs migrate from one pool to another? (Or determine which pool a PG is currently in.) Command to see how much data is in each pool (global view of number of PGs I guess)? Thanks!

Re: [ceph-users] Ceph PG Incomplete = Cluster unusable

2014-12-29 Thread Chad William Seys
Hi Christian, I had a similar problem about a month ago. After trying lots of helpful suggestions, I found none of it worked and I could only delete the affected pools and start over. I opened a feature request in the tracker: http://tracker.ceph.com/issues/10098 If you find a way, let

Re: [ceph-users] emperor - firefly 0.80.7 upgrade problem

2014-11-06 Thread Chad William Seys
Hi Sam, Sounds like you needed osd 20. You can mark osd 20 lost. -Sam Does not work: # ceph osd lost 20 --yes-i-really-mean-it osd.20 is not down or doesn't exist Also, here is an interesting post which I will follow from October:

Re: [ceph-users] create multiple OSDs without changing CRUSH until one last step

2014-04-10 Thread Chad William Seys
Hi Greg, Looks promising... I added [global] ... mon osd auto mark new in = false, then pushed the config to the monitor (ceph-deploy --overwrite-conf config push mon01), then restarted the monitor (/etc/init.d/ceph restart mon), then tried ceph-deploy --overwrite-conf disk prepare --zap-disk osd02:sde

Re: [ceph-users] fuse or kernel to mount rbd?

2014-04-05 Thread Chad William Seys
Not to 3.2. I would recommend running a more recent ubuntu kernel (which I *think* they support on 12.04 still) like 3.8 or 3.11. Those kernels should be pretty stable provided the ubuntu kernel guys are keeping up with the mainline stable kernels at kernel.org (they generally do). Thanks!