Re: [ceph-users] Anyone tested Samsung 860 DCT SSDs?

2018-10-12 Thread Corin Langosch
Hi, it has a TBW of only 349 TB, so it might die quite soon. But what about the "Seagate Nytro 1551 DuraWrite 3DWPD Mainstream Endurance 960GB, SATA"? It seems really cheap too and has a TBW of 5.25 PB. Has anybody tested that? What about (RBD) performance? Cheers Corin On Fri, 2018-10-12 at 13:53 +, Kenneth
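For comparison, a rough endurance estimate (assuming the usual 5-year warranty window for both drives):

    Samsung 860 DCT 960 GB:    349 TB  / (0.96 TB * 365 * 5) ~ 0.2 drive writes per day
    Seagate Nytro 1551 960 GB: 5250 TB / (0.96 TB * 365 * 5) ~ 3   drive writes per day

So the Nytro is rated for roughly 15x the write endurance of the 860 DCT.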

[ceph-users] ceph mon eating lots of memory after upgrade 0.94.2 to 0.94.9

2016-11-18 Thread Corin Langosch
Hi, about 2 weeks ago I upgraded a rather small cluster from ceph 0.94.2 to 0.94.9. The upgrade went fine, the cluster is running stable. But I just noticed that one monitor is already eating 20 GB of memory, growing slowly over time. The other 2 mons look fine. The disk space used by the
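One thing worth checking (assuming the mon is linked against tcmalloc, as the stock Ubuntu/Debian packages are; replace mon.a with the id of the affected mon):

    ceph tell mon.a heap stats
    ceph tell mon.a heap release    # often frees memory tcmalloc is holding but not returning to the OS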

[ceph-users] confusing release notes

2016-01-22 Thread Corin Langosch
Hi, http://docs.ceph.com/docs/master/releases/ states: "Infernalis / Stable / First release November 2015 / 9.2.0", however http://docs.ceph.com/docs/master/release-notes/#v9-2-0-infernalis states: "V9.2.0 INFERNALIS: This major release will be the foundation for the next stable series. ..." So I wonder

Re: [ceph-users] why was osd pool default size changed from 2 to 3.

2015-10-23 Thread Corin Langosch
On 23.10.2015 at 20:53, Gregory Farnum wrote: > On Fri, Oct 23, 2015 at 8:17 AM, Stefan Eriksson wrote: > > Nothing changed to make two copies less secure. 3 copies is just so > much more secure and is the number that all the companies providing > support recommend, so we

[ceph-users] disable cephx signing

2015-10-21 Thread Corin Langosch
Hi, we have cephx authentication and signing enabled. For performance reasons we'd like to keep auth but disable signing. Is this possible without service interruption and without having to restart the qemu rbd clients? Just adapt ceph.conf and restart the mons and then the osds? Thanks Corin
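A sketch of the relevant ceph.conf options as I understand them (please double-check the option names against the docs for your release before touching a production cluster):

    [global]
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx
        cephx require signatures = false
        cephx cluster require signatures = false
        cephx service require signatures = false
        cephx sign messages = false

As far as I know the qemu clients only read ceph.conf when they open the image, which is exactly why the no-restart question matters.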

Re: [ceph-users] download.ceph.com unreachable IPv6 [was: v9.1.0 Infernalis release candidate released]

2015-10-16 Thread Corin Langosch
download.ceph.com resolves to 2607:f298:6050:51f3:f816:3eff:fe50:5ec here. Ping seems to be blocked. Connecting to port 80 only works for some requests, probably 50%. So I assume there's a load balancer with a dead backend which it didn't detect/kick... just guessing. Best

[ceph-users] how to get cow usage of a clone

2015-10-09 Thread Corin Langosch
Hi, to get the real usage of an image I can run: rbd diff image1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }' To get the COW usage of a snapshot: rbd diff image1 --from-snap snap1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }' But I wonder how I can get the COW usage of a
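For a clone I would try the same rbd diff trick without --from-snap; my understanding is that for a cloned image this lists only the extents already copied up into the child, i.e. the COW overhead on top of the parent, but I haven't verified that, so please check against a test clone first (pool/clone1 is a made-up name):

    rbd diff pool/clone1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'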

[ceph-users] rbd and exclusive lock feature

2015-09-22 Thread Corin Langosch
Hi guys, when creating an rbd with feature "exclusive-lock" (and object-map, fast-diff, ..), do I have to pass any special arguments to qemu to activate it? How does this feature work with resize, snapshot creation, etc.? From my work on ceph-ruby I know you have to call "rbd_open" and then
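For what it's worth, a sketch of how I would create such an image and hand it to qemu; pool/image names are made up, on older releases the rbd CLI may expect a numeric --image-features bitmask instead of named features, and as far as I understand qemu itself needs no extra flags as long as its librbd is new enough to know the features:

    rbd create --image-format 2 --size 10240 \
        --image-feature exclusive-lock --image-feature object-map --image-feature fast-diff \
        rbd/vm-disk-1

    # the usual rbd drive spec; no feature-specific flags needed as far as I know
    qemu-system-x86_64 ... -drive format=raw,file=rbd:rbd/vm-disk-1:id=admin,cache=writeback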

Re: [ceph-users] benefit of using stripingv2

2015-09-17 Thread Corin Langosch
Hi Greg, On 17.09.2015 at 16:42, Gregory Farnum wrote: > Briefly, if you do a lot of small direct IOs (for instance, a database > journal) then striping lets you send each sequential write to a > separate object. This means they don't pile up behind each other > grabbing write locks and can
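To make that concrete, a sketch of how such a striped image could be created (names and sizes are just for illustration): with a 64 KB stripe unit and a stripe count of 8, eight consecutive small writes land on eight different objects instead of queueing up behind one.

    rbd create --size 20480 --order 22 --stripe-unit 65536 --stripe-count 8 rbd/db-journal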

[ceph-users] benefit of using stripingv2

2015-09-16 Thread Corin Langosch
Hi guys, afaik rbd always splits the image into chunks of size 2^order (2^22 = 4MB by default). What's the benefit of specifying the feature flag "STRIPINGV2"? I couldn't find any documentation about it except http://ceph.com/docs/master/man/8/rbd/#striping which doesn't explain the benefits (or

Re: [ceph-users] Ruby bindings for Librados

2015-07-13 Thread Corin Langosch
Hi Wido, I'm the dev of https://github.com/netskin/ceph-ruby and still use it in production on some systems. It has everything I need, so I didn't develop it any further. If you find any bugs or need new features, just open an issue and I'm happy to have a look. Best Corin On 13.07.2015 at 21:24

Re: [ceph-users] old osds take much longer to start than newer osd

2015-03-02 Thread Corin Langosch
if it’s incredibly high. On 27 Feb 2015, at 14:02, Corin Langosch <corin.lango...@netskin.com> wrote: Hi guys, I've been using ceph for a long time now, since bobtail. I've always upgraded every few weeks/months to the latest stable release. Of course I also

[ceph-users] old osds take much longer to start than newer osd

2015-02-27 Thread Corin Langosch
Hi guys, I've been using ceph for a long time now, since bobtail. I've always upgraded every few weeks/months to the latest stable release. Of course I also removed some osds and added new ones. Now during the last few upgrades (I just upgraded from 0.80.6 to 0.80.8) I noticed that old osds take much

Re: [ceph-users] old osds take much longer to start than newer osd

2015-02-27 Thread Corin Langosch
I'd guess so, but that's not what I want to do ;) On 27.02.2015 at 18:43, Robert LeBlanc wrote: Does deleting/reformatting the old osds improve the performance? On Fri, Feb 27, 2015 at 6:02 AM, Corin Langosch <corin.lango...@netskin.com> wrote: Hi guys, I've been using ceph for a long time now

Re: [ceph-users] repair inconsistent pg using emperor

2014-01-07 Thread Corin Langosch
Hi David, On 07.01.2014 at 01:19, David Zafman wrote: Did the inconsistent flag eventually get cleared? It might have been you didn’t wait long enough for the repair to get through the pg. No, the flag did not clear automatically. But after restarting a few osds the issue was resolved.

[ceph-users] repair inconsistent pg using emperor

2013-12-28 Thread Corin Langosch
Hi guys, I got an inconsistent pg and found it was due to a broken hdd. I marked this osd out and the cluster rebalanced without any problems. But the pg is still reported as inconsistent. Before marking osd 2 out: HEALTH_ERR 1 pgs inconsistent; 1 scrub errors; noout flag(s) set pg 6.29f is
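For reference, the commands usually suggested for an inconsistent pg (pg id taken from the health output above):

    ceph health detail          # shows the inconsistent pg and the osds it lives on
    ceph pg deep-scrub 6.29f    # re-check the pg after the rebalance
    ceph pg repair 6.29f        # ask the primary to repair the pg from its replicas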

Re: [ceph-users] repair inconsistent pg using emperor

2013-12-28 Thread Corin Langosch
Hi Sage, On 28.12.2013 at 19:18, Sage Weil wrote: ceph pg scrub 6.29f ...and see if it comes back with errors or not. If it doesn't, you can What do you mean by "comes back with errors or not"? ~# ceph pg scrub 6.29f instructing pg 6.29f on osd.8 to scrub But the logs don't show any

Re: [ceph-users] HDD bad sector, pg inconsistent, no object remapping

2013-11-13 Thread Corin Langosch
On 13.11.2013 at 09:34, Martin B Nielsen wrote: Probably common sense, but I was bitten by this once in a similar situation: if you run 3x replicas and distribute them over 3 hosts (is that the default now?), make sure that the disks on the host with the failed disk have space for it - the

[ceph-users] urgent help needed after upgrade to emperor

2013-11-13 Thread Corin Langosch
Hi guys, all my systems run ubuntu 12.10. I was running dumpling for a few months without any errors. I just upgraded all my monitors (3) and one osd (out of 14 total) to emperor. The cluster is healthy and seems to be running fine. A few minutes after upgrading a few of my qemu (kvm) machines

[ceph-users] rgw bucket creation fails

2013-11-04 Thread Corin Langosch
Hi, using ceph 0.67.4 I followed http://ceph.com/docs/master/radosgw/. I can connect using s3cmd (test configuration succeeds), so the user credentials and everything else seem to be set up correctly. But when doing s3cmd mb s3://test, the radosgw returns a 405 Method Not Allowed

Re: [ceph-users] rgw bucket creation fails

2013-11-04 Thread Corin Langosch
On 04.11.2013 at 19:56, Yehuda Sadeh wrote: This was answered off list on irc, but for the sake of completeness I'll answer here too. The issue is that s3cmd uses a virtual bucket host name. E.g., instead of http://host/bucket, it sends the request to http://bucket.host, so in order for the gateway to
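For completeness, a sketch of the two pieces that usually have to line up for these virtual-host-style bucket requests (the hostname here is made up, and a wildcard DNS record *.s3.example.com pointing at the gateway is assumed):

    # ceph.conf on the gateway host
    [client.radosgw.gateway]
        rgw dns name = s3.example.com

    # ~/.s3cfg on the client
    host_base = s3.example.com
    host_bucket = %(bucket)s.s3.example.com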

Re: [ceph-users] upgrade from bobtail to dumpling

2013-10-08 Thread Corin Langosch
http://ceph.com/docs/master/release-notes/ On 08.10.2013 at 07:37, Dominik Mostowiec wrote: hi, is it possible to (safely) upgrade directly from bobtail (0.56.6) to dumpling (latest)? Are there any instructions?

Re: [ceph-users] Ceph with high disk densities?

2013-10-07 Thread Corin Langosch
On 07.10.2013 at 18:23, Gregory Farnum wrote: There are a few tradeoffs you can make to reduce memory usage (I believe the big one is maintaining a shorter PG log, which lets nodes catch up without going through a full backfill), and there is also a I wonder why this log has to be fully kept in

[ceph-users] how to set flag on pool

2013-09-24 Thread Corin Langosch
Hi there, I want to set the flag hashpspool on an existing pool. ceph osd pool set {pool-name} {field} {value} does not seem to work. So I wonder how I can set/unset flags on pools? Corin

Re: [ceph-users] how to set flag on pool

2013-09-24 Thread Corin Langosch
On 24.09.2013 at 12:24, Joao Eduardo Luis wrote: I believe that at the moment you'll only be able to have that flag set on a pool at creation time, if 'osd pool default flag hashpspool = true' is in your conf. I just updated my config like this: [osd] osd journal size = 100 filestore xattr

[ceph-users] performance and disk usage of snapshots

2013-09-24 Thread Corin Langosch
Hi there, do snapshots have an impact on write performance? I assume that on each write all snapshots have to be updated (COW), so the more snapshots exist, the worse write performance gets? Is there any way to see how much disk space a snapshot occupies? I assume that because of COW, snapshots

Re: [ceph-users] OSD and Journal Files

2013-09-18 Thread Corin Langosch
On 18.09.2013 at 17:03, Mike Dawson wrote: I think you'll be OK on CPU and RAM. I'm running the latest dumpling here, and with default settings each osd consumes more than 3 GB RAM at peak. So with 48 GB RAM it would not be possible to run the desired 18 osds. I filed a bug report for this here

Re: [ceph-users] ceph-mon runs on 6800 not 6789.

2013-09-03 Thread Corin Langosch
On 03.09.2013 at 14:56, Joao Eduardo Luis wrote: On 09/03/2013 02:02 AM, 이주헌 wrote: Hi all. I have 1 MDS and 3 OSDs. I installed them via ceph-deploy (dumpling 0.67.2). At first, it worked perfectly. But after I rebooted one of the OSDs, ceph-mon launched on port 6800 instead of 6789. This has been

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Corin Langosch
On 02.09.2013 at 11:37, Jens-Christian Fischer wrote: we have a Ceph cluster with 64 OSD drives in 10 servers. We originally formatted the OSDs with btrfs but have had numerous problems (server kernel panics) that we could trace back to btrfs. We are therefore in the process of reformatting our
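Roughly the per-osd sequence I would use for such a reformat (osd id 12 and the host/device are placeholders; wait for HEALTH_OK before moving on to the next osd):

    ceph osd out 12                  # let the cluster rebalance the data away
    # once recovery has finished:
    service ceph stop osd.12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12
    # reformat the disk with xfs and re-create the osd, e.g. with ceph-deploy:
    ceph-deploy osd create --fs-type xfs server1:/dev/sdX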

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Corin Langosch
On 30.08.2013 at 23:11, Geraint Jones wrote: Oh the machines are 128 GB :) How many PGs in total do you have? (128 GB was the minimum for 8192 PGs.) How many do you plan to have in the near future? Is the cluster already under load? What's the current memory usage of the osds? What's the usage

[ceph-users] rbd striping

2013-08-29 Thread Corin Langosch
Hi there, I read about how rbd striping works at http://ceph.com/docs/next/man/8/rbd/ and it seems rather complex to me. As the individual objects are placed pseudo-randomly over all osds by CRUSH anyway, what's the benefit over simply calculating object_id = (position /
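For illustration, my (simplified) understanding of how an image offset maps to an object once stripe_unit, stripe_count and object_size are set:

    su_index      = offset / stripe_unit
    object_in_set = su_index % stripe_count
    set_index     = su_index / (stripe_count * (object_size / stripe_unit))
    object_id     = set_index * stripe_count + object_in_set

With the defaults (stripe_unit = object_size, stripe_count = 1) this collapses to the plain object_id = offset / object_size mapping; the extra parameters only matter when you want consecutive small writes to fan out over several objects.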