[ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-01 Thread Andrei Mikhailovsky
Hello guys, was wondering if anyone has tried using the Crucial MX100 SSDs either for OSD journals or a cache pool? It seems like a good, cost-effective alternative to the more expensive drives, and read/write performance is very good as well. Thanks -- Andrei Mikhailovsky Director Arhont

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-01 Thread David
Performance seems quite low on those. I’d really step it up to Intel S3700s. Check the performance benchmarks here and compare between them: http://www.anandtech.com/show/8066/crucial-mx100-256gb-512gb-review/3 http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/3 If you’re
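A common complement to review benchmarks when judging an SSD for journal duty is to measure small synchronous writes directly, since the OSD journal writes with O_DSYNC. A minimal sketch (the mount point and test file are assumptions):

    # sequential 4k writes, each forced to stable storage, roughly what the journal does
    dd if=/dev/zero of=/mnt/ssd/journal-test bs=4k count=10000 oflag=direct,dsync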

Re: [ceph-users] cache pool osds crashing when data is evicting to underlying storage pool

2014-08-01 Thread Kenneth Waegeman
- Message from Sage Weil sw...@redhat.com - Date: Thu, 31 Jul 2014 08:51:34 -0700 (PDT) From: Sage Weil sw...@redhat.com Subject: Re: [ceph-users] cache pool osds crashing when data is evicting to underlying storage pool To: Kenneth Waegeman kenneth.waege...@ugent.be

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-01 Thread Christian Balzer
On Fri, 1 Aug 2014 09:38:34 +0100 (BST) Andrei Mikhailovsky wrote: Hello guys, was wondering if anyone has tried using the Crucial MX100 SSDs either for OSD journals or a cache pool? It seems like a good, cost-effective alternative to the more expensive drives, and read/write performance is

Re: [ceph-users] Persistent Error on osd activation

2014-08-01 Thread debian Only
I have met the same issue when I want to use prepare. When I use --zap-disk it is OK, but if I use prepare to define the journal device, it fails: ceph-disk-prepare --zap-disk --fs-type btrfs --cluster ceph -- /dev/sdb /dev/sdc 2014-07-01 1:00 GMT+07:00 Iban Cabrillo cabri...@ifca.unican.es:
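A hedged sketch of one common workaround, assuming stale partition data on the disks is what makes the plain prepare fail: zap the disks in a separate step, then run prepare with the journal device but without --zap-disk:

    # wipe stale GPT/partition tables first
    ceph-disk zap /dev/sdb
    ceph-disk zap /dev/sdc
    # then prepare with /dev/sdb as data and /dev/sdc as journal
    ceph-disk-prepare --fs-type btrfs --cluster ceph -- /dev/sdb /dev/sdc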

Re: [ceph-users] Using Crucial MX100 for journals or cache pool

2014-08-01 Thread Andrei Mikhailovsky
Thanks for your comments. Andrei -- Andrei Mikhailovsky Director Arhont Information Security Web: http://www.arhont.com http://www.wi-foo.com Tel: +44 (0)870 4431337 Fax: +44 (0)208 429 3111 PGP: Key ID - 0x2B3438DE PGP: Server - keyserver.pgp.com DISCLAIMER The information

Re: [ceph-users] Using Ramdisk wi

2014-08-01 Thread debian Only
I am looking for a way to use a ramdisk with Ceph, just for a test environment; I do not have enough SSDs for each OSD, but I do not know how to move the OSD journal to a tmpfs or ramdisk. I hope someone can give some guidance. 2014-07-31 8:58 GMT+07:00 Christian Balzer ch...@gol.com: On Wed, 30 Jul
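A minimal sketch of one way to do this, for throwaway test clusters only (osd.0, the mount point, and the 2G size are assumptions; a journal in RAM vanishes on reboot, which normally renders the OSD unusable):

    mount -t tmpfs -o size=2G tmpfs /mnt/ramjournal
    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal                    # drain the old journal safely
    ln -sf /mnt/ramjournal/journal-osd0 /var/lib/ceph/osd/ceph-0/journal
    ceph-osd -i 0 --mkjournal                        # create the new journal on tmpfs
    service ceph start osd.0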

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 12:29 AM, German Anders gand...@despegar.com wrote: Hi Ilya, I think you need to upgrade the kernel version of that Ubuntu server; I had a similar problem, and after upgrading the kernel to 3.13 it was resolved successfully. Ilya doesn't need to upgrade

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread German Anders
Ilya, how are you? That's cool; tell me if changing the tunable and disabling hashpspool works for you. I've done those things but they didn't work either, so that's why I went for the kernel upgrade. Best regards. Sent from my Personal Samsung GT-i8190L Original message From:
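For reference, a sketch of the two changes being described, assuming a firefly-era cluster and the default rbd pool (both relax features so older kernel clients can connect; exact value syntax varies by release):

    ceph osd crush tunables legacy
    ceph osd pool set rbd hashpspool false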

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Gregory Farnum
We appear to have solved this and then immediately re-broken it by ensuring that the userspace daemons will set a new required feature bit if there are any EC rules in the OSDMap. I was going to say there's a ticket open for it, but I can't find one... -Greg On Fri, Aug 1, 2014 at 7:22 AM, Ilya

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum g...@inktank.com wrote: We appear to have solved this and then immediately re-broken it by ensuring that the userspace daemons will set a new required feature bit if there are any EC rules in the OSDMap. I was going to say there's a ticket open

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Christopher O'Connell
I'm having the exact same problem. I'll try solving it without upgrading the kernel. On Aug 1, 2014 4:22 AM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 12:29 AM, German Anders gand...@despegar.com wrote: Hi Ilya, I think you need to upgrade the kernel version

[ceph-users] [ANN] ceph-deploy 1.5.10 released

2014-08-01 Thread Alfredo Deza
Hi All, There is a new release of ceph-deploy, the easy deployment tool for Ceph. This release comes with a few improvements towards better usage of ceph-disk on remote nodes, with more verbosity so things are a bit more clear when they execute. The full list of fixes for this release can be
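If an earlier ceph-deploy is already installed, upgrading is typically one line; a sketch, assuming it was installed via pip or the ceph.com apt repository:

    pip install --upgrade ceph-deploy
    # or, on Debian/Ubuntu with the ceph.com repo configured:
    apt-get update && apt-get install ceph-deploy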

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Larry Liu
Ilya, sorry for my delayed reply. It happens on a new cluster I just created. I'm just testing right out of the default rbd pool. On Aug 1, 2014, at 5:22 AM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum g...@inktank.com wrote: We appear to have

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Larry Liu
Looking forward to your solution. On Aug 1, 2014, at 5:28 AM, Christopher O'Connell c...@sendfaster.com wrote: I'm having the exact same problem. I'll try solving it without upgrading the kernel. On Aug 1, 2014 4:22 AM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014

Re: [ceph-users] cache pool osds crashing when data is evicting to underlying storage pool

2014-08-01 Thread Sage Weil
On Fri, 1 Aug 2014, Kenneth Waegeman wrote: On Thu, 31 Jul 2014, Kenneth Waegeman wrote: Hi all, We have a erasure coded pool 'ecdata' and a replicated pool 'cache' acting as writeback cache upon it. When running 'rados -p ecdata bench 1000 write', it starts filling up the
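For context, a writeback cache tier like the one in this thread is assembled along these lines (a sketch using the pool names from the thread; the eviction threshold is an assumption):

    ceph osd tier add ecdata cache                    # attach 'cache' as a tier of 'ecdata'
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay ecdata cache            # route client I/O through the cache
    ceph osd pool set cache target_max_bytes 500000000000
    rados -p ecdata bench 1000 write                  # the benchmark that triggers the evictions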

[ceph-users] Instrumenting RADOS with Zipkin + LTTng

2014-08-01 Thread Marios-Evaggelos Kogias
Hello all, my name is Marios Kogias and I am a student at the National Technical University of Athens. As part of my diploma thesis and my participation in Google Summer of Code 2014 (in the LTTng organization) I am working on a low-overhead tracing infrastructure for distributed systems. I am

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Larry Liu
root@u12ceph02:~# rbd map foo --pool rbd --name client.admin -m u12ceph01 -k /etc/ceph/ceph.client.admin.keyring rbd: add failed: (5) Input/output error dmesg shows these right away after the IO error: [ 461.010895] libceph: mon0 10.190.10.13:6789 feature set mismatch, my 4a042aca < server's
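The usual remedy for a kernel-client feature set mismatch in this era, short of a newer kernel, was to drop the cluster's CRUSH tunables back to a profile the client understands; a sketch:

    ceph osd crush tunables bobtail    # or 'legacy' for very old kernels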

[ceph-users] Some questions of radosgw

2014-08-01 Thread Osier Yang
Hi, list, I managed to set up radosgw in a testing environment these past several days to see if it's stable/mature enough for production use. In the meantime, I tried to read the source code of radosgw to understand how it actually manages the underlying storage. The testing result shows that the
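One quick way to see how radosgw lays data out is to inspect its RADOS pools directly; a sketch, assuming the default pool names of the time:

    rados lspools                      # the .rgw.* pools radosgw created
    rados -p .rgw.buckets ls | head    # raw objects backing bucket data
    radosgw-admin bucket stats         # per-bucket usage as radosgw sees it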

Re: [ceph-users] Some questions of radosgw

2014-08-01 Thread Osier Yang
[ correct the URL ] On 2014-08-02 00:42, Osier Yang wrote: Hi, list, I managed to set up radosgw in a testing environment these past several days to see if it's stable/mature enough for production use. In the meantime, I tried to read the source code of radosgw to understand how it actually

[ceph-users] Ceph writes stall for long perioids with no disk/network activity

2014-08-01 Thread Mariusz Gronczewski
Hi, when I run rados bench -p benchmark 300 write --run-name bench --no-cleanup I get weird stalling during writes: sometimes I get the same write speed for a few minutes, and after some time it starts stalling at 0 MB/s for minutes. My configuration: ceph 0.80.5 pool 0 'data'
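When writes stall like this, the first stop is usually the cluster log and per-OSD latencies; a sketch, assuming a firefly cluster:

    ceph health detail    # lists blocked/slow requests and the OSDs involved
    ceph -w               # watch for 'slow request' warnings as the stall happens
    ceph osd perf         # per-OSD commit/apply latency, to spot one slow disk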

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 8:34 PM, Larry Liu larryliu...@gmail.com wrote: root@u12ceph02:~# rbd map foo --pool rbd --name client.admin -m u12ceph01 -k /etc/ceph/ceph.client.admin.keyring rbd: add failed: (5) Input/output error dmesg shows these right away after the IO error: [ 461.010895]

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 4:22 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum g...@inktank.com wrote: We appear to have solved this and then immediately re-broken it by ensuring that the userspace daemons will set a new required feature bit if

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 10:06 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 4:22 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum g...@inktank.com wrote: We appear to have solved this and then immediately re-broken it by

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Sage Weil
On Fri, 1 Aug 2014, Ilya Dryomov wrote: On Fri, Aug 1, 2014 at 10:06 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 4:22 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 4:05 PM, Gregory Farnum g...@inktank.com wrote: We appear to have

[ceph-users] Ceph runs great then falters

2014-08-01 Thread Chris Kitzmiller
I have 3 nodes, each running a MON and 30 OSDs. When I test my cluster with either rados bench or with fio via a 10GbE client using RBD, I get great initial speeds (900 MBps) and I max out my 10GbE links for a while. Then something goes wrong: performance falters and the cluster stops responding

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Larry Liu
crushmap file is attached. I'm running kernel 3.13.0-29-generic, as another person suggested, but the kernel upgrade didn't fix anything for me. Thanks! (attachment: crush, binary data) On Aug 1, 2014, at 10:38 AM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 8:34 PM,

[ceph-users] Free LinuxCon/CloudOpen Pass

2014-08-01 Thread Patrick McGarry
Hey cephers, Now that OSCON is in our rearview mirror we have started looking to LinuxCon/CloudOpen, which is looming just over two weeks away. If you haven't arranged tickets yet, and would like to go, let us know! We have an extra ticket (maybe two) and we'd love to have you attend and hang

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Ilya Dryomov
On Fri, Aug 1, 2014 at 10:32 PM, Larry Liu larryliu...@gmail.com wrote: cruhmap file is attached. I'm running kernel 3.13.0-29-generic after another person suggested. But the kernel upgrade didn't fix anything for me. Thanks! So there are two problems. First, you either have erasure pools or
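The standard cycle for inspecting and repairing a CRUSH map like this one (a sketch):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt, then recompile and inject it:
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new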

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Larry Liu
Hi Ilya, thank you so much! I didn't know my crush map was all messed up. Now all is working! I guess it would have worked even without upgrading the kernel from 3.2 to 3.13. On Aug 1, 2014, at 12:48 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Fri, Aug 1, 2014 at 10:32 PM, Larry

Re: [ceph-users] 0.80.5-1precise Not Able to Map RBD CephFS

2014-08-01 Thread Christopher O'Connell
So I've been having a seemingly similar problem, and while trying to follow the steps in this thread, things have gone very south for me. Kernel on OSDs and MONs: 2.6.32-431.20.3.0.1.el6.centos.plus.x86_64 #1 SMP Wed Jul 16 21:27:52 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Kernel on RBD host:

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-01 Thread Brian Rak
Why do you have a MDS active? I'd suggest getting rid of that at least until you have everything else working. I see you've set nodown on the OSDs, did you have problems with the OSDs flapping? Do the OSDs have broken connectivity between themselves? Do you have some kind of firewall

Re: [ceph-users] Firefly OSDs stuck in creating state forever

2014-08-01 Thread Bruce McFarland
MDS: I assumed that I'd need to bring up a ceph-mds for my cluster at initial bringup. We also intended to modify the CRUSH map such that its pool is resident on SSD(s). It is one of the areas of the online docs where there doesn't seem to be a lot of info, and I haven't spent a lot of time
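On the nodown flag raised above, checking and clearing it is straightforward (a sketch):

    ceph osd dump | grep flags    # shows nodown/noout etc. if set
    ceph osd unset nodown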

Re: [ceph-users] Ceph runs great then falters

2014-08-01 Thread Christian Balzer
Hello, On Fri, 1 Aug 2014 14:23:28 -0400 Chris Kitzmiller wrote: I have 3 nodes, each running a MON and 30 OSDs. Given the HW you list below, that might be a tall order, particularly CPU-wise in certain situations. What is your OS running off, HDDs or SSDs? The leveldbs, for the MONs in