[ceph-users] ceph online meeting

2014-06-02 Thread eric mourgaya
Hi all, First, thanks for your welcome messages and your votes. Note that the next online meeting is on the 6th of June: https://wiki.ceph.com/Community/Meetings . Note also the next OpenStack meetup, which will cover storage and Ceph:

Re: [ceph-users] rados benchmark is fast, but dd result on guest vm still slow?

2014-06-02 Thread Christian Balzer
Hello, On Mon, 2 Jun 2014 16:15:22 +0800 Indra Pramana wrote: Dear all, I have managed to identify some slow OSDs and journals and have since replaced them. A RADOS benchmark of the whole cluster is now fast, much improved from last time, showing the cluster can go up to 700+ MB/s.
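
(For readers comparing like with like: the two layers are usually measured with something along these lines; the pool name, sizes and guest test file here are placeholders, not taken from the thread.)

    # cluster-level write throughput: 60 seconds of 4 MB objects
    rados bench -p rbd 60 write --no-cleanup

    # inside the guest: oflag=direct bypasses the guest page cache,
    # so the number reflects the virtual disk rather than RAM
    dd if=/dev/zero of=/root/ddtest bs=4M count=256 oflag=direct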

Re: [ceph-users] Calamari Goes Open Source

2014-06-02 Thread John Spray
I'd like to encourage anyone who would like to carry on the conversation to join us on the new calamari mailing list: http://lists.ceph.com/listinfo.cgi/ceph-calamari-ceph.com Thanks, John On Mon, Jun 2, 2014 at 10:10 AM, John Spray john.sp...@inktank.com wrote: Tim, Right now it's purely a

Re: [ceph-users] Expanding pg's of an erasure coded pool

2014-06-02 Thread Kenneth Waegeman
- Message from Guang Yang yguan...@yahoo.com - Date: Fri, 30 May 2014 08:56:37 +0800 From: Guang Yang yguan...@yahoo.com Subject: Re: [ceph-users] Expanding pg's of an erasure coded pool To: Gregory Farnum g...@inktank.com Cc: Kenneth Waegeman
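
(For context, growing placement groups on a pool is normally done with the pair of commands below; pgp_num must be raised after pg_num before data starts rebalancing. Whether this behaves well on erasure-coded pools is exactly what the thread is probing. Pool name and value are placeholders.)

    ceph osd pool set mypool pg_num 1024
    ceph osd pool set mypool pgp_num 1024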

Re: [ceph-users] OSD suffers problems after filesystem crashed and recovered.

2014-06-02 Thread Felix Lee
Hi, Craig, Many thanks for your reply. The disk was completely recovered; the filesystem error was caused by a broken fiber connection (a cable issue). The disk/RAID itself is healthy, so there is no physical disk error, only filesystem corruption in our case. The file system itself was recovered by

Re: [ceph-users] OSD suffers problems after filesystem crashed and recovered.

2014-06-02 Thread Wido den Hollander
On 06/02/2014 12:41 PM, Felix Lee wrote: Hi, Craig, Many thanks for your reply. The disk was completely recovered; the filesystem error was caused by a broken fiber connection (a cable issue). The disk/RAID itself is healthy, so there is no physical disk error, only filesystem corruption in our case.

[ceph-users] Nagios Check for Ceph-Dash

2014-06-02 Thread Christian Eichelmann
Hi Folks! For those of you who are using ceph-dash (https://github.com/Crapworks/ceph-dash), I've created a Nagios plugin that uses the JSON endpoint to monitor your cluster remotely: * https://github.com/Crapworks/check_ceph_dash I think this can be easily adapted to use the ceph-rest-api as
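
(A minimal sketch of how such a remote check could work; the URL and the JSON key names are assumptions for illustration, and the real logic lives in the check_ceph_dash repository linked above.)

    #!/usr/bin/env python
    # Hypothetical Nagios-style check against a ceph-dash JSON endpoint.
    import json
    import sys
    import urllib2

    try:
        status = json.load(urllib2.urlopen("http://ceph-dash.example.com/", timeout=10))
    except Exception as e:
        print "UNKNOWN: %s" % e
        sys.exit(3)

    # "health"/"overall_status" are assumed key names
    health = status.get("health", {}).get("overall_status", "UNKNOWN")
    if health == "HEALTH_OK":
        print "OK: %s" % health
        sys.exit(0)
    elif health == "HEALTH_WARN":
        print "WARNING: %s" % health
        sys.exit(1)
    print "CRITICAL: %s" % health
    sys.exit(2)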

Re: [ceph-users] OSD suffers problems after filesystem crashed and recovered.

2014-06-02 Thread Felix Lee
Hi, Wido, Why even try to recover the XFS filesystem? Well, basically, our intention was to fix the processes stuck in D state, and, yes, I have to admit that at some point recovering the filesystem was kind of a reflex action when hitting a storage error; it's part of the standard procedure for traditional storage systems

[ceph-users] [ANN] ceph-deploy 1.5.3 released

2014-06-02 Thread Alfredo Deza
Hi All, There is a new bug-fix release of ceph-deploy, the easy deployment tool for Ceph. The full list of fixes for this release can be found in the changelog: http://ceph.com/ceph-deploy/docs/changelog.html#id1 Make sure you update! -Alfredo
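
(How you pull the update depends on how ceph-deploy was installed; one of the following, run on the admin node, should do it.)

    sudo pip install --upgrade ceph-deploy                     # pip installs
    sudo yum update ceph-deploy                                # RPM-based distros
    sudo apt-get update && sudo apt-get install ceph-deploy    # Debian/Ubuntu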

[ceph-users] Firefly RPMs broken on CentOS 6

2014-06-02 Thread Brian Rak
Did the python-ceph package go away or something? Upgrading from 0.80.1-0.el6 to 0.80.1-2.el6 does not work. # yum install ceph python-ceph Package python-ceph-0.80.1-0.el6.x86_64 already installed and latest version Resolving Dependencies --> Running transaction check ---> Package ceph.x86_64

Re: [ceph-users] Firefly RPMs broken on CentOS 6

2014-06-02 Thread Brian Rak
Also, the 0.80.1-2.el6 ceph-radosgw RPM no longer includes an init script. Where is the proper place to report issues with the RPMs? On 6/2/2014 9:53 AM, Brian Rak wrote: Did the python-ceph package go away or something? Upgrading from 0.80.1-0.el6 to 0.80.1-2.el6 does not work. # yum

[ceph-users] btrfs + cache tier = disaster

2014-06-02 Thread Scott Laird
I found a fun failure mode this weekend. I have 6 SSDs in my 6-node Ceph cluster at home. The SSDs are partitioned; about half of each SSD is used for journal space for other OSDs, and half holds an OSD for a cache tier. I finally turned the cache on late last week, and everything was great,
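
(For readers unfamiliar with the feature: a cache tier of this kind is attached with roughly the commands below; the pool names are placeholders, not Scott's.)

    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool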

Re: [ceph-users] btrfs + cache tier = disaster

2014-06-02 Thread Thorwald Lundqvist
I'm sorry to hear about that. I'd say don't use btrfs at all; it has proven unstable for us in production even without a cache tier. It's just not ready for production use. On Mon, Jun 2, 2014 at 5:20 PM, Scott Laird sc...@sigkill.org wrote: I found a fun failure mode this weekend. I have 6 SSDs

[ceph-users] Recommended way to use Ceph as storage for file server

2014-06-02 Thread Erik Logtenberg
Hi, In March 2013, Greg wrote an excellent blog post regarding the (then) current status of MDS/CephFS and the plans for going forward with development. http://ceph.com/dev-notes/cephfs-mds-status-discussion/ Since then, I understand progress has been slow, and Greg confirmed that he didn't

Re: [ceph-users] Recommended way to use Ceph as storage for file server

2014-06-02 Thread Mark Nelson
On 06/02/2014 10:54 AM, Erik Logtenberg wrote: Hi, In March 2013, Greg wrote an excellent blog post regarding the (then) current status of MDS/CephFS and the plans for going forward with development. http://ceph.com/dev-notes/cephfs-mds-status-discussion/ Since then, I understand progress

Re: [ceph-users] btrfs + cache tier = disaster

2014-06-02 Thread Scott Laird
I can cope with single-FS failures, within reason. It's the coordinated failures across multiple servers that really freak me out. On Mon, Jun 2, 2014 at 8:47 AM, Thorwald Lundqvist thorw...@jumpstarter.io wrote: I'm sorry to hear about that. I'd say don't use btrfs at all, it has proven

[ceph-users] mellanox SX1012 ethernet|infiniband switch, somebody use it for ceph ?

2014-06-02 Thread Alexandre DERUMIER
Hi, I'm looking for a fast and cheap 10GbE ethernet switch. I just found this: Mellanox SX1012 http://www.mellanox.com/page/products_dyn?product_family=163&mtag=sx1012 48 ports 10GbE | 12 ports 40Gbit for around 5000€. Seems that infiniband (RoCE) is also available. Does somebody use it for

Re: [ceph-users] btrfs + cache tier = disaster

2014-06-02 Thread Dmitry Smirnov
On Mon, 2 Jun 2014 17:47:57 Thorwald Lundqvist wrote: I'd say don't use btrfs at all, it has proven unstable for us in production even without cache. It's just not ready for production use. Perception of stability depends on experience. For instance, some consider XFS to be ready for production

Re: [ceph-users] btrfs + cache tier = disaster

2014-06-02 Thread Scott Laird
Oh, and thanks for the 'filestore btrfs snap = false' pointer. In ceph.conf, under [osd], I assume? On Mon, Jun 2, 2014 at 10:07 AM, Scott Laird sc...@sigkill.org wrote: FWIW, I figured out the Ceph out-of-memory error that was keeping me from recovering one FS: # ls -l /mnt ls: cannot
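
(If the assumption holds, the setting would look like this in ceph.conf; note it only matters for OSDs whose filestore sits on btrfs.)

    [osd]
        filestore btrfs snap = false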

Re: [ceph-users] Firefly RPMs broken on CentOS 6

2014-06-02 Thread Brian Rak
They're from that link. They were definitely present in the repository a couple of hours ago. Maybe this got reverted? On 6/2/2014 1:08 PM, Alfredo Deza wrote: Brian Where is that ceph repo coming from? I don't see any 0.80.1-2 in http://ceph.com/rpm-firefly/el6/x86_64/ On Mon, Jun 2, 2014

Re: [ceph-users] Firefly RPMs broken on CentOS 6

2014-06-02 Thread Alfredo Deza
Oh, I see that it is coming from EPEL. We have not packaged that; not sure why EPEL is suddenly serving those :/ Officially we have not built a 0.80.1-2. A possible workaround for this would be to up the priority on the repo file for ceph in /etc/yum.repos.d/ceph.repo but you would need to
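
(A sketch of the workaround being described, assuming the yum-plugin-priorities package is installed; the lower the number, the higher the priority, so the ceph.com repo wins over EPEL.)

    # /etc/yum.repos.d/ceph.repo
    [ceph]
    name=Ceph firefly packages
    baseurl=http://ceph.com/rpm-firefly/el6/x86_64/
    enabled=1
    gpgcheck=1
    priority=1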

Re: [ceph-users] Recommended way to use Ceph as storage for file server

2014-06-02 Thread Dimitri Maziuk
On 06/02/2014 11:24 AM, Mark Nelson wrote: A more or less obvious alternative to CephFS would be to simply create a huge RBD and have a separate file server (running NFS / Samba / whatever) use that block device as the backend. Just put a regular FS on top of the RBD and use it that way.
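
(A rough sketch of that approach, with made-up names and sizes; the RBD is mapped with the kernel client on the file server and exported like any other local filesystem.)

    rbd create fileserver --size 10485760      # size in MB, i.e. 10 TB
    rbd map fileserver
    mkfs.xfs /dev/rbd/rbd/fileserver
    mount /dev/rbd/rbd/fileserver /export
    echo '/export *(rw,no_root_squash)' >> /etc/exports
    exportfs -ra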

Re: [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ; login:

2014-06-02 Thread Patrick McGarry
This is great. Thanks for sharing, Filippos! Best Regards, Patrick McGarry Director, Community || Inktank http://ceph.com || http://inktank.com @scuttlemonkey || @ceph || @inktank On Mon, Jun 2, 2014 at 2:32 PM, Filippos Giannakos philipg...@grnet.gr wrote: Hello all, As you may already

Re: [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ; login:

2014-06-02 Thread Ian Colle
Thanks, Filippos! Very interesting reading. Are you comfortable enough yet to remove the RAID-1 from your architecture and get all that space back? Ian R. Colle Global Director of Software Engineering Red Hat (Inktank is now part of Red Hat!) http://www.linkedin.com/in/ircolle

Re: [ceph-users] Problem with radosgw and some file name characters

2014-06-02 Thread Andrei Mikhailovsky
Yehuda, sorry for the delayed reply; I was away for a week or so. The problem happens regardless of the client. I've tried a few. You are right, I've got a load balancer and a reverse proxy in front of the radosgw service. My setup is as follows: Internet --- Load Balancer --

Re: [ceph-users] Problem with radosgw and some file name characters

2014-06-02 Thread Yehuda Sadeh
I think your proxy or load balancer rewrites the requests, translates the spaces and other special characters, which in turn clobbers the authentication signatures. You can try disabling this functionality. Yehuda On Mon, Jun 2, 2014 at 3:49 PM, Andrei Mikhailovsky and...@arhont.com wrote:
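
(If the frontend is Apache, the usual way to stop it from re-encoding object names is something like the directives below; hostname and port are placeholders, and other proxies have equivalent switches.)

    AllowEncodedSlashes NoDecode
    ProxyPass / http://radosgw.internal:8080/ nocanon
    ProxyPassReverse / http://radosgw.internal:8080/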

[ceph-users] Failed - InvalidArgument 400 When changing objcet's ACL

2014-06-02 Thread wsnote
Hi, everyone! I have installed a Ceph cluster with object storage. Now I have a question: I can use an S3 client or SDK to upload or delete an object, but I can't change the ACL of objects. When I try to change the ACL, the error is "Failed - InvalidArgument 400". Which configuration controls this? Thanks!
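
(For comparison, setting a canned ACL through boto, the common S3 SDK of the day, looks roughly like this; endpoint, keys and names are placeholders. If this also returns InvalidArgument, the problem is likely server-side rather than in the client.)

    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='radosgw.example.com',
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    key = conn.get_bucket('mybucket').get_key('myobject')
    key.set_acl('public-read')  # canned ACL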

Re: [ceph-users] mellanox SX1012 ethernet|infiniband switch, somebody use it for ceph ?

2014-06-02 Thread Alexandre DERUMIER
Thanks, Carlos! How about: Infiniband - Voltaire 4036 - dual power, 36 QDR ports for about $1300 (eBay) - pay attention to the fan module for air flow direction.

Re: [ceph-users] mellanox SX1012 ethernet|infiniband switch, somebody use it for ceph ?

2014-06-02 Thread Alexandre DERUMIER
I just found this: http://www.mellanox.com/related-docs/whitepapers/WP_Deploying_Ceph_over_High_Performance_Networks.pdf Good to see that Ceph is beginning to be tested by hardware vendors :) The whitepaper includes rados bench and fio results - Mail original - De: Alexandre DERUMIER
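
(The whitepaper's exact parameters aren't reproduced here; a representative fio run against an RBD image using fio's rbd engine might look like the following, with pool and image names assumed.)

    fio --name=rbdtest --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=32 --size=1G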

Re: [ceph-users] [Annonce]The progress of KeyValueStore in Firely

2014-06-02 Thread Sushma R
Hi Haomai, I tried to compare the READ performance of FileStore and KeyValueStore using the internal tool ceph_smalliobench, and I see KeyValueStore's performance is approximately half that of FileStore. I'm using a similar conf file to yours. Is this the expected behavior, or am I missing something?
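
(For anyone wanting to reproduce the comparison: as far as I recall, so treat this as an assumption, the Firefly-era switch for the experimental backend was a ceph.conf setting along these lines, set before the OSDs are created.)

    [osd]
        osd objectstore = keyvaluestore-dev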

[ceph-users] v0.81 released

2014-06-02 Thread Sage Weil
This is the first development release since Firefly. It includes a lot of work that we delayed merging while stabilizing things. Lots of new functionality, as well as several fixes that are baking a bit before getting backported. Upgrading: * CephFS support for the legacy anchor table