Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server

2014-09-30 Thread Micha Krause
Hi, That's strange. 3.13 is way before any changes that could have had any such effect. Can you by any chance try with older kernels to see where it starts misbehaving for you? 3.12? 3.10? 3.8? If I have to compile kernels anyway, I will test 3.16.3 as well :-/. Debian has

Re: [ceph-users] Frequent Crashes on rbd to nfs gateway Server

2014-09-30 Thread Ilya Dryomov
On Tue, Sep 30, 2014 at 1:30 PM, Micha Krause mi...@krausam.de wrote: Hi, That's strange. 3.13 is way before any changes that could have had any such effect. Can you by any chance try with older kernels to see where it starts misbehaving for you? 3.12? 3.10? 3.8? If I have to

Re: [ceph-users] SSD MTBF

2014-09-30 Thread Kingsley Tart
On Tue, 2014-09-30 at 00:30 +0900, Christian Balzer wrote: On Mon, 29 Sep 2014 11:15:21 +0200 Emmanuel Lacour wrote: On Mon, Sep 29, 2014 at 05:57:12PM +0900, Christian Balzer wrote: Given your SSDs, are they failing after more than 150TB have been written? between 30 and 40 TB
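The endurance figures in this thread (drives dying between 30 and 40 TB written, versus a 150 TB expectation) can be turned into a rough lifetime estimate. A hedged sketch: the endurance numbers come from the thread, but the average journal write rate below is an illustrative assumption, not a measurement from anyone's cluster.

```python
# Estimate days until a drive's rated total-bytes-written (TBW) is reached.
# Endurance figures (30 TB, 150 TB) are from the thread; the 10 MB/s average
# journal write rate and write amplification factor are assumptions.

def lifetime_days(endurance_tb: float, avg_write_mb_s: float,
                  write_amp: float = 1.0) -> float:
    """Days of sustained writing before the TBW rating is exhausted."""
    bytes_total = endurance_tb * 1e12
    bytes_per_day = avg_write_mb_s * 1e6 * 86400 * write_amp
    return bytes_total / bytes_per_day

# A drive that wore out after ~30 TB, at an assumed 10 MB/s average:
print(round(lifetime_days(30, 10)))    # ~35 days
# The same load against a 150 TB rating:
print(round(lifetime_days(150, 10)))   # ~174 days
```

This is why journal SSDs with low TBW ratings can fail in weeks rather than years under a steady Ceph journal workload.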

[ceph-users] Can you assign ACLs to a virtual directory using Object Gateway's S3 API?

2014-09-30 Thread Steve Kingsland
Using the S3 API to the Object Gateway, let's say that I create an object named /some/path/foo.bar. When I browse this object in Ceph using a graphical S3 client, "some" and "path" show up as directories. I realize that they're not *actually* directories, but can I set an ACL on them, just like I can with
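The question above hinges on the fact that S3 keys are flat strings: the "directories" a graphical client shows are common prefixes synthesized at listing time, which is why radosgw (like S3 itself) only attaches ACLs to buckets and whole object keys. A hedged, pure-Python illustration of that listing behaviour (not radosgw's actual code):

```python
# Mimic an S3 delimiter listing: keys are flat names, and "directories"
# (CommonPrefixes) only exist as a grouping computed during the listing.

def list_with_delimiter(keys, prefix="", delimiter="/"):
    """Split keys under `prefix` into direct objects and CommonPrefixes."""
    objects, prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the next delimiter becomes a synthetic "folder".
            prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return objects, sorted(prefixes)

keys = ["some/path/foo.bar", "some/other.txt", "top.txt"]
print(list_with_delimiter(keys))           # (['top.txt'], ['some/'])
print(list_with_delimiter(keys, "some/"))  # (['some/other.txt'], ['some/path/'])
```

Since `some/` never exists as an object, there is nothing to hang an ACL on; the usual workaround is a bucket policy or per-object ACLs on every key under the prefix.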

[ceph-users] [radosgw] Admin REST API wrong results

2014-09-30 Thread Patrycja Szabłowska
Hi, I'm using the radosgw REST API (via Python's boto library and also using some radosgw-agent methods) to fetch some data from Ceph (version 0.85). When I try to get the admin log for some specific dates, the radosgw seems to give me bad results. For example when I try to get entries since

[ceph-users] Node maintenance : Very slow IO operations when i stop node

2014-09-30 Thread Thomas Bernard
Hi all, I have very slow IO operations on my cluster when I stop a node for maintenance. I know my cluster can support the load with only 2 of my 4 nodes. I set the noout flag with ceph osd set noout. From my understanding, stopping only one Ceph node should not cause a big issue. BUT when I down
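The maintenance flow being described is the standard one; a sketch of it as an ops runbook (the `ceph osd set/unset noout` commands are real ceph CLI, but the service-manager invocations vary by release and distro, so treat those lines as placeholders):

```shell
ceph osd set noout             # stop CRUSH from marking the node's OSDs out
                               # and rebalancing while it is down
# On the node being serviced (exact command depends on your init system):
#   service ceph stop osd      # or: systemctl stop ceph-osd.target
# ... perform the maintenance, reboot, etc. ...
#   service ceph start osd     # or: systemctl start ceph-osd.target
ceph osd unset noout           # allow normal out-marking/recovery again
ceph -s                        # wait for HEALTH_OK before the next node
```

With noout set, PGs on the down node go degraded but are not backfilled elsewhere; client IO slowness during the window usually comes from requests that must wait on the down OSDs' PGs peering with the remaining replicas.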

Re: [ceph-users] Status of snapshots in CephFS

2014-09-30 Thread Florian Haas
On Fri, Sep 19, 2014 at 5:25 PM, Sage Weil sw...@redhat.com wrote: On Fri, 19 Sep 2014, Florian Haas wrote: Hello everyone, Just thought I'd circle back on some discussions I've had with people earlier in the year: Shortly before firefly, snapshot support for CephFS clients was effectively

Re: [ceph-users] SSD MTBF

2014-09-30 Thread Christian Balzer
On Tue, 30 Sep 2014 15:26:31 +0100 Kingsley Tart wrote: On Tue, 2014-09-30 at 00:30 +0900, Christian Balzer wrote: On Mon, 29 Sep 2014 11:15:21 +0200 Emmanuel Lacour wrote: On Mon, Sep 29, 2014 at 05:57:12PM +0900, Christian Balzer wrote: Given your SSDs, are they failing after

[ceph-users] PG stuck creating

2014-09-30 Thread Robert LeBlanc
On our dev cluster, I've got a PG that won't create. We had a host fail with 10 OSDs that needed to be rebuilt. A number of other OSDs were down for a few days (did I mention this was a dev cluster?). The other OSDs eventually came up once the OSD maps caught up on them. I rebuilt the OSDs on all

Re: [ceph-users] PG stuck creating

2014-09-30 Thread Gregory Farnum
Yeah, the last acting set there is probably from prior to your lost data and forced pg creation, so it might not have any bearing on what's happening now. Software Engineer #42 @ http://inktank.com | http://ceph.com On Tue, Sep 30, 2014 at 10:07 AM, Robert LeBlanc rob...@leblancnet.us wrote: I

[ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-09-30 Thread Timur Nurlygayanov
Hello all, I installed OpenStack with Glance + Ceph OSD with replication factor 2 and now I can see the write operations are extremely slow. For example, I can see only 0.04 MB/s write speed when I run rados bench with 512b blocks: rados bench -p test 60 write --no-cleanup -t 1 -b 512
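The reported throughput is less alarming once converted into per-operation latency: with `-t 1` there is exactly one 512-byte write in flight at a time, so the benchmark measures round-trip latency, not bandwidth. A quick worked calculation from the numbers in the post:

```python
# Convert the reported 0.04 MB/s at -b 512 -t 1 into IOPS and latency.
# Figures come from the thread; the arithmetic is the only thing added here.

block_size = 512                     # bytes per write (-b 512)
throughput = 0.04 * 1024 * 1024      # 0.04 MB/s expressed in bytes/s
iops = throughput / block_size       # one op in flight at a time (-t 1)
latency_ms = 1000.0 / iops           # per-op round-trip latency

print(round(iops, 1))                # ~81.9 ops/s
print(round(latency_ms, 1))          # ~12.2 ms per replicated write
```

Around 12 ms per synchronous, replicated 512-byte write is plausible for spinning disks (journal write plus replication round trip), so the fix is usually more parallelism (higher `-t`), SSD journals, or both, rather than anything being broken.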

[ceph-users] [radosgw] Admin REST API wrong results

2014-09-30 Thread Patrycja Szabłowska
Sending this again, because it wasn't published to the mailing list (I think because I wasn't a subscriber). Hi, I'm using the radosgw REST API (via Python's boto library and also using some radosgw-agent methods) to fetch some data from Ceph (version 0.85). When I try to get the admin log for

Re: [ceph-users] SSD MTBF

2014-09-30 Thread Mark Nelson
On 09/29/2014 03:58 AM, Dan Van Der Ster wrote: Hi Emmanuel, This is interesting, because we’ve had sales guys telling us that those Samsung drives are definitely the best for a Ceph journal O_o ! Our sales guys or Samsung sales guys? :) If it was ours, let me know. The conventional

Re: [ceph-users] PG stuck creating

2014-09-30 Thread Gregory Farnum
On Tuesday, September 30, 2014, Robert LeBlanc rob...@leblancnet.us wrote: On our dev cluster, I've got a PG that won't create. We had a host fail with 10 OSDs that needed to be rebuilt. A number of other OSDs were down for a few days (did I mention this was a dev cluster?). The other OSDs

Re: [ceph-users] PG stuck creating

2014-09-30 Thread Robert LeBlanc
I rebuilt the primary OSD (29) in the hopes it would unblock whatever it was, but no luck. I'll check the admin socket and see if there is anything I can find there. On Tue, Sep 30, 2014 at 10:36 AM, Gregory Farnum g...@inktank.com wrote: On Tuesday, September 30, 2014, Robert LeBlanc

Re: [ceph-users] Can you assign ACLs to a virtual directory using Object Gateway's S3 API?

2014-09-30 Thread Yehuda Sadeh
On Tue, Sep 23, 2014 at 9:20 AM, Steve Kingsland steve.kingsl...@opower.com wrote: Using the S3 API to Object Gateway, let's say that I create an object named /some/path/foo.bar. When I browse this object in Ceph using a graphical S3 client, some and path show up as directories. I realize that

Re: [ceph-users] Ceph Developer Summit: Hammer

2014-09-30 Thread Yann Dupont
On 30/09/2014 at 22:55, Patrick McGarry wrote: Hey cephers, The schedule and call for blueprints is now up for our next CDS as we aim for the Hammer release: http://ceph.com/community/ceph-developer-summit-hammer/ If you have any work that you plan on submitting through the end of the year,

[ceph-users] Ceph Developer Summit: Hammer

2014-09-30 Thread Patrick McGarry
Hey cephers, The schedule and call for blueprints is now up for our next CDS as we aim for the Hammer release: http://ceph.com/community/ceph-developer-summit-hammer/ If you have any work that you plan on submitting through the end of the year, please fill out a Blueprint (the big red button)

[ceph-users] Fwd: images have no owner

2014-09-30 Thread Mick S
Hello, I'm a new Ceph user; maybe someone can help. I installed Ceph version 0.85 and set up mon, osd, and radosgw for the S3 API. When I upload objects like *.mp3, *.exe, or *.psd everything is fine and the objects have an owner, but if I upload any image like jpg, png, or gif, the object has no owner, and I couldn't set an acl

[ceph-users] Firefly maintenance release schedule

2014-09-30 Thread Dmitry Borodaenko
Hi Ceph developers, The last stable Firefly release (v0.80.5) was tagged on July 29 (over 2 months ago). Since then, more than twice as many commits have been merged into the firefly branch as existed on it before v0.80.5: $ git log --oneline --no-merges v0.80..v0.80.5|wc -l 122 $ git log
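The commit-count comparison above uses `git log` with a revision range; it can be reproduced against any repo. A self-contained sketch in a throwaway repository (tag names mirror the thread, but the commits themselves are dummies):

```shell
# Count commits between two tags and since the last tag, as in the post.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "base"
git tag v0.80
for i in 1 2; do git commit -q --allow-empty -m "stable fix $i"; done
git tag v0.80.5
for i in 1 2 3; do git commit -q --allow-empty -m "post-release $i"; done
# Commits on the branch between the two release tags:
before=$(git log --oneline --no-merges v0.80..v0.80.5 | wc -l)
# Commits merged since the last tagged release:
after=$(git log --oneline --no-merges v0.80.5..HEAD | wc -l)
echo "between tags: $before, since v0.80.5: $after"
```

The `A..B` range selects commits reachable from B but not from A, which is exactly the "what has landed since the last release" question being asked here.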

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-09-30 Thread Christian Balzer
Hello, [reduced to ceph-users] On Sat, 27 Sep 2014 19:17:22 +0400 Timur Nurlygayanov wrote: Hello all, I installed OpenStack with Glance + Ceph OSD with replication factor 2 and now I can see the write operations are extremely slow. For example, I can see only 0.04 MB/s write speed when I

[ceph-users] rbd + openstack nova instance snapshots?

2014-09-30 Thread Jonathan Proulx
Hi All, I'm working on integrating our new Ceph cluster with our older OpenStack infrastructure. It's going pretty well so far but looking to check my expectations. We're running Firefly on the ceph side and Icehouse on the OpenStack side. I've pulled the recommended nova branch from