Hi,
That's strange. 3.13 is way before any changes that could have had any
such effect. Can you by any chance try with older kernels to see where
it starts misbehaving for you? 3.12? 3.10? 3.8?
If I have to compile kernels anyway, I will test 3.16.3 as well :-/.
Debian has
On Tue, Sep 30, 2014 at 1:30 PM, Micha Krause mi...@krausam.de wrote:
Hi,
That's strange. 3.13 is way before any changes that could have had any
such effect. Can you by any chance try with older kernels to see where
it starts misbehaving for you? 3.12? 3.10? 3.8?
If I have to
On Tue, 2014-09-30 at 00:30 +0900, Christian Balzer wrote:
On Mon, 29 Sep 2014 11:15:21 +0200 Emmanuel Lacour wrote:
On Mon, Sep 29, 2014 at 05:57:12PM +0900, Christian Balzer wrote:
Given your SSDs, are they failing after more than 150 TB have been
written?
between 30 and 40 TB
Using the S3 API to the Object Gateway, let's say that I create an object
named /some/path/foo.bar. When I browse this object in Ceph using a
graphical S3 client, 'some' and 'path' show up as directories. I realize
that they're not *actually* directories, but can I set an ACL on them,
just like I can with
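For reference, a minimal boto sketch of one common workaround: the
prefixes only exist implicitly, so there is nothing to attach an ACL to
unless you create zero-byte placeholder keys for them. The endpoint,
credentials, and bucket name below are illustrative, not from the
original post.

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.get_bucket('test-bucket')

# The object itself can carry an ACL as usual.
key = bucket.get_key('some/path/foo.bar')
key.set_acl('public-read')

# 'some/' and 'some/path/' are not real objects; create zero-byte
# placeholders so there is something to hang an ACL on.
for prefix in ('some/', 'some/path/'):
    placeholder = bucket.new_key(prefix)
    placeholder.set_contents_from_string('')
    placeholder.set_acl('private')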
Hi,
I'm using the radosgw REST API (via Python's boto library and also using
some radosgw-agent methods) to fetch some data from Ceph (version 0.85).
When I try to get the admin log for some specific dates, the radosgw seems
to give me bad results.
For example, when I try to get entries since
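In case it helps to reproduce this, a rough sketch of querying the
admin log endpoint directly through boto's make_request, which is
essentially the same trick radosgw-agent uses. The endpoint and
credentials are illustrative, and the query parameter names (type, id,
start-time) are assumptions from the admin ops API that may differ on
0.85; the user also needs the matching admin caps.

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ADMIN_ACCESS_KEY',
    aws_secret_access_key='ADMIN_SECRET_KEY',
    host='rgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# bucket='admin', key='log' only serves to build the /admin/log path;
# the date is URL-encoded by hand since query_args is passed through raw.
resp = conn.make_request(
    'GET', bucket='admin', key='log',
    query_args='type=metadata&id=0&start-time=2014-09-01%2000:00:00',
)
print(resp.status)
print(resp.read())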
Hi all,
I have very slow IO operations on my cluster when I stop a node for
maintenance.
I know my cluster can handle the load with only 2 of my 4 nodes.
I set the noout flag with ceph osd set noout.
From my understanding, stopping only one Ceph node should not be a big
issue.
BUT when I down
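For comparison, the usual maintenance flow, as a hedged sketch driven
from Python (the CLI commands are standard; whether this matches the
exact procedure used here is an assumption):

import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI.
    return subprocess.check_output(('ceph',) + args)

ceph('osd', 'set', 'noout')    # keep stopped OSDs from being marked out
# ... stop the ceph-osd daemons on the node and do the maintenance;
# PGs will be degraded but should keep serving IO ...
ceph('osd', 'unset', 'noout')  # let the cluster settle afterwards
print(ceph('-s'))              # watch health during the window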
On Fri, Sep 19, 2014 at 5:25 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 19 Sep 2014, Florian Haas wrote:
Hello everyone,
Just thought I'd circle back on some discussions I've had with people
earlier in the year:
Shortly before firefly, snapshot support for CephFS clients was effectively
On Tue, 30 Sep 2014 15:26:31 +0100 Kingsley Tart wrote:
On Tue, 2014-09-30 at 00:30 +0900, Christian Balzer wrote:
On Mon, 29 Sep 2014 11:15:21 +0200 Emmanuel Lacour wrote:
On Mon, Sep 29, 2014 at 05:57:12PM +0900, Christian Balzer wrote:
Given your SSDs, are they failing after
On our dev cluster, I've got a PG that won't create. We had a host fail
with 10 OSDs that needed to be rebuilt. A number of other OSDs were down
for a few days (did I mention this was a dev cluster?). The other OSDs
eventually came up once the OSD maps caught up on them. I rebuilt the OSDs
on all
Yeah, the last acting set there is probably from prior to your lost
data and forced pg creation, so it might not have any bearing on
what's happening now.
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, Sep 30, 2014 at 10:07 AM, Robert LeBlanc rob...@leblancnet.us wrote:
I
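One way to see what the stuck PG is actually blocked on, as a minimal
sketch (the PG id is hypothetical, and the exact JSON fields can vary
by version):

import json
import subprocess

pgid = '0.5'  # hypothetical id of the pg stuck in creating

# 'ceph pg <pgid> query' reports the pg's state plus its peering and
# recovery history; the recovery_state entries usually say what it is
# waiting on.
out = subprocess.check_output(
    ['ceph', 'pg', pgid, 'query', '--format=json'])
info = json.loads(out)

print(info['state'])
for entry in info.get('recovery_state', []):
    print('%s %s' % (entry.get('name'), entry.get('comment', '')))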
Hello all,
I installed OpenStack with Glance + Ceph OSD with replication factor 2 and
now I can see the write operations are extremely slow.
For example, I can see only 0.04 MB/s write speed when I run rados bench
with 512b blocks:
rados bench -p test 60 write --no-cleanup -t 1 -b 512
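For perspective, a quick back-of-the-envelope check: with one op in
flight (-t 1) and 512-byte writes, rados bench measures per-write
latency rather than bandwidth, so 0.04 MB/s works out to roughly 12 ms
per synchronous replicated write:

# 0.04 MB/s at 512-byte blocks with a queue depth of 1 is latency-bound.
block_size = 512                  # bytes, from -b 512
throughput = 0.04 * 1024 * 1024   # bytes/s reported by rados bench
iops = throughput / block_size    # ~82 synchronous writes per second
latency_ms = 1000.0 / iops        # ~12 ms per write round trip
print('%.0f IOPS, %.1f ms per write' % (iops, latency_ms))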
Sending this again, because it wasn't published to the mailing list (I
think because I wasn't a subscriber).
Hi,
I'm using the radosgw REST API (via Python's boto library and also using
some radosgw-agent methods) to fetch some data from Ceph (version 0.85).
When I try to get the admin log for
On 09/29/2014 03:58 AM, Dan Van Der Ster wrote:
Hi Emmanuel,
This is interesting, because we’ve had sales guys telling us that those Samsung
drives are definitely the best for a Ceph journal O_o !
Our sales guys or Samsung sales guys? :) If it was ours, let me know.
The conventional
On Tuesday, September 30, 2014, Robert LeBlanc rob...@leblancnet.us wrote:
On our dev cluster, I've got a PG that won't create. We had a host fail
with 10 OSDs that needed to be rebuilt. A number of other OSDs were down
for a few days (did I mention this was a dev cluster?). The other OSDs
I rebuilt the primary OSD (29) in the hopes it would unblock whatever it
was, but no luck. I'll check the admin socket and see if there is anything
I can find there.
On Tue, Sep 30, 2014 at 10:36 AM, Gregory Farnum g...@inktank.com wrote:
On Tuesday, September 30, 2014, Robert LeBlanc
On Tue, Sep 23, 2014 at 9:20 AM, Steve Kingsland
steve.kingsl...@opower.com wrote:
Using the S3 API to the Object Gateway, let's say that I create an object
named /some/path/foo.bar. When I browse this object in Ceph using a
graphical S3 client, 'some' and 'path' show up as directories. I realize
that
On 30/09/2014 22:55, Patrick McGarry wrote:
Hey cephers,
The schedule and call for blueprints is now up for our next CDS as we
aim for the Hammer release:
http://ceph.com/community/ceph-developer-summit-hammer/
If you have any work that you plan on submitting through the end of
the year,
Hey cephers,
The schedule and call for blueprints is now up for our next CDS as we
aim for the Hammer release:
http://ceph.com/community/ceph-developer-summit-hammer/
If you have any work that you plan on submitting through the end of
the year, please fill out a Blueprint (the big red button)
Hello
I'm a new user with Ceph; maybe someone can help. I installed Ceph
version 0.85,
and set up mon, osd, and radosgw for the S3 API.
When I upload objects like *.mp3, *.exe, or *.psd everything is fine and
the objects have an owner,
but if I upload any image like jpg, png, or gif, the object has no
owner, and I
couldn't set an ACL
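A small boto sketch that might help narrow this down: upload one of the
affected images and read the ACL document back, to confirm whether the
owner really comes back empty from the gateway. Bucket name,
credentials, and endpoint are illustrative.

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.get_bucket('test-bucket')

key = bucket.new_key('picture.jpg')
key.set_contents_from_filename('picture.jpg')

# If the bug is on the gateway side, owner id and display_name should
# come back empty for images but populated for other content types.
policy = key.get_acl()
print(policy.owner.id)
print(policy.owner.display_name)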
Hi Ceph developers,
Last stable Firefly release (v0.80.5) was tagged on July 29 (over 2
months ago). Since then, there have been twice as many commits merged
into the firefly branch as existed on the branch before v0.80.5:
$ git log --oneline --no-merges v0.80..v0.80.5|wc -l
122
$ git log
Hello,
[reduced to ceph-users]
On Sat, 27 Sep 2014 19:17:22 +0400 Timur Nurlygayanov wrote:
Hello all,
I installed OpenStack with Glance + Ceph OSD with replication factor 2
and now I can see the write operations are extremely slow.
For example, I can see only 0.04 MB/s write speed when I
Hi All,
I'm working on integrating our new Ceph cluster with our older
OpenStack infrastructure. It's going pretty well so far, but I'm looking
to check my expectations.
We're running Firefly on the Ceph side and Icehouse on the OpenStack
side. I've pulled the recommended nova branch from