Re: [ceph-users] Large LOG like files on monitor

2015-10-08 Thread Christian Balzer
Hello, On Thu, 8 Oct 2015 10:27:16 +0200 Erwin Lubbers wrote: > Christian, > > Still running Dumpling (I know I have to start upgrading). Cluster has > 66 OSDs and a total size close to 100 GB. Cluster is running for around > 2 years now and the monitor server has an uptime of 258 days. > >

Re: [ceph-users] O_DIRECT on deep-scrub read

2015-10-08 Thread Paweł Sadowski
On 10/07/2015 10:52 PM, Sage Weil wrote: > On Wed, 7 Oct 2015, David Zafman wrote: >> There would be a benefit to doing fadvise POSIX_FADV_DONTNEED after >> deep-scrub reads for objects not recently accessed by clients. > Yeah, it's the 'except for stuff already in cache' part that we don't do >
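For readers following the thread: the cache effect being discussed is easy to observe by hand. A minimal sketch, assuming a hypothetical object path on an OSD and root privileges for dropping caches; a buffered read grows the page cache while an O_DIRECT read leaves it alone, which is what the fadvise/O_DIRECT proposals aim to approximate for scrub reads:

    # start from a cold cache and note the baseline (root required)
    sync; echo 3 > /proc/sys/vm/drop_caches
    grep ^Cached: /proc/meminfo
    # buffered read: the page cache grows by roughly the object size
    dd if=/var/lib/ceph/osd/ceph-0/current/0.1_head/some__object of=/dev/null bs=4M
    grep ^Cached: /proc/meminfo
    # O_DIRECT read: the page cache stays where it was
    dd if=/var/lib/ceph/osd/ceph-0/current/0.1_head/some__object of=/dev/null bs=4M iflag=direct
    grep ^Cached: /proc/meminfo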

Re: [ceph-users] Large LOG like files on monitor

2015-10-08 Thread Erwin Lubbers
Christian, Still running Dumpling (I know I have to start upgrading). Cluster has 66 OSDs and a total size close to 100 GB. Cluster is running for around 2 years now and the monitor server has an uptime of 258 days. The LOG file is 1.2 GB in size and ls shows the current time for it. The

[ceph-users] "stray" objects in empty cephfs data pool

2015-10-08 Thread Burkhard Linke
Hi, I've moved all files from a CephFS data pool (EC pool with frontend cache tier) in order to remove the pool completely. Some objects are left in the pools ('ceph df' output of the affected pools):

    cephfs_ec_data 19 7565k 0 66288G 13

Listing the
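In case it helps anyone hitting the same leftover-object situation, the surviving object names can be listed straight from the pool with the standard rados CLI; <objectname> below is a placeholder:

    # list whatever objects survived the file migration
    rados -p cephfs_ec_data ls
    # inspect one of them (size, mtime)
    rados -p cephfs_ec_data stat <objectname>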

[ceph-users] Large LOG like files on monitor

2015-10-08 Thread Erwin Lubbers
Hi, In the /var/lib/ceph/mon/ceph-l16-s01/store.db/ directory there are two very large files, LOG and LOG.OLD (multiple GBs), and my disk space is running low. Can I safely delete those files? Regards, Erwin

Re: [ceph-users] proxmox 4.0 release : lxc with krbd support and qemu librbd improvements

2015-10-08 Thread Irek Fasikhov
Hi, Alexandre. Very, very good! Thank you for your work! :) Best regards, Fasikhov Irek Nurgayazovich, mob.: +79229045757 2015-10-07 7:25 GMT+03:00 Alexandre DERUMIER : > Hi, > > proxmox 4.0 has been released: > > http://forum.proxmox.com/threads/23780-Proxmox-VE-4-0-released! >

[ceph-users] input / output error

2015-10-08 Thread gjprabu
Hi All, We have Ceph RBD with OCFS2-mounted servers. We are facing I/O errors while moving data within the same disk (copying does not cause any problem). As a temporary fix we remount the partition and the issue is resolved, but after some time the problem reproduces. If
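No resolution appears in the snippet; as a first diagnostic step one would typically check the client kernel logs around the time of a failed move. A hedged sketch:

    # look for rbd/ocfs2 errors near the time of the failed move
    dmesg -T | grep -iE 'rbd|ocfs2|error'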

Re: [ceph-users] leveldb compaction error

2015-10-08 Thread Selcuk TUNC
Hi Narendra, we upgraded from (0.80.9)Firefly to Hammer. On Thu, Oct 8, 2015 at 2:49 AM, Narendra Trivedi (natrived) < natri...@cisco.com> wrote: > Hi Selcuk, > > > > Which version of ceph did you upgrade from to Hammer (0.94)? > > > > --Narendra > > > > *From:* ceph-users

Re: [ceph-users] Placement rule not resolved

2015-10-08 Thread ghislain.chevalier
Hi all, I hadn't noticed that the osd reweight for the SSDs was curiously set to a low value. I don't know how and when these values were set so low. Our environment is Mirantis-driven and the installation was powered by Fuel and Puppet. (The installation was run by the OpenStack team and I checked the

[ceph-users] How to improve 'rbd ls [pool]' response time

2015-10-08 Thread WD_Hwang
Hi, all: If the Ceph cluster health status is HEALTH_OK, the execution time of 'sudo rbd ls rbd' is very short, like the following results. $ time sudo rbd ls rbd real 0m0.096s user 0m0.014s sys 0m0.028s But if there are several warnings (e.g.: 1 pgs degraded; 6 pgs incomplete; 1650
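The symptom makes sense given how 'rbd ls' works: it reads a single directory object in the pool, so if the PG holding that object is inactive or incomplete, the listing blocks. A quick check, using standard commands and the pool name from the post:

    # which PG/OSDs serve the object that 'rbd ls' reads?
    ceph osd map rbd rbd_directory
    # are any PGs stuck inactive?
    ceph pg dump_stuck inactive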

Re: [ceph-users] Large LOG like files on monitor

2015-10-08 Thread Christian Balzer
Hello, On Thu, 8 Oct 2015 09:38:02 +0200 Erwin Lubbers wrote: > Hi, > > In the /var/lib/ceph/mon/ceph-l16-s01/store.db/ directory there are two > very large files LOG and LOG.OLD (multiple GB's) and my diskspace is > running low. Can I safely delete those files? > That sounds odd, what
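For the archives: the LOG and LOG.OLD files are leveldb's own debug logs and are recreated on restart, so a multi-GB LOG usually points at store churn rather than files to delete by hand. A hedged sketch of triggering compaction instead (mon id taken from the path in the original post):

    # compact the monitor's leveldb store online
    ceph tell mon.l16-s01 compact
    # or compact at daemon startup, via ceph.conf:
    # [mon]
    # mon compact on start = true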

[ceph-users] get user list via rados-rest: {code: 403, message: Forbidden}

2015-10-08 Thread Klaus Franken
Hi, I’m trying to get a list of all users from the rados REST gateway, analogous to "radosgw-admin metadata list user". I can retrieve the user info for a specified user from https://rgw01.XXX.de/admin/user?uid=klaus&format=json. http://docs.ceph.com/docs/master/radosgw/adminops/#get-user-info says "If no
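For what it's worth, a 403 on the metadata endpoints is often just missing admin caps on the user the REST client authenticates as. A sketch, with uid=admin as a placeholder:

    # grant the REST admin user read access to user metadata
    radosgw-admin caps add --uid=admin --caps="users=read; metadata=read"
    # then list all users via the admin API:
    # GET /admin/metadata/user   (rather than /admin/user, which wants a uid)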

Re: [ceph-users] Potential OSD deadlock?

2015-10-08 Thread Dzianis Kahanovich
I have probably a similar situation on latest hammer & 4.1+ kernels on spinning OSDs (journal - leased partition on same HDD): eventual slow requests, etc. Try: 1) even on leased partition journal - "journal aio = false"; 2) single-queue "noop" scheduler (OSDs); 3) reduce nr_requests to 32 (OSDs);
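For anyone wanting to try these, a sketch of how the three settings might be applied; device names are hypothetical and the ceph.conf change needs an OSD restart:

    # 1) in ceph.conf, [osd] section:
    #    journal aio = false
    # 2) single-queue noop scheduler on the OSD data disks
    echo noop > /sys/block/sdb/queue/scheduler
    # 3) shrink the request queue
    echo 32 > /sys/block/sdb/queue/nr_requests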

Re: [ceph-users] O_DIRECT on deep-scrub read

2015-10-08 Thread Lionel Bouton
On 07/10/2015 13:44, Paweł Sadowski wrote: > Hi, > > Can anyone tell if deep scrub is done using O_DIRECT flag or not? I'm > not able to verify that in source code. > > If not would it be possible to add such feature (maybe config option) to > help keeping Linux page cache in better shape?

Re: [ceph-users] "stray" objects in empty cephfs data pool

2015-10-08 Thread John Spray
On Thu, Oct 8, 2015 at 11:41 AM, Burkhard Linke wrote: > Hi John, > > On 10/08/2015 12:05 PM, John Spray wrote: >> >> On Thu, Oct 8, 2015 at 10:21 AM, Burkhard Linke >> wrote: >>> >>> Hi, > >

Re: [ceph-users] "stray" objects in empty cephfs data pool

2015-10-08 Thread John Spray
On Thu, Oct 8, 2015 at 10:21 AM, Burkhard Linke wrote: > Hi, > > I've moved all files from a CephFS data pool (EC pool with frontend cache > tier) in order to remove the pool completely. > > Some objects are left in the pools ('ceph df' output of

Re: [ceph-users] "stray" objects in empty cephfs data pool

2015-10-08 Thread Burkhard Linke
Hi John, On 10/08/2015 12:05 PM, John Spray wrote: On Thu, Oct 8, 2015 at 10:21 AM, Burkhard Linke wrote: Hi, *snipsnap* I've moved all files from a CephFS data pool (EC pool with frontend cache tier) in order to remove the pool completely.

Re: [ceph-users] "stray" objects in empty cephfs data pool

2015-10-08 Thread Burkhard Linke
Hi John, On 10/08/2015 01:03 PM, John Spray wrote: On Thu, Oct 8, 2015 at 11:41 AM, Burkhard Linke wrote: *snipsnap* Thanks for the fast reply. During the transfer of all files from the EC pool to a standard replicated pool I've copied the

Re: [ceph-users] How to improve 'rbd ls [pool]' response time

2015-10-08 Thread Wido den Hollander
On 10/08/2015 10:46 AM, wd_hw...@wistron.com wrote: > Hi, all: > If the Ceph cluster health status is HEALTH_OK, the execution time of 'sudo > rbd ls rbd' is very short, like the following results. > $ time sudo rbd ls rbd > real 0m0.096s > user 0m0.014s > sys 0m0.028s > > But if

Re: [ceph-users] How to improve 'rbd ls [pool]' response time

2015-10-08 Thread WD_Hwang
Hi Wido: According to your reply, if I add/remove OSDs from the Ceph cluster, I have to wait until all PG data movement is completed; then the 'rbd ls' operation should work well. Is there any way to speed up the PG movement when adding/removing OSDs? Thanks a lot. Best Regards, WD -Original

Re: [ceph-users] How to improve 'rbd ls [pool]' response time

2015-10-08 Thread Wido den Hollander
On 10/08/2015 04:28 PM, wd_hw...@wistron.com wrote: > Hi Wido: > According to your reply, if I add/remove OSDs from the Ceph cluster, I have to > wait until all PG data movement is completed. > Then the 'rbd ls' operation should work well. > Is there any way to speed up the PG movement when adding/removing OSDs
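For the archives: backfill and recovery throughput is tunable at runtime, at the cost of extra load on client I/O. A hedged sketch using the standard injectable options:

    # allow more parallel backfills and recovery ops per OSD
    ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'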

Re: [ceph-users] Ceph-deploy error

2015-10-08 Thread Ken Dreyer
This issue with the conflicts between Firefly and EPEL is tracked at http://tracker.ceph.com/issues/11104 On Sun, Aug 30, 2015 at 4:11 PM, pavana bhat wrote: > In case someone else runs into the same issue in future: > > I came out of this issue by installing
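The workaround usually suggested for that ticket is to make the Ceph repository win over EPEL with yum priorities; the repo file name below may differ per install:

    yum install -y yum-plugin-priorities
    # then add to each section of /etc/yum.repos.d/ceph.repo:
    #   priority=1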

Re: [ceph-users] Annoying libust warning on ceph reload

2015-10-08 Thread Ken Dreyer
On Wed, Sep 30, 2015 at 7:46 PM, Goncalo Borges wrote: > - Each time logrotate is executed, we receive a daily notice with the > message > > libust[8241/8241]: Warning: HOME environment variable not set. Disabling > LTTng-UST per-user tracing. (in setup_local_apps()
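The warning appears because logrotate runs the reload without HOME in its environment. A hedged workaround, assuming the stock sysvinit logrotate file, is to provide HOME inside the postrotate script:

    # /etc/logrotate.d/ceph, inside postrotate:
    #   HOME=/root /etc/init.d/ceph reload >/dev/null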

Re: [ceph-users] CephFS "corruption" -- Nulled bytes

2015-10-08 Thread Lincoln Bryant
Hi Sage, Will this patch be in 0.94.4? We've got the same problem here. -Lincoln > On Oct 8, 2015, at 12:11 AM, Sage Weil wrote: > > On Wed, 7 Oct 2015, Adam Tygart wrote: >> Does this patch fix files that have been corrupted in this manner? > > Nope, it'll only prevent it

Re: [ceph-users] Annoying libust warning on ceph reload

2015-10-08 Thread Jason Dillaman
Somewhat related to this, I have a pending pull request to dynamically load LTTng-UST via your ceph.conf or via the admin socket [1]. While it won't solve this particular issue if you have manually enabled tracing, it will prevent these messages in the new default case where tracing isn't

Re: [ceph-users] "stray" objects in empty cephfs data pool

2015-10-08 Thread John Spray
On Thu, Oct 8, 2015 at 7:23 PM, Gregory Farnum wrote: > On Thu, Oct 8, 2015 at 6:29 AM, Burkhard Linke > wrote: >> Hammer 0.94.3 does not support a 'dump cache' mds command. >> 'dump_ops_in_flight' does not list any pending

Re: [ceph-users] CephFS file to rados object mapping

2015-10-08 Thread Gregory Farnum
On Tue, Sep 29, 2015 at 7:24 AM, Andras Pataki wrote: > Thanks, that makes a lot of sense. > One more question about checksumming objects in rados. Our cluster uses > two copies per object, and I have some objects where the checksums mismatch between > the two copies (that
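For reference, the mapping the thread relies on: a CephFS file's objects are named <inode-in-hex>.<stripe-index>, so each replica can be located and checksummed by hand. A sketch with a placeholder inode, and the data pool name assumed:

    # hex object-name prefix for a file with inode 1099511627776
    printf '%x\n' 1099511627776        # -> 10000000000
    # locate the first object and the OSDs holding its copies
    ceph osd map data 10000000000.00000000
    # then md5sum the replica files on each OSD and compare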

Re: [ceph-users] Peering algorithm questions

2015-10-08 Thread Gregory Farnum
On Tue, Sep 29, 2015 at 12:08 AM, Balázs Kossovics wrote: > Hey! > > I'm trying to understand the peering algorithm based on [1] and [2]. There > are things that aren't really clear or I'm not entirely sure if I understood > them correctly, so I'd like to ask some
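When studying peering against a live cluster, the per-PG state machine can be inspected directly, which makes the terminology in [1] and [2] concrete. A sketch with a hypothetical pgid:

    # show up/acting sets, past intervals and the current peering state
    ceph pg 1.2f query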

Re: [ceph-users] How to setup Ceph radosgw to support multi-tenancy?

2015-10-08 Thread Christian Sarrasin
After discovering this excellent blog post [1], I thought that taking advantage of users' "default_placement" feature would be a preferable way to achieve my multi-tenancy requirements (see previous post). Alas I seem to be hitting a snag. Any attempt to create a bucket with a user set up with
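For anyone retracing this, both the user's placement setting and the region map it has to match can be inspected with standard commands; the uid is a placeholder:

    # which placement target is the user pinned to?
    radosgw-admin metadata get user:tenant1
    # does the region actually advertise that target?
    radosgw-admin region get
    radosgw-admin regionmap update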

Re: [ceph-users] How to setup Ceph radosgw to support multi-tenancy?

2015-10-08 Thread Yehuda Sadeh-Weinraub
On Thu, Oct 8, 2015 at 1:55 PM, Christian Sarrasin wrote: > After discovering this excellent blog post [1], I thought that taking > advantage of users' "default_placement" feature would be a preferable way to > achieve my multi-tenancy requirements (see previous post).

Re: [ceph-users] OSD reaching file open limit - known issues?

2015-10-08 Thread Gregory Farnum
On Fri, Sep 25, 2015 at 10:04 AM, Jan Schermer wrote: > I get that, even though I think it should be handled more gracefully. > But is it expected to also lead to consistency issues like this? I don't think it's expected, but obviously we never reproduced it in the lab. Given
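For reference, the OSD's descriptor budget is configurable, and the usual guard is to raise it well above the default; values below are illustrative:

    # ceph.conf, [global] or [osd] section:
    #   max open files = 131072
    # check the limit of a running OSD
    grep 'open files' /proc/$(pidof ceph-osd | awk '{print $1}')/limits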

Re: [ceph-users] How to setup Ceph radosgw to support multi-tenancy?

2015-10-08 Thread Christian Sarrasin
Hi Yehuda, Yes I did run "radosgw-admin regionmap update" and the regionmap appears to know about my custom placement_target. Any other idea? Thanks a lot Christian

radosgw-admin region-map get
{ "regions": [
    { "key": "default",
      "val": { "name": "default",

Re: [ceph-users] How to setup Ceph radosgw to support multi-tenancy?

2015-10-08 Thread Yehuda Sadeh-Weinraub
When you start radosgw, do you explicitly state the name of the region that gateway belongs to? On Thu, Oct 8, 2015 at 2:19 PM, Christian Sarrasin wrote: > Hi Yehuda, > > Yes I did run "radosgw-admin regionmap update" and the regionmap appears to > know about my

Re: [ceph-users] Rados python library missing functions

2015-10-08 Thread Rumen Telbizov
Sounds good. We'll try to work on this. On Thu, Oct 8, 2015 at 5:06 PM, Gregory Farnum wrote: > On Thu, Oct 8, 2015 at 5:01 PM, Rumen Telbizov wrote: > > Hello everyone, > > > > I am very new to Ceph so, please excuse me if this has already been > >

Re: [ceph-users] CephFS file to rados object mapping

2015-10-08 Thread Gregory Farnum
On Thu, Oct 8, 2015 at 6:45 PM, Francois Lafont wrote: > Hi, > > On 08/10/2015 22:25, Gregory Farnum wrote: > >> So that means there's no automated way to guarantee the right copy of >> an object when scrubbing. If you have 3+ copies I'd recommend checking >> each of them and

Re: [ceph-users] Rados python library missing functions

2015-10-08 Thread Gregory Farnum
On Thu, Oct 8, 2015 at 5:01 PM, Rumen Telbizov wrote: > Hello everyone, > > I am very new to Ceph so, please excuse me if this has already been > discussed. I couldn't find anything on the web. > > We are interested in using Ceph and access it directly via its native rados >

[ceph-users] Rados python library missing functions

2015-10-08 Thread Rumen Telbizov
Hello everyone, I am very new to Ceph so, please excuse me if this has already been discussed. I couldn't find anything on the web. We are interested in using Ceph and access it directly via its native rados API with python. We noticed that certain functions that are available in the C library
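Until the bindings grow the missing calls, two common stopgaps are shelling out to the rados CLI or wrapping librados via ctypes. A sketch of the CLI route, with pool and object names as placeholders:

    # many C-level operations are reachable from the CLI, e.g. omap access:
    rados -p rbd listomapvals rbd_directory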

Re: [ceph-users] Potential OSD deadlock?

2015-10-08 Thread Robert LeBlanc
Sage, After trying to bisect this issue (all tests moved the bisect towards Infernalis) and eventually testing the Infernalis branch again, it looks like the problem still exists, although it is handled a tad better in Infernalis. I'm going to test

Re: [ceph-users] "stray" objects in empty cephfs data pool

2015-10-08 Thread Gregory Farnum
On Thu, Oct 8, 2015 at 6:29 AM, Burkhard Linke wrote: > Hammer 0.94.3 does not support a 'dump cache' mds command. > 'dump_ops_in_flight' does not list any pending operations. Is there any > other way to access the cache? "dumpcache", it looks
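For the record, on Hammer the command is spelled as one word and goes through the admin socket on the MDS host; the exact invocation below is a hedged guess and the mds name is a placeholder:

    # dump the MDS cache to a file on the MDS host
    ceph daemon mds.a dumpcache /tmp/mds-cache.txt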