Re: [ceph-users] rados rm objects, still appear in rados ls

2018-09-28 Thread Frank de Bot (lists)
John Spray wrote: > On Fri, Sep 28, 2018 at 2:25 PM Frank (lists) wrote: >> >> Hi, >> >> On my cluster I tried to clear all objects from a pool. I used the >> command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench >> cleanup

[ceph-users] rados rm objects, still appear in rados ls

2018-09-28 Thread Frank (lists)
Hi, On my cluster I tried to clear all objects from a pool. I used the command "rados -p bench ls | xargs rados -p bench rm". (rados -p bench cleanup doesn't clean everything, because there was a lot of other testing going on here). Now 'rados -p bench ls' returns a list of objects, which
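The cleanup pipeline described above can be sketched as follows (a minimal sketch; the pool name `bench` comes from the message, and the `-n 100` batching is an assumption to keep each `rados rm` argument list short):

```shell
# Delete every object in the "bench" pool, 100 names per rados invocation.
# Assumes object names contain no whitespace or newlines.
rados -p bench ls | xargs -n 100 rados -p bench rm
```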

Re: [ceph-users] FreeBSD Initiator with Ceph iscsi

2018-06-28 Thread Frank (lists)
, 2018 at 6:06 PM Frank de Bot (lists) <li...@searchy.net> wrote: Hi, In my test setup I have a ceph iscsi gateway (configured as in http://docs.ceph.com/docs/luminous/rbd/iscsi-overview/ ) I would like to use this with a FreeBSD (11.1) initiator, but I

[ceph-users] FreeBSD Initiator with Ceph iscsi

2018-06-26 Thread Frank de Bot (lists)
Hi, In my test setup I have a ceph iscsi gateway (configured as in http://docs.ceph.com/docs/luminous/rbd/iscsi-overview/ ) I would like to use this with a FreeBSD (11.1) initiator, but I fail to make a working setup in FreeBSD. Is it known if the FreeBSD initiator (with gmultipath) can work
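For reference, a FreeBSD-side sketch of what such a setup might look like (the portal IPs and target IQN are placeholders, not values from the message):

```shell
# Log in to both Ceph iscsi gateway portals from the FreeBSD initiator
iscsictl -A -p 192.0.2.11 -t iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
iscsictl -A -p 192.0.2.12 -t iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
# Combine the resulting da(4) devices into one active/active multipath device
gmultipath label -A cephlun /dev/da0 /dev/da1
```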

Re: [ceph-users] Frequent slow requests

2018-06-19 Thread Frank de Bot (lists)
Frank (lists) wrote: > Hi, > > On a small cluster (3 nodes) I frequently have slow requests. When > dumping the inflight ops from the hanging OSD, it seems it doesn't get a > 'response' for one of the subops. The events always look like: > I've done some further testing

[ceph-users] Frequent slow requests

2018-06-14 Thread Frank (lists)
Hi, On a small cluster (3 nodes) I frequently have slow requests. When dumping the inflight ops from the hanging OSD, it seems it doesn't get a 'response' for one of the subops. The events always look like:     "events": [     {     "time":
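Dumping the in-flight ops as described can be done through the OSD admin socket (a sketch; the OSD id is an example):

```shell
# Show ops currently in flight on the hanging OSD, with their event timelines
ceph daemon osd.10 dump_ops_in_flight
# Recently completed slow ops, useful after the request finally clears
ceph daemon osd.10 dump_historic_ops
```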

Re: [ceph-users] Adding additional disks to the production cluster without performance impacts on the existing

2018-06-12 Thread lists
Hi Pardhiv, Thanks for sharing! MJ On 11-6-2018 22:30, Pardhiv Karri wrote: Hi MJ, Here are the links to the script and config file. Modify the config file as you wish, values in config file can be modified while the script execution is in progress. The script can be run from any monitor

[ceph-users] Expected performane with Ceph iSCSI gateway

2018-05-28 Thread Frank (lists)
Hi, In a test cluster (3 nodes, 24 OSDs) I'm testing the ceph iscsi gateway (with http://docs.ceph.com/docs/master/rbd/iscsi-targets/). For a client I used a separate server; everything runs CentOS 7.5. The iscsi gateways are located on 2 of the existing nodes in the cluster. How does iscsi

Re: [ceph-users] slow requests on a specific osd

2018-01-15 Thread lists
Hi Wes, On 15-1-2018 20:57, Wes Dillingham wrote: My understanding is that the exact same objects would move back to the OSD if weight went 1 -> 0 -> 1 given the same Cluster state and same object names, CRUSH is deterministic so that would be the almost certain result. Ok, thanks! So
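The weight round-trip discussed above would look like this (a sketch; the OSD id and original weight are example values):

```shell
# Drain the OSD by setting its CRUSH weight to 0, wait for backfill,
# then restore the original weight. CRUSH is deterministic, so the
# same PGs (and objects) map back to the same OSD afterwards.
ceph osd crush reweight osd.10 0
# ... wait for backfill to complete / HEALTH_OK ...
ceph osd crush reweight osd.10 3.64
```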

Re: [ceph-users] slow requests on a specific osd

2018-01-15 Thread lists
Hi Wes, On 15-1-2018 20:32, Wes Dillingham wrote: I dont hear a lot of people discuss using xfs_fsr on OSDs and going over the mailing list history it seems to have been brought up very infrequently and never as a suggestion for regular maintenance. Perhaps its not needed. True, it's just

[ceph-users] slow requests on a specific osd

2018-01-15 Thread lists
Hi, On our three-node, 24-OSD ceph 10.2.10 cluster, we have started seeing slow requests on a specific OSD, during the two-hour nightly xfs_fsr run from 05:00 - 07:00. This started after we applied the meltdown patches. The specific osd.10 also has the highest space utilization of all

Re: [ceph-users] why sudden (and brief) HEALTH_ERR

2017-10-04 Thread lists
ok, thanks for the feedback Piotr and Dan! MJ On 4-10-2017 9:38, Dan van der Ster wrote: Since Jewel (AFAIR), when (re)starting OSDs, pg status is reset to "never contacted", resulting in "pgs are stuck inactive for more than 300 seconds" being reported until osds regain connections between

[ceph-users] why sudden (and brief) HEALTH_ERR

2017-10-04 Thread lists
Hi, Yesterday I chowned our /var/lib/ceph to ceph, to completely finalize our jewel migration, and noticed something interesting. After I brought back up the OSDs I just chowned, the system had some recovery to do. During that recovery, the system went to HEALTH_ERR for a short moment: See
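A typical way to run that ownership migration while limiting recovery churn (a sketch assuming systemd hosts; not the exact commands from the message):

```shell
# Per host: stop the OSDs, change ownership for the jewel ceph user, restart.
# noout prevents the down OSDs from being marked out and rebalanced away.
ceph osd set noout
systemctl stop ceph-osd.target
chown -R ceph:ceph /var/lib/ceph
systemctl start ceph-osd.target
ceph osd unset noout
```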

Re: [ceph-users] tunable question

2017-10-03 Thread lists
Thanks Jake, for your extensive reply. :-) MJ On 3-10-2017 15:21, Jake Young wrote: On Tue, Oct 3, 2017 at 8:38 AM lists <li...@merit.unu.edu> wrote: Hi, What would make the decision easier: if we knew that we could easily revert the

Re: [ceph-users] tunable question

2017-10-03 Thread lists
Hi, What would make the decision easier: if we knew that we could easily revert the > "ceph osd crush tunables optimal" once it has begun rebalancing data? Meaning: if we notice that impact is too high, or it will take too long, that we could simply again say > "ceph osd crush tunables
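For context, the commands in question (a sketch; note that "reverting" means switching back to an earlier named profile such as `hammer`, which starts another rebalance rather than cancelling the one in progress):

```shell
# Inspect the current tunables profile before changing anything
ceph osd crush show-tunables
# Apply the optimal profile; this starts a (possibly large) rebalance
ceph osd crush tunables optimal
# "Reverting" is simply a second profile change, e.g. back to hammer;
# it moves data again rather than instantly undoing the first move
ceph osd crush tunables hammer
```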

Re: [ceph-users] clock skew

2017-04-06 Thread lists
Hi Dan, did you mean "we have not yet..."? Yes! That's what I meant. Chrony does a much better job than NTP, at least here :-) MJ ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] clock skew

2017-04-04 Thread lists
Hi John, list, On 1-4-2017 16:18, John Petrini wrote: Just ntp. Just to follow up on this: we have yet experienced a clock skew since we started using chrony. It's just been three days, I know, but still... Perhaps you should try it too, and report if it (seems to) work better for you as well.
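A quick way to check for the skew discussed here (a sketch; `chronyc` assumes chrony is installed as described):

```shell
# Monitors report "clock skew detected" in the cluster health output
ceph status
# Local chrony offset and sync state on each mon host
chronyc tracking
```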

Re: [ceph-users] rados bench output question

2016-09-06 Thread lists
Hi Christian, Thanks for your reply. What SSD model (be precise)? Samsung 480GB PM863 SSD Only one SSD? Yes. With a 5GB partition based journal for each osd. During the 0 MB/sec, there is NO increased cpu usage: it is usually around 15 - 20% for the four ceph-osd processes. Watch your

[ceph-users] rados bench output question

2016-09-06 Thread lists
Hi all, We're pretty new to ceph, but loving it so far. We have a three-node cluster, four 4TB OSDs per node, journal (5GB) on SSD, 10G ethernet cluster network, 64GB ram on the nodes, 12 OSDs in total. We noticed the following output when using rados bench: root@ceph1:~# rados bench -p
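The benchmark commands in question, for reference (a sketch; the 60-second duration and the pool name `bench` are assumptions):

```shell
# Write for 60 seconds into pool "bench", keeping objects for the read test
rados bench -p bench 60 write --no-cleanup
# Sequential read of the objects written above
rados bench -p bench 60 seq
# Remove the benchmark objects afterwards
rados -p bench cleanup
```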

[ceph-users] radula - radosgw(s3) cli tool

2015-09-09 Thread Andrew Bibby (lists)
Hey cephers, Just wanted to briefly announce the release of a radosgw CLI tool that solves some of our team's minor annoyances. Called radula, a nod to the patron animal, this utility acts a lot like s3cmd with some tweaks to meet the expectations of our researchers.

[ceph-users] Replacing a failed OSD disk drive (or replace XFS with BTRFS)

2015-03-21 Thread Datatone Lists
I have been experimenting with Ceph, and have some OSDs with drives containing XFS filesystems which I want to change to BTRFS. (I started with BTRFS, then started again from scratch with XFS [currently recommended] in order to eliminate that as a potential cause of some issues, now with further

Re: [ceph-users] Ceph User Teething Problems

2015-03-05 Thread Datatone Lists
Thank you again to all for the previous prompt and invaluable advice and information. David On Wed, 4 Mar 2015 20:27:51 + Datatone Lists li...@datatone.co.uk wrote: I have been following ceph for a long time. I have yet to put it into service, and I keep coming back as btrfs improves

[ceph-users] Ceph User Teething Problems

2015-03-04 Thread Datatone Lists
I have been following ceph for a long time. I have yet to put it into service, and I keep coming back as btrfs improves and ceph reaches higher version numbers. I am now trying ceph 0.93 and kernel 4.0-rc1. Q1) Is it still considered that btrfs is not robust enough, and that xfs should be used

Re: [ceph-users] radosgw issues

2014-07-08 Thread lists+ceph
Guess I'll try again. I gave this another shot, following the documentation, and still end up with basically a fork bomb rather than the nice ListAllMyBucketsResult output that the docs say I should get. Everything else about the cluster works fine, and I see others talking about the gateway

Re: [ceph-users] radosgw issues

2014-06-30 Thread lists+ceph
On 2014-06-16 13:16, lists+c...@deksai.com wrote: I've just tried setting up the radosgw on centos6 according to http://ceph.com/docs/master/radosgw/config/ While I can run the admin commands just fine to create users etc., making a simple wget request to the domain I set up returns a 500 due

Re: [ceph-users] radosgw issues

2014-06-16 Thread lists+ceph
On 2014-06-17 07:30, John Wilkins wrote: You followed this installation guide: http://ceph.com/docs/master/install/install-ceph-gateway/ And then you followed this configuration guide: http://ceph.com/docs/master/radosgw/config/ and then you executed: sudo /etc/init.d/ceph-radosgw start

[ceph-users] radosgw issues

2014-06-15 Thread lists+ceph
I've just tried setting up the radosgw on centos6 according to http://ceph.com/docs/master/radosgw/config/ There didn't seem to be an init script in the rpm I installed, so I copied the one from here:

[ceph-users] What exactly is the kernel rbd on osd issue?

2014-06-12 Thread lists+ceph
I remember reading somewhere that the kernel ceph clients (rbd/fs) could not run on the same host as the OSD. I tried finding where I saw that, and could only come up with some irc chat logs. The issue stated there is that there can be some kind of deadlock. Is this true, and if so, would you

[ceph-users] rbd: add failed: (34) Numerical result out of range

2014-06-09 Thread lists+ceph
I was building a small test cluster and noticed a difference with trying to rbd map depending on whether the cluster was built using fedora or CentOS. When I used CentOS osds, and tried to rbd map from arch linux or fedora, I would get rbd: add failed: (34) Numerical result out of range. It