Re: [ceph-users] Slow requests during ceph osd boot

2015-08-06 Thread Nathan O'Sullivan
I'm seeing the same sort of issue. Any suggestions on how to get Ceph to not start the ceph-osd processes on host boot? It does not seem to be as simple as just disabling the service. Regards, Nathan On 15/07/2015 7:15 PM, Jan Schermer wrote: We have the same problems, we need to start the
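A minimal sketch of one way to keep OSDs from auto-starting, assuming an Upstart-based host (the job name and the noup workaround are assumptions, not something confirmed in this thread):

    # Upstart (e.g. Ubuntu 14.04): mark the OSD job as manual so it is not started at boot
    echo manual | sudo tee /etc/init/ceph-osd-all.override

    # Alternative: let the OSDs start but keep them from being marked up until you are ready
    ceph osd set noup
    # ... bring the OSDs in once the host has settled, then:
    ceph osd unset noup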

Re: [ceph-users] HAproxy for RADOSGW

2015-08-06 Thread Kobi Laredo
Why are you using cookies? Try without and see if it works. Kobi Laredo Cloud Systems Engineer | (408) 409-KOBI On Aug 5, 2015 8:42 AM, "Ray Sun" wrote: > Cephers, > I try to use haproxy as a load balancer for my radosgw, but I always got > 405 not allowed when I run s3cmd md s3://mys3 on my hap
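For context, a cookie-free HAProxy front end for radosgw can be as small as the sketch below (hostnames, addresses and the 7480 civetweb port are assumptions, not Ray's actual setup):

    frontend rgw_frontend
        bind *:80
        mode http
        default_backend rgw_backend

    backend rgw_backend
        mode http
        balance roundrobin
        server rgw1 192.168.0.11:7480 check
        server rgw2 192.168.0.12:7480 check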

Re: [ceph-users] ceph tell not persistent through reboots?

2015-08-06 Thread Steve Dainard
That would make sense. Thanks! On Thu, Aug 6, 2015 at 6:29 PM, Wang, Warren wrote: > Injecting args into the running procs is not meant to be persistent. You'll > need to modify /etc/ceph/ceph.conf for that. > > Warren > > -Original Message- > From: ceph-users [mailto:ceph-users-boun..

Re: [ceph-users] Removing data from SSD takes too long for 4k object

2015-08-06 Thread Christian Balzer
Hello, On Thu, 6 Aug 2015 21:41:00 +0000 Sai Srinath Sundar-SSI wrote: > Hi, > I was using RADOS bench to test on a single-node Ceph cluster with a > dedicated SSD as storage for my OSD. I created a pool to do the same and > filled up my SSD to maximum capacity using RADOS bench with my objec

Re: [ceph-users] Setting up a proper mirror system for Ceph

2015-08-06 Thread 张冬卯
Hi Wido, We would love to provide a Ceph mirror in mainland China and Hong Kong. Hosting a site in mainland China is a bit complicated: you have to register with the Chinese government at http://www.miitbeian.gov.cn, which is entirely in Chinese, and it may take quite a while to prepare the application form. we

[ceph-users] Direct IO tests on RBD device vary significantly

2015-08-06 Thread Steve Dainard
Trying to get an understanding of why direct IO would be so slow on my cluster. Ceph 0.94.1, 1 Gig public network, 10 Gig public network, 10 Gig cluster network, 100 OSDs, 4T disk sizes, 5G SSD journal. As of this morning I had no SSD journal and was finding direct IO was sub-10MB/s, so I decided to ad
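A rough sketch of the kind of direct-IO test being discussed (file path, sizes and fio parameters are examples only):

    # dd with O_DIRECT, bypassing the page cache
    dd if=/dev/zero of=/mnt/rbd0/testfile bs=4M count=256 oflag=direct

    # fio equivalent, if fio is available, for a more controlled run
    fio --name=directwrite --filename=/mnt/rbd0/testfile --rw=write \
        --bs=4M --size=1G --direct=1 --ioengine=libaio --iodepth=16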

Re: [ceph-users] ceph tell not persistent through reboots?

2015-08-06 Thread Wang, Warren
Injecting args into the running procs is not meant to be persistent. You'll need to modify /etc/ceph/ceph.conf for that. Warren -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steve Dainard Sent: Thursday, August 06, 2015 9:16 PM To: ceph-user
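For the settings in this thread, the persistent equivalent would be ceph.conf entries along these lines (a sketch using the option names from the injectargs calls; the [osd] section is the usual place for them):

    [osd]
        osd deep scrub begin hour = 20
        osd deep scrub end hour = 4
        osd deep scrub interval = 1209600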

[ceph-users] ceph tell not persistent through reboots?

2015-08-06 Thread Steve Dainard
Hello, Version 0.94.1 I'm passing settings to the admin socket, i.e.: ceph tell osd.* injectargs '--osd_deep_scrub_begin_hour 20' ceph tell osd.* injectargs '--osd_deep_scrub_end_hour 4' ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600' Then I check to see if they're in the configs now
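One way to check what the running daemons actually have (a sketch; the OSD id and grep pattern are examples) is to read the live config from an OSD's admin socket:

    # on the host that carries osd.0
    ceph daemon osd.0 config show | grep deep_scrub

    # or via the socket path directly
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep deep_scrub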

Re: [ceph-users] migrating cephfs metadata pool from spinning disk to SSD.

2015-08-06 Thread Bob Ababurko
@John, Can you clarify which values would suggest that my metadata pool is too slow? I have added a link that includes values for the "op_active" & "handle_client_request" stats, gathered in a crude fashion, but it should hopefully give enough data to paint a picture of what is happening. http://pasteb

[ceph-users] Removing data from SSD takes too long for 4k object

2015-08-06 Thread Sai Srinath Sundar-SSI
Hi, I was using RADOS bench to test on a single-node Ceph cluster with a dedicated SSD as storage for my OSD. I created a pool to do the same and filled up my SSD to maximum capacity using RADOS bench with my object size as 4k. On removing the pool, I noticed that it seems to take a really lo
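For anyone reproducing this, the run being described is presumably something like the sketch below (pool name, PG count and runtime are made up); filling an SSD with 4k objects leaves a very large number of objects for the pool deletion to remove:

    # fill a pool with 4k objects via rados bench
    ceph osd pool create ssdbench 128
    rados bench -p ssdbench 600 write -b 4096 --no-cleanup

    # deleting the pool then has to remove every one of those objects
    ceph osd pool delete ssdbench ssdbench --yes-i-really-really-mean-it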

Re: [ceph-users] Setting up a proper mirror system for Ceph

2015-08-06 Thread Wido den Hollander
On 08/05/2015 04:48 PM, David Moreau Simard wrote: > Would love to be a part of this Wido, we currently have a mirror at > ceph.mirror.iweb.ca based on the script you provided me a while back. It is > already available over http, rsync, IPv4 and IPv6. > Great! > > The way we currently mirror i
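For anyone else considering hosting a mirror, the core of such a sync job is roughly the sketch below (the rsync endpoint and local path are assumptions, not necessarily what Wido's script does):

    #!/bin/sh
    # run periodically from cron; endpoint and destination are examples
    rsync -avrt --delete rsync://eu.ceph.com/ceph/ /var/www/html/ceph/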

Re: [ceph-users] Warning regarding LTTng while checking status or restarting service

2015-08-06 Thread Josh Durgin
On 08/06/2015 03:10 AM, Daleep Bais wrote: Hi, Whenever I restart or check the logs for an OSD or MON, I get the warning message below. I am running a test cluster of 9 OSDs and 3 MON nodes. [ceph-node1][WARNIN] libust[3549/3549]: Warning: HOME environment variable not set. Disabling LTTng-UST per-

Re: [ceph-users] radosgw + civetweb latency issue on Hammer

2015-08-06 Thread Mark Nelson
Hi Srikanth, Can you make a ticket on tracker.ceph.com for this? We'd like to not lose track of it. Thanks! Mark On 08/05/2015 07:01 PM, Srikanth Madugundi wrote: Hi, After upgrading to Hammer and moving from apache to civetweb, we started seeing high PUT latency in the order of 2 sec for

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-08-06 Thread Z Zhang
Hi Ilya, We just tried the 3.10.83 kernel with more rbd fixes back-ported from higher kernel versions. This time we again tried to run rbd and 3 OSD daemons on the same node, but rbd IO will still hang and the OSD filestore thread will time out and suicide when memory becomes very low under h

Re: [ceph-users] Unable to start libvirt VM when using cache tiering.

2015-08-06 Thread Pieter Koorts
Hi Burkhard, I found my problem and it makes me feel like I need to slap myself awake now. I will let you see my mistake. What I had: client.libvirt caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd, allow rwx pool=ssd. What I have now: client.libvirt caps: [mon] al
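Caps like these are normally applied with `ceph auth caps`; a sketch of the corrected entry might look like the following (the mon cap is cut off above, so 'allow r' is an assumption based on the usual recommendation):

    # grant mon read plus rwx on both the backing pool and the cache pool
    ceph auth caps client.libvirt \
        mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd, allow rwx pool=ssd'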

Re: [ceph-users] mount error: ceph filesystem not supported by the system

2015-08-06 Thread Jiri Kanicky
Hi, I can answer this myself. It was the kernel. After upgrading to the latest Debian Jessie kernel (3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u2 (2015-07-17) x86_64 GNU/Linux), everything started to work as normal. Thanks :) On 6/08/2015 22:38, Jiri Kanicky wrote: Hi, I am trying to mount my CephFS a

[ceph-users] mount error: ceph filesystem not supported by the system

2015-08-06 Thread Jiri Kanicky
Hi, I am trying to mount my CephFS and getting the following message. It was all working previously, but after a power failure I am not able to mount it anymore (Debian Jessie). cephadmin@maverick:/etc/ceph$ sudo mount -t ceph ceph1.allsupp.corp,ceph2.allsupp.corp:6789:/ /mnt/cephdata/ -o nam
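That error generally means the running kernel cannot provide the ceph filesystem type; a quick diagnostic sketch (commands are generic, not specific to this host):

    # is the ceph filesystem type known to this kernel?
    grep ceph /proc/filesystems

    # if not, try loading the module; a failure here points at the kernel/modules
    sudo modprobe ceph && grep ceph /proc/filesystems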

Re: [ceph-users] pg_num docs conflict with Hammer PG count warning

2015-08-06 Thread Abhishek L
On Thu, Aug 6, 2015 at 1:55 PM, Hector Martin wrote: > On 2015-08-06 17:18, Wido den Hollander wrote: >> >> The amount of PGs is cluster wide and not per pool. So if you have 48 >> OSDs the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster wide. >> >> Now, with enough memory you can easily have 100

[ceph-users] Warning regarding LTTng while checking status or restarting service

2015-08-06 Thread Daleep Bais
Hi, Whenever I restart or check the logs for an OSD or MON, I get the warning message below. I am running a test cluster of 9 OSDs and 3 MON nodes. [ceph-node1][WARNIN] libust[3549/3549]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng

Re: [ceph-users] Setting up a proper mirror system for Ceph

2015-08-06 Thread deanraccoon
Hi Wido, we would love to provide such a mirror in China (cn.ceph.com). We are using Ceph heavily in our system and also want to give something back to the community. I am now consulting our IDC operators on how we can do this. I will see what I can do for the Ceph community very soon. Cheers,

Re: [ceph-users] pg_num docs conflict with Hammer PG count warning

2015-08-06 Thread Hector Martin
On 2015-08-06 17:18, Wido den Hollander wrote: The amount of PGs is cluster wide and not per pool. So if you have 48 OSDs the rule of thumb is: 48 * 100 / 3 = 1600 PGs cluster wide. Now, with enough memory you can easily have 100 PGs per OSD, but keep in mind that the PG count is cluster-wide and

Re: [ceph-users] migrating cephfs metadata pool from spinning disk to SSD.

2015-08-06 Thread Bob Ababurko
I should have probably condensed my findings over the course of the day into one post, but I guess that's just not how I'm built. Another data point: I ran `ceph daemon mds.cephmds02 perf dump` in a while loop w/ a 1 second sleep, grepping out the stats John mentioned, and at times (~every 10
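The collection loop described above is presumably something along these lines (MDS name and counter names are taken from the thread; the grep is approximate since perf dump emits JSON):

    # sample the MDS perf counters once a second, on the host running the MDS
    while true; do
        ceph daemon mds.cephmds02 perf dump | grep -E 'op_active|handle_client_request'
        sleep 1
    done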

Re: [ceph-users] pg_num docs conflict with Hammer PG count warning

2015-08-06 Thread Wido den Hollander
On 06-08-15 10:16, Hector Martin wrote: > We have 48 OSDs (on 12 boxes, 4T per OSD) and 4 pools: > - 3 replicated pools (3x) > - 1 RS pool (5+2, size 7) > > The docs say: > http://ceph.com/docs/master/rados/operations/placement-groups/ > "Between 10 and 50 OSDs set pg_num to 4096" > > Which is
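Worked through for this cluster, the two numbers being compared are roughly the following (the warning threshold of ~300 PGs per OSD is the Hammer default, which may differ on other versions):

    rule of thumb, cluster-wide:  48 OSDs * 100 / 3 replicas = 1600  ->  round up to 2048 PGs total
    actual PG copies per OSD:     (3 * 4096 * 3) + (1 * 4096 * 7) = 36864 + 28672 = 65536
                                  65536 / 48 OSDs ~= 1365 PG copies per OSD, well above 300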

[ceph-users] pg_num docs conflict with Hammer PG count warning

2015-08-06 Thread Hector Martin
We have 48 OSDs (on 12 boxes, 4T per OSD) and 4 pools: - 3 replicated pools (3x) - 1 RS pool (5+2, size 7) The docs say: http://ceph.com/docs/master/rados/operations/placement-groups/ "Between 10 and 50 OSDs set pg_num to 4096" Which is what we did when creating those pools. This yields 16384 PG