Re: [ceph-users] osds crashing during hit_set_trim and hit_set_remove_all

2017-03-06 Thread kefu chai
On Fri, Mar 3, 2017 at 11:40 PM, Sage Weil wrote:
> On Fri, 3 Mar 2017, Mike Lovell wrote:
>> i started an upgrade process to go from 0.94.7 to 10.2.5 on a production
>> cluster that is using cache tiering. this cluster has 3 monitors, 28 storage
>> nodes, around 370 osds. the
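
For anyone tracing a similar crash, the hit set configuration of a cache pool can be inspected with plain pool queries; the pool name "hot-pool" below is a placeholder, not one from the thread:

    ceph osd pool get hot-pool hit_set_type     # e.g. bloom
    ceph osd pool get hot-pool hit_set_count    # number of hit sets kept
    ceph osd pool get hot-pool hit_set_period   # seconds covered per hit set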

Re: [ceph-users] hammer to jewel upgrade experiences? cache tier experience?

2017-03-06 Thread Christian Balzer
On Mon, 6 Mar 2017 19:57:11 -0700 Mike Lovell wrote:
> has anyone on the list done an upgrade from hammer (something later than
> 0.94.6) to jewel with a cache tier configured? i tried doing one last week
> and had a hiccup with it. i'm curious if others have been able to
> successfully do the

[ceph-users] hammer to jewel upgrade experiences? cache tier experience?

2017-03-06 Thread Mike Lovell
has anyone on the list done an upgrade from hammer (something later than 0.94.6) to jewel with a cache tier configured? i tried doing one last week and had a hiccup with it. i'm curious if others have been able to successfully do the upgrade and, if so, did they take any extra steps related to the
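
One low-risk sanity check before attempting such an upgrade (a sketch, not an official procedure) is to record the tiering setup so it can be compared after each step:

    # pool lines show tiers, tier_of, read_tier, write_tier and cache_mode
    ceph osd dump | grep -E 'tier|cache_mode'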

Re: [ceph-users] Mix HDDs and SSDs together

2017-03-06 Thread Christian Balzer
Hello,

On Mon, 6 Mar 2017 16:06:51 +0700 Vy Nguyen Tan wrote:
> Hi Jiajia zhong,
>
> I'm using mixed SSDs and HDDs on the same node, and I did it following
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/;
> I don't get any problems when running SSDs and HDDs on

Re: [ceph-users] A Jewel in the rough? (cache tier bugs and documentation omissions)

2017-03-06 Thread Christian Balzer
On Tue, 7 Mar 2017 01:44:53 + John Spray wrote:
> On Tue, Mar 7, 2017 at 12:28 AM, Christian Balzer wrote:
> >
> > Hello,
> >
> > It's now 10 months after this thread:
> >
> > http://www.spinics.net/lists/ceph-users/msg27497.html (plus next message)
> >
> > and we're at

[ceph-users] Erasure Code Library Symbols

2017-03-06 Thread Garg, Pankaj
Hi, I'm building Ceph 10.2.5 and doing some benchmarking with Erasure Coding. However I notice that perf can't find any symbols in Erasure Coding libraries. It seems those have been stripped, whereas most other stuff has the symbols intact. How can I build with symbols or make sure they don't
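
One possible route, assuming the jewel autotools build; the flags are illustrative rather than the project's official build recipe:

    # build from source with debug info and frame pointers kept for perf
    ./autogen.sh
    ./configure CFLAGS="-g -O2 -fno-omit-frame-pointer" CXXFLAGS="-g -O2 -fno-omit-frame-pointer"
    make -j"$(nproc)"

    # or, when building Debian packages, keep the binaries unstripped
    DEB_BUILD_OPTIONS="nostrip" dpkg-buildpackage -us -uc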

Re: [ceph-users] A Jewel in the rough? (cache tier bugs and documentation omissions)

2017-03-06 Thread John Spray
On Tue, Mar 7, 2017 at 12:28 AM, Christian Balzer wrote:
>
> Hello,
>
> It's now 10 months after this thread:
>
> http://www.spinics.net/lists/ceph-users/msg27497.html (plus next message)
>
> and we're at the fifth iteration of Jewel and still
> osd_tier_promote_max_objects_sec

[ceph-users] A Jewel in the rough? (cache tier bugs and documentation omissions)

2017-03-06 Thread Christian Balzer
Hello,

It's now 10 months after this thread:

http://www.spinics.net/lists/ceph-users/msg27497.html (plus next message)

and we're at the fifth iteration of Jewel and still osd_tier_promote_max_objects_sec and osd_tier_promote_max_bytes_sec are neither documented (master or jewel), nor
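
For reference, both options can be changed at runtime or via ceph.conf; the values below are placeholders to show the syntax, not recommendations:

    ceph tell osd.* injectargs '--osd_tier_promote_max_objects_sec 200 --osd_tier_promote_max_bytes_sec 10485760'

    # or persistently, in the [osd] section of ceph.conf:
    #   osd_tier_promote_max_objects_sec = 200
    #   osd_tier_promote_max_bytes_sec = 10485760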

Re: [ceph-users] ceph/hammer - debian7/wheezy repository doesnt work correctly

2017-03-06 Thread Smart Weblications GmbH - Florian Wiessner
On 28.02.2017 09:48, linux...@boku.ac.at wrote:
> Hi,
>
> actually i can't install hammer on wheezy:
>
> ~# cat /etc/apt/sources.list.d/ceph.list
> deb http://download.ceph.com/debian-hammer/ wheezy main
>
> ~# cat /etc/issue
> Debian GNU/Linux 7 \n \l
>
> ~# apt-cache search

[ceph-users] Outages Next Week

2017-03-06 Thread Patrick McGarry
Hey cephers,

Just as a heads up, there may be some temporary outages next week (13-16 Mar) of git.ceph.com and drop.ceph.com as we migrate some infrastructure. Please plan accordingly.

If you have any questions, please feel free to reach out to me in the meantime. Thanks.

--
Best Regards,

Re: [ceph-users] radosgw. Strange behavior in 2 zone configuration

2017-03-06 Thread Casey Bodley
On 03/03/2017 07:40 AM, K K wrote:

Hello, all!

I have successfully created a 2-zone cluster (se and se2). But my radosgw machines are sending many GET /admin/log requests to each other after putting 10k items into the cluster via radosgw. It looks like:

2017-03-03 17:31:17.897872 7f21b9083700 1
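
The /admin/log polling is part of jewel's multisite sync machinery; one way to check whether the two zones are actually converging (zone name taken from the post) would be:

    radosgw-admin sync status --rgw-zone=se2    # run on the secondary zone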

[ceph-users] Basic file replication and redundancy...

2017-03-06 Thread Erik Brakkee
Hi,

I am new to Ceph and just trying to get to grips with all the different concepts. What I would like to achieve is the following:

1. We have two sites, a main and a backup site. The main site is used actively for production, and the backup site is there for disaster recovery but is also used

[ceph-users] can an OSD affect performance of pool X when blocking/slow request PGs from pool Y?

2017-03-06 Thread Alejandro Comisario
Hi, we have a 7-node Ubuntu Ceph hammer cluster (78 OSDs to be exact). This weekend we've experienced a huge outage of our customers' VMs (located on pool CUSTOMERS, replica size 3) when lots of OSDs started to slow-request/block PGs on pool PRIVATE (replica size 1); basically all PGs blocked
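
A rough way to confirm which pool the blocked PGs belong to, since PG ids are prefixed with the pool number:

    ceph health detail | grep -i blocked
    ceph osd lspools                 # maps pool numbers to names
    ceph pg dump_stuck unclean       # stuck PG ids look like <pool>.<hash>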

Re: [ceph-users] purging strays faster

2017-03-06 Thread John Spray
On Mon, Mar 6, 2017 at 3:03 PM, Daniel Davidson wrote:
> Thanks for the suggestion, however I think my more immediate problem is the
> ms_handle_reset messages. I do not think the mds are getting the updates
> when I send them.

I wouldn't assume that. You can check the
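
One way to verify whether a setting actually reached a running MDS is its admin socket on the MDS host; the daemon name "mds.a" is a placeholder:

    ceph daemon mds.a config get mds_max_purge_files
    ceph daemon mds.a config get mds_max_purge_ops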

Re: [ceph-users] purging strays faster

2017-03-06 Thread Daniel Davidson
Thanks for the suggestion, however I think my more immediate problem is the ms_handle_reset messages. I do not think the mds are getting the updates when I send them.

Dan

On 03/04/2017 09:08 AM, John Spray wrote:
On Fri, Mar 3, 2017 at 9:48 PM, Daniel Davidson

Re: [ceph-users] Current CPU recommendations for storage nodes with multiple HDDs

2017-03-06 Thread Andreas Gerstmayr
2017-03-06 14:08 GMT+01:00 Nick Fisk:
>
> I can happily run 12 disks on a 4 core 3.6GHz Xeon E3. I've never seen
> average CPU usage over 15-20%. The only time CPU hits 100% is for the ~10
> seconds when the OSD boots up. Running Jewel BTW.
>
> So, I would say that during normal

Re: [ceph-users] Experience with 5k RPM/archive HDDs

2017-03-06 Thread RDS
Maxime,

I forgot to mention a couple more things that you can try when using SMR HDDs. You could try to use ext4 with the “lazy” initialization. Another option is specifying the “lazytime” ext4 mount option. Depending on your workload, you could possibly see some big improvements.

Rick

> On Feb
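
A minimal sketch of both suggestions; the device and mount point are placeholders, and lazytime requires a reasonably recent kernel (4.0 or later):

    # defer inode table and journal initialization until after first mount
    mkfs.ext4 -E lazy_itable_init=1,lazy_journal_init=1 /dev/sdX1

    # batch timestamp updates in memory rather than hitting the SMR disk
    mount -o lazytime /dev/sdX1 /var/lib/ceph/osd/ceph-0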

Re: [ceph-users] Current CPU recommendations for storage nodes with multiple HDDs

2017-03-06 Thread Nick Fisk
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Andreas Gerstmayr
> Sent: 06 March 2017 12:58
> To: Ceph Users
> Subject: [ceph-users] Current CPU recommendations for storage nodes with
> multiple HDDs
>
> Hi,

[ceph-users] Current CPU recommendations for storage nodes with multiple HDDs

2017-03-06 Thread Andreas Gerstmayr
Hi, what is the current CPU recommendation for storage nodes with multiple HDDs attached? In the hardware recommendations [1] it says "Therefore, OSDs should have a reasonable amount of processing power (e.g., dual core processors).", but I guess this is for servers with a single OSD. How many

Re: [ceph-users] Mix HDDs and SSDs together

2017-03-06 Thread Vy Nguyen Tan
Hi Jiajia zhong,

I'm using mixed SSDs and HDDs on the same node, and I did it following https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/; I don't get any problems when running SSDs and HDDs on the same node. Now I want to increase Ceph throughput by increasing network
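
For completeness, the linked blog's approach boils down to separate CRUSH roots plus one rule per device type; a sketch assuming roots named "ssd" and "hdd" already exist in the CRUSH map:

    ceph osd crush rule create-simple ssd-rule ssd host
    ceph osd crush rule create-simple hdd-rule hdd host
    # point a pool at the rule (ruleset id via: ceph osd crush rule dump)
    ceph osd pool set my-ssd-pool crush_ruleset <ruleset-id>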

Re: [ceph-users] Unable to start rgw after upgrade from

2017-03-06 Thread Малков Петр Викторович
RGW hammer -> jewel: the following method helped me. After upgrading, recreate the rgw on jewel:

ceph auth del client.rgw.ceph403
rm /var/lib/ceph/radosgw/ceph-rgw.ceph403/
ceph-deploy --overwrite-conf rgw create ceph403
systemctl stop ceph-radosgw.target
systemctl start ceph-radosgw.target
systemctl status