[ceph-users] IO-500 now accepting submissions

2017-10-27 Thread John Bent
Hello Ceph community, After BoFs at last year's SC and the last two ISCs, the IO-500 is formalized and is now accepting submissions in preparation for our first IO-500 list at this year's SC BoF: http://sc17.supercomputing.org/presentation/?id=bof108&sess=sess319 The goal of the IO-500 is simple: to

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Russell Glaue
Yes, several have recommended the fio test now. I cannot perform a fio test at this time, because the post referred to directs us to write the fio test data directly to the disk device, e.g. /dev/sdj. I'd have to take an OSD completely out in order to perform the test, and I am not ready to do

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Brian Andrus
I would be interested in seeing the results from the post mentioned by an earlier contributor: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ Test an "old" M500 and a "new" M500 and see if the performance is A) acceptable and B)
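
For reference, the O_DSYNC journal test that blog post describes is roughly the following; note that it writes directly to the raw device and destroys its contents (/dev/sdj is the device named earlier in the thread):

    # single-threaded 4k sync writes -- the pattern a Ceph journal produces
    fio --filename=/dev/sdj --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test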

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Russell Glaue
Yes, all the M500s we use are both journal and OSD, even the older ones. We have a 3-year lifecycle and move older nodes from one Ceph cluster to another. On old systems with 3-year-old M500s, they run as RAID0, and run faster than our current problem system with 1-year-old M500s, ran as

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Brian Andrus
@Russell, are your "older Crucial M500"s being used as journals? In my experience, Crucial M500s are not to be used as a Ceph journal. They make good OSDs with an NVMe in front of them, perhaps, but not much else. Ceph uses O_DSYNC for journal writes and these drives do not handle

Re: [ceph-users] Kernel version recommendation

2017-10-27 Thread David Turner
If you can open an SSH session to the IPMI console and run it inside a screen, you can save the screen's output to a file and look at what was happening on the console when the server locked up. That's how I track kernel panics. On Fri, Oct 27, 2017 at 1:53 PM Bogdan SOLGA
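
A sketch of that setup, assuming ipmitool and a BMC reachable over the network (host, user, and password are placeholders):

    # serial-over-LAN console inside a logging screen session;
    # screen -L appends the console output to ./screenlog.0
    screen -L -S ipmi-console \
        ipmitool -I lanplus -H <bmc-host> -U <user> -P <password> sol activate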

Re: [ceph-users] (no subject)

2017-10-27 Thread David Turner
Your client needs to tell the cluster that the objects have been deleted. '-o discard' is my go-to because I'm lazy and it works well enough for me. If you're in need of more performance, then fstrim is the other option. Nothing on the Ceph side can be configured to know when a client no longer
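
The two options, sketched for a filesystem on a mapped RBD (device and mountpoint are illustrative):

    # inline discard: deletes are passed down to the cluster as they happen
    mount -o discard /dev/rbd0 /mnt/data

    # or reclaim in batches instead, e.g. from cron; cheaper than inline discard
    fstrim -v /mnt/data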

[ceph-users] (no subject)

2017-10-27 Thread nigel davies
Hey all, I am new to Ceph and made a test Ceph cluster that supports S3 and RBDs (the RBDs are exported via iSCSI). I have been looking around and noticed that the space does not decrease when I delete a file, which in turn filled up my cluster's OSDs. I have been doing some reading and see people recommend

Re: [ceph-users] Kernel version recommendation

2017-10-27 Thread Bogdan SOLGA
Thank you very much for the reply, Ilya! The server was completely frozen (hard lockup); we had to restart it via IPMI. We grepped the logs trying to find the culprit, but to no avail. Any hint on how to troubleshoot the (occasional) freezes is highly appreciated. Understood on the kernel
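
One common way to capture panics that never reach the local logs is netconsole, which streams kernel messages to another machine over UDP (it only helps if the kernel manages to emit anything before locking up); a sketch with example addresses, in the form source-port@source-ip/interface,target-port@target-ip/target-mac:

    # on the crashing server
    modprobe netconsole netconsole=6665@192.0.2.10/eth0,6666@192.0.2.20/00:11:22:33:44:55

    # on the receiving machine (192.0.2.20)
    nc -u -l 6666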

Re: [ceph-users] Kernel version recommendation

2017-10-27 Thread Ilya Dryomov
On Fri, Oct 27, 2017 at 6:33 PM, Bogdan SOLGA wrote: > Hello, everyone! > > We have recently upgraded our Ceph pool to the latest Luminous release. On > one of the servers that we used as Ceph clients we had several freeze > issues, which we empirically linked to the

Re: [ceph-users] crush optimize does not work

2017-10-27 Thread David Turner
What does your crush map look like? Also a `ceph df` output. You're optimizing your map for pool #5; if there are other pools with a significant amount of data, then you're going to be off on your cluster balance. A big question for balancing a cluster is how big are your PGs? If your primary
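
A quick way to estimate PG size, assuming the pool name is known (divide the pool's usage by its PG count):

    ceph df                            # per-pool usage
    ceph osd pool get <pool> pg_num    # bytes-used / pg_num ~= average PG size
    ceph osd df tree                   # per-OSD utilization and PG counts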

Re: [ceph-users] Kernel version recommendation

2017-10-27 Thread Bogdan SOLGA
Thanks a lot for the recommendation, David! Are you aware of any drawbacks and/or known issues with using rbd-nbd? On Fri, Oct 27, 2017 at 7:47 PM, David Turner wrote: > rbd-nbd is gaining a lot of followers for mapping rbds. The kernel > driver for RBD's has

Re: [ceph-users] Kernel version recommendation

2017-10-27 Thread David Turner
rbd-nbd is gaining a lot of followers for mapping RBDs. The kernel driver for RBDs has taken a while to support features of current Ceph versions. The nice thing with rbd-nbd is that it has feature parity with the version of Ceph you are using and can enable all of the RBD features you
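
Basic usage, assuming a pool/image named rbd/myimage:

    rbd-nbd map rbd/myimage      # prints the nbd device it attached, e.g. /dev/nbd0
    rbd-nbd unmap /dev/nbd0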

[ceph-users] Kernel version recommendation

2017-10-27 Thread Bogdan SOLGA
Hello, everyone! We have recently upgraded our Ceph pool to the latest Luminous release. On one of the servers that we used as Ceph clients we had several freeze issues, which we empirically linked to the concurrent usage of some I/O operations - writing in an LXD container (backed by Ceph) while

[ceph-users] rbd map hangs when using systemd-automount

2017-10-27 Thread Bjoern Laessig
Hi Cephers, I have multiple RBDs to map and mount, and the bootup hangs forever while running the rbdmap.service script. This was my mount entry in /etc/fstab: /dev/rbd/ptxdev/WORK_CEPH_BLA /ptx/work/ceph/bla xfs  noauto,x-systemd.automount,defaults,noatime,_netdev,logbsize=256k,nofail  0  0 (the
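
For context, rbdmap.service maps whatever is listed in /etc/ceph/rbdmap before the fstab mounts run; the entry matching the device above would look roughly like this (credentials are placeholders):

    # RbdDevice             Parameters
    ptxdev/WORK_CEPH_BLA    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring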

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Russell Glaue
We have older Crucial M500 disks operating without such problems, so I have to believe it is a hardware firmware issue. And it's peculiar seeing performance boost slightly, even 24 hours later, when I stop and then start the OSDs. Our actual writes are low, as most of our Ceph-cluster-based images

Re: [ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-27 Thread Ian Bobbitt
On 10/27/17 8:22 AM, Félix Barbeira wrote: > root@ceph-node01:~# ping6 -c 3 -M do -s 8972 ceph-node02 You're specifying the ICMP payload size. IPv6 has larger headers than IPv4, so you'll need to decrease the payload to fit in a standard jumbo frame. Try 8952 instead of 8972. -- Ian
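
The arithmetic, for a 9000-byte jumbo frame:

    # IPv4: 9000 - 20 (IPv4 header) - 8 (ICMP header)   = 8972 byte payload
    # IPv6: 9000 - 40 (IPv6 header) - 8 (ICMPv6 header) = 8952 byte payload
    ping6 -c 3 -M do -s 8952 ceph-node02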

Re: [ceph-users] Speeding up garbage collection in RGW

2017-10-27 Thread David Turner
I had the exact same error when using --bypass-gc. We too decided to destroy this realm and start it fresh. For us, 95% of the data in this realm is backups for other systems, and they're fine rebuilding it. So our plan is to migrate the 5% of the data to a temporary S3 location and then rebuild
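
One way to stage that 5% to a temporary location (the thread doesn't name a tool; this sketch assumes rclone with remotes already configured for the source and scratch endpoints, and the remote/bucket names are hypothetical):

    rclone sync -v s3src:important-bucket s3tmp:important-bucket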

Re: [ceph-users] Speeding up garbage collection in RGW

2017-10-27 Thread Bryan Stillwell
On Wed, Oct 25, 2017 at 4:02 PM, Yehuda Sadeh-Weinraub wrote: > > On Wed, Oct 25, 2017 at 2:32 PM, Bryan Stillwell > wrote: > > That helps a little bit, but overall the process would take years at this > > rate: > > > > # for i in {1..3600}; do ceph
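
For reference, the usual commands for driving RGW garbage collection by hand:

    radosgw-admin gc list --include-all | head   # pending GC entries, expired or not
    radosgw-admin gc process                     # run a GC cycle now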

Re: [ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-27 Thread Nico Schottelius
Hello, we are running everything IPv6-only. You just need to set up the MTU on your devices (NICs, switches) correctly; nothing Ceph- or IPv6-specific is required. If you are using SLAAC (like we do), you can also announce the MTU via RA. Best, Nico Jack writes: > Or
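
A minimal radvd sketch of announcing the MTU via RA (interface name is an example, and 2001:db8::/64 is the documentation prefix):

    interface eth0 {
        AdvSendAdvert on;
        AdvLinkMTU 9000;            # SLAAC clients pick the MTU up from the RA
        prefix 2001:db8::/64 {
            AdvAutonomous on;
        };
    };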

Re: [ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-27 Thread Jack
Or maybe you reach that IPv4 address directly, and that IPv6 address via a router, somehow. Check your routing table and neighbor table. On 27/10/2017 16:02, Wido den Hollander wrote: > >> On 27 October 2017 at 14:22, Félix Barbeira wrote: >> >> >> Hi, >> >> I'm trying to configure a ceph
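
The checks being suggested, with documentation addresses standing in for the real ones:

    ip -6 route get 2001:db8::2    # which route/gateway the IPv6 path uses
    ip route get 192.0.2.2         # compare with the IPv4 path
    ip -6 neigh show               # NDP neighbor table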

Re: [ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-27 Thread Wido den Hollander
> On 27 October 2017 at 14:22, Félix Barbeira wrote: > > > Hi, > > I'm trying to configure a ceph cluster using IPv6 only but I can't enable > jumbo frames. I made the definition in the > 'interfaces' file and it seems like the value is applied, but when I test it > looks

Re: [ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-27 Thread Ronny Aasen
On 27 Oct 2017 14:22, Félix Barbeira wrote: Hi, I'm trying to configure a ceph cluster using IPv6 only but I can't enable jumbo frames. I made the definition in the 'interfaces' file and it seems like the value is applied, but when I test it, it looks like it only works on IPv4, not IPv6. It

[ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-27 Thread Félix Barbeira
Hi, I'm trying to configure a ceph cluster using IPv6 only, but I can't enable jumbo frames. I made the definition in the 'interfaces' file and it seems like the value is applied, but when I test it, it looks like it only works on IPv4, not IPv6. It works on IPv4: root@ceph-node01:~# ping -c 3 -M do -s
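
A sketch of the Debian-style 'interfaces' stanza, assuming eth0 and an example address; the pre-up line is a common workaround when the mtu keyword alone does not stick for inet6:

    auto eth0
    iface eth0 inet6 static
        address 2001:db8::11/64
        mtu 9000
        pre-up ip link set dev $IFACE mtu 9000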

Re: [ceph-users] How to increase the size of requests written to a ceph image

2017-10-27 Thread Maged Mokhtar
It is quite likely related; things are pointing to bad disks. Probably the best thing is to plan for disk replacement, the sooner the better, as it could get worse. On 2017-10-27 02:22, Christian Wuerdig wrote: > Hm, not necessarily directly related to your performance problem, > however: These

Re: [ceph-users] ceph zstd not for bluestor due to performance reasons

2017-10-27 Thread Haomai Wang
On Fri, Oct 27, 2017 at 5:03 PM, Ragan, Tj (Dr.) wrote: > Hi Haomai, > > According to the documentation, and a brief test to confirm, the lz4 > compression plugin isn’t distributed in the official release. I’ve tried > asking google how to add it back to no avail, so

Re: [ceph-users] ceph zstd not for bluestor due to performance reasons

2017-10-27 Thread Ragan, Tj (Dr.)
Hi Haomai, According to the documentation, and a brief test to confirm, the lz4 compression plugin isn't distributed in the official release. I've tried asking Google how to add it back, to no avail, so how have you added the plugin? Is it simply a matter of putting a symlink in the right
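
If the plugin simply wasn't packaged, a first check is whether the shared object exists in the compressor plugin directory for your distro (the paths below are the usual locations, not verified for every release):

    ls /usr/lib64/ceph/compressor/                  # RHEL/CentOS
    ls /usr/lib/x86_64-linux-gnu/ceph/compressor/   # Debian/Ubuntu
    # the loadable plugin would be libceph_lz4.so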

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-27 Thread GiangCoi Mr
Dear Denes, I searched on the Internet; one way to resolve it: http://m.it610.com/article/5024659.htm 2017-10-27 14:23 GMT+07:00 Denes Dolhay : > Hi, > > According to the documentation it should have been created by "ceph-deploy > new". Maybe a small problem due to Fedora

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-27 Thread Denes Dolhay
Hi, According to the documentation it should have been created by "ceph-deploy new". Maybe a small problem due to Fedora not being on the recommended OS list(?) Either way, there is a document on manual deployment, and it contains the step to generate ceph.client.admin.keyring too:
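
The relevant step from the manual-deployment docs is roughly:

    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
        --gen-key -n client.admin \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *'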