Re: [ceph-users] S3 object notifications

2017-11-28 Thread Yehuda Sadeh-Weinraub
On Wed, Nov 29, 2017 at 12:43 AM, wrote: > Hi Yehuda. > > Are there any examples (doc's, blog posts, ...): > - how to use that "framework" and especially for the "callbacks" There's a minimal sync module implementation that does nothing other than write a debug log for each sync event: https://

Re: [ceph-users] Cephfs Hadoop Plugin and CEPH integration

2017-11-28 Thread Orit Wasserman
On Tue, Nov 28, 2017 at 7:26 PM, Aristeu Gil Alves Jr wrote: > Greg and Donny, > > Thanks for the answers. It helped a lot! > > I just watched the swifta presentation and it looks quite good! > I would highly recommend using s3a and not swifta as it is much more mature and more widely used. Cheers,

Re: [ceph-users] force scrubbing

2017-11-28 Thread David Turner
I personally set max_scrubs to 0 on the cluster and then set it to 1 only on the osds involved in the PG you want to scrub. Setting the cluster to max_scrubs of 1 and then upping the involved osds to 2 might help, but is not a guarantee. On Tue, Nov 28, 2017 at 7:25 PM Gregory Farnum wrote: > O
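A minimal sketch of that approach, using the PG from this thread (5.4c7) and placeholder OSD IDs for its acting set (the real IDs come from `ceph pg map 5.4c7`):

    # stop new scrubs everywhere, then allow them only on the OSDs acting for the target PG
    ceph tell osd.* injectargs '--osd_max_scrubs 0'
    ceph tell osd.12 injectargs '--osd_max_scrubs 1'   # placeholder OSD id
    ceph tell osd.37 injectargs '--osd_max_scrubs 1'   # placeholder OSD id
    ceph pg deep-scrub 5.4c7
    # restore the default of 1 once the deep scrub has run
    ceph tell osd.* injectargs '--osd_max_scrubs 1'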

Re: [ceph-users] force scrubbing

2017-11-28 Thread Gregory Farnum
On Mon, Nov 13, 2017 at 1:01 AM Kenneth Waegeman wrote: > Hi all, > > > Is there a way to force scrub a pg of an erasure coded pool? > > I tried ceph pg deep-scrub 5.4c7, but after a week it still hasn't > scrubbed the pg (last scrub timestamp not changed) > Much to my surprise, it appears you
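One way to confirm whether the deep scrub actually ran is to check the PG's scrub stamps; a sketch using the PG id from the thread:

    ceph pg 5.4c7 query | grep -E 'last_(deep_)?scrub_stamp'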

Re: [ceph-users] Broken upgrade from Hammer to Luminous

2017-11-28 Thread Gregory Farnum
I thought somebody else was going to contact you about this, but in case it didn't happen off-list: This appears to be an embarrassing issue on our end where we alter the disk state despite not being able to start up all the way, and rely on our users to read release notes carefully. ;) :/ At thi

Re: [ceph-users] CephFS - Mounting a second Ceph file system

2017-11-28 Thread Nigel Williams
On 29 November 2017 at 01:51, Daniel Baumann wrote: > On 11/28/17 15:09, Geoffrey Rhodes wrote: >> I'd like to run more than one Ceph file system in the same cluster. Are there opinions on how stable multiple filesystems per single Ceph cluster are in practice? Is anyone using it actively with a s

Re: [ceph-users] S3 object notifications

2017-11-28 Thread ceph.novice
Hi Yehuda. Are there any examples (docs, blog posts, ...): - how to use that "framework" and especially for the "callbacks" - for the latest "Metasearch" feature / usage with an S3 client/tools like CyberDuck, s3cmd, AWSCLI or at least boto3? - i.e. is an external ELK still needed or is this s

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-28 Thread German Anders
Don't know if there are any statistics available really, but I'm running some sysbench tests with mysql before the changes and the idea is to run those tests again after the 'tuning' and see if numbers get better in any way, also I'm gathering numbers from some collectd and statsd collectors running o
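A rough sketch of the kind of before/after run described here, assuming sysbench 1.0's oltp_read_write workload and placeholder MySQL connection details:

    # prepare the test tables once
    sysbench oltp_read_write --mysql-host=db01 --mysql-user=sbtest --mysql-password=secret \
        --tables=16 --table-size=1000000 prepare
    # run the identical workload before and after tuning and compare throughput and latency percentiles
    sysbench oltp_read_write --mysql-host=db01 --mysql-user=sbtest --mysql-password=secret \
        --tables=16 --table-size=1000000 --threads=32 --time=300 run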

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-28 Thread Marc Roos
I was wondering if there are any statistics available that show the performance increase of doing such things? -Original Message- From: German Anders [mailto:gand...@despegar.com] Sent: Tuesday, 28 November 2017 19:34 To: Luis Periquito Cc: ceph-users Subject: Re: [ceph-users] ceph

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Vasu Kulkarni
On Tue, Nov 28, 2017 at 9:22 AM, David Turner wrote: > Isn't marking something as deprecated meaning that there is a better option > that we want you to use and you should switch to it sooner than later? I > don't understand how this is ready to be marked as such if ceph-volume can't > be switched

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-28 Thread German Anders
Thanks a lot Luis, I agree with you regarding the CPUs, but unfortunately those were the best CPU model that we could afford :S For the NUMA part, I managed to pin the OSDs by changing the /usr/lib/systemd/system/ceph-osd@.service file and adding the CPUAffinity list to it. But, this is for ALL th
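A sketch of a per-OSD alternative to editing the packaged unit: a systemd drop-in for a single instance (the OSD id and CPU list below are placeholders, not values from this thread):

    # /etc/systemd/system/ceph-osd@3.service.d/cpuaffinity.conf
    [Service]
    CPUAffinity=0-9,20-29

    # pick up the drop-in and restart only that OSD
    systemctl daemon-reload
    systemctl restart ceph-osd@3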

[ceph-users] Cache tier or RocksDB

2017-11-28 Thread Jorge Pinilla López
Hey! I have a BlueStore setup with 8 HDDs and 1 NVMe of 400GB per host. How would I get more performance: by using the NVMe as equal RocksDB partitions (50GB each), setting it all up as a cache tier OSD for the rest of the HDDs, or doing a mix with less DB space like 15-25GB partitions and the rest for
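For the RocksDB option, a sketch of how one BlueStore OSD could be created with its block.db on an NVMe partition (device names and partition sizes are placeholders):

    # repeat per HDD, pointing each --block.db at its own NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1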

Re: [ceph-users] Cephfs Hadoop Plugin and CEPH integration

2017-11-28 Thread Aristeu Gil Alves Jr
Greg and Donny, Thanks for the answers. It helped a lot! I just watched the swifta presentation and it looks quite good! Due to the lack of updates/development, and the fact that we can choose Spark also, I think maybe swift/swifta with Ceph is a good strategy too. I need to study it more, though. Ca

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread David Turner
Isn't marking something as deprecated meaning that there is a better option that we want you to use and you should switch to it sooner than later? I don't understand how this is ready to be marked as such if ceph-volume can't be switched to for all supported use cases. If ZFS, encryption, FreeBSD,

Re: [ceph-users] S3 object notifications

2017-11-28 Thread Sean Purdy
On Tue, 28 Nov 2017, Yehuda Sadeh-Weinraub said: > rgw has a sync modules framework that allows you to write your own > sync plugins. The system identifies objects changes and triggers I am not a C++ developer though. http://ceph.com/rgw/new-luminous-rgw-metadata-search/ says "Stay tuned in futu

Re: [ceph-users] monitor crash issue

2017-11-28 Thread Joao Eduardo Luis
Hi Zhongyan, On 11/28/2017 02:25 PM, Zhongyan Gu wrote: Hi There, We hit a monitor crash bug in our production clusters while adding more nodes to one of the clusters. Thanks for reporting this. Can you please share the log resulting from the crash? I'll be looking into this. -Joao

[ceph-users] Transparent huge pages

2017-11-28 Thread Nigel Williams
Given that memory is a key resource for Ceph, this advice about switching the Transparent Huge Pages kernel setting to madvise would be worth testing to see if THP is helping or hindering. Article: https://blog.nelhage.com/post/transparent-hugepages/ Discussion: https://news.ycombinator.com/item?id=1
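A sketch of switching THP to madvise for testing (a runtime change only, not persistent across reboots):

    echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
    cat /sys/kernel/mm/transparent_hugepage/enabled   # the active value is shown in brackets
    # for a persistent setting, add transparent_hugepage=madvise to the kernel command line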

Re: [ceph-users] CephFS - Mounting a second Ceph file system

2017-11-28 Thread Daniel Baumann
On 11/28/17 15:09, Geoffrey Rhodes wrote: > I'd like to run more than one Ceph file system in the same cluster. > Can anybody point me in the right direction to explain how to mount the > second file system? if you use the kernel client, you can use the mds_namespace option, i.e.: mount -t ceph
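A sketch of such a mount; the monitor address, mount point, credentials and filesystem name are placeholders:

    mount -t ceph mon1:6789:/ /mnt/cephfs2 \
        -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=cephfs2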

Re: [ceph-users] CephFS - Mounting a second Ceph file system

2017-11-28 Thread Geoffrey Rhodes
Thanks John for the assistance. Geoff On 28 November 2017 at 16:30, John Spray wrote: > On Tue, Nov 28, 2017 at 2:09 PM, Geoffrey Rhodes > wrote: > > Good day, > > > > I'd like to run more than one Ceph file system in the same cluster. > > Can anybody point me in the right direction to explain

Re: [ceph-users] CephFS - Mounting a second Ceph file system

2017-11-28 Thread John Spray
On Tue, Nov 28, 2017 at 2:09 PM, Geoffrey Rhodes wrote: > Good day, > > I'd like to run more than one Ceph file system in the same cluster. > Can anybody point me in the right direction to explain how to mount the > second file system? With the kernel mount you can use "-o mds_namespace=" to spec
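A sketch of the steps for adding a second filesystem on Luminous (pool names and PG counts are placeholders):

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph osd pool create cephfs2_metadata 64
    ceph osd pool create cephfs2_data 256
    ceph fs new cephfs2 cephfs2_metadata cephfs2_data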

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Willem Jan Withagen
On 28-11-2017 13:32, Alfredo Deza wrote: I understand that this would involve a significant effort to fully port over and drop ceph-disk entirely, and I don't think that dropping ceph-disk in Mimic is set in stone (yet). Alfredo, When I expressed my concerns about deprecating ceph-disk, I was

[ceph-users] monitor crash issue

2017-11-28 Thread Zhongyan Gu
Hi There, We hit a monitor crash bug in our production clusters while adding more nodes to one of the clusters. The stack trace looks like below: lc 25431444 0> 2017-11-23 15:41:16.688046 7f93883f2700 -1 error_msg mon/OSDMonitor.cc: In function 'MOSDMap* OSDMonitor::build_incremental(epoch_t, e

Re: [ceph-users] CRUSH rule seems to work fine not for all PGs in erasure coded pools

2017-11-28 Thread Jakub Jaszewski
Hi David, thanks for the quick feedback. Then why were some PGs remapped and some not? # LOOKS LIKE 338 PGs IN ERASURE CODED POOLS HAVE BEEN REMAPPED # I DON'T GET WHY 540 PGs STILL ENCOUNTER active+undersized+degraded STATE root@host01

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Joao Eduardo Luis
On 11/28/2017 12:52 PM, Alfredo Deza wrote: On Tue, Nov 28, 2017 at 7:38 AM, Joao Eduardo Luis wrote: On 11/28/2017 11:54 AM, Alfredo Deza wrote: On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote: On 27 November 2017 at 14:36, Alfredo Deza wrote: For the upcoming Luminous rel

[ceph-users] CephFS - Mounting a second Ceph file system

2017-11-28 Thread Geoffrey Rhodes
Good day, I'd like to run more than one Ceph file system in the same cluster. Can anybody point me in the right direction to explain how to mount the second file system? Thanks OS: Ubuntu 16.04.3 LTS Ceph version: 12.2.1 - Luminous Kind regards Geoffrey Rhodes

Re: [ceph-users] CRUSH rule seems to work fine not for all PGs in erasure coded pools

2017-11-28 Thread David Turner
Your EC profile requires 5 servers to be healthy. When you remove 1 OSD from the cluster, it recovers by moving all of the copies on that OSD to other OSDs in the same host. However when you remove an entire host, it cannot store 5 copies of the data on the 4 remaining servers with your crush rul
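As an illustration (the actual profile in this thread is not shown): a k=3, m=2 profile with a host failure domain places one chunk per host, so it needs 5 healthy hosts:

    ceph osd erasure-code-profile set ec32 k=3 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec32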

Re: [ceph-users] "failed to open ino"

2017-11-28 Thread Jens-U. Mozdzen
Hi David, Quoting David C: On 27 Nov 2017 1:06 p.m., "Jens-U. Mozdzen" wrote: Hi David, Quoting David C: Hi Jens We also see these messages quite frequently, mainly the "replicating dir...". Only seen "failed to open ino" a few times so didn't do any real investigation. Our setup is

[ceph-users] CRUSH rule seems to work fine not for all PGs in erasure coded pools

2017-11-28 Thread Jakub Jaszewski
Hi, I'm trying to understand erasure coded pools and why CRUSH rules seem to work for only part of the PGs in EC pools. Basically what I'm trying to do is to check erasure coded pool recovery behaviour after a single OSD or single HOST failure. I noticed that in case of a HOST failure only part of P

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Andreas Calminder
Thanks! I'll start looking into rebuilding my roles once 12.2.2 is out then. On 28 November 2017 at 13:37, Alfredo Deza wrote: > On Tue, Nov 28, 2017 at 7:22 AM, Andreas Calminder > wrote: >>> For the `simple` sub-command there is no prepare/activate, it is just >>> a way of taking over manageme

Re: [ceph-users] S3 object notifications

2017-11-28 Thread Yehuda Sadeh-Weinraub
rgw has a sync modules framework that allows you to write your own sync plugins. The system identifies object changes and triggers callbacks that can then act on those changes. For example, the metadata search feature that was added recently is using this to send object metadata into elasticsearc
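For reference, a heavily simplified sketch of wiring the elasticsearch sync module into a multisite setup as its own zone; the zone name, endpoints and shard count are placeholders:

    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=es-archive \
        --endpoints=http://rgw-es:80 --tier-type=elasticsearch
    radosgw-admin zone modify --rgw-zone=es-archive \
        --tier-config=endpoint=http://elastic01:9200,num_shards=10
    radosgw-admin period update --commit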

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Alfredo Deza
On Tue, Nov 28, 2017 at 7:38 AM, Joao Eduardo Luis wrote: > On 11/28/2017 11:54 AM, Alfredo Deza wrote: >> >> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote: >>> >>> On 27 November 2017 at 14:36, Alfredo Deza wrote: For the upcoming Luminous release (12.2.2), ceph

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Joao Eduardo Luis
On 11/28/2017 11:54 AM, Alfredo Deza wrote: On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote: On 27 November 2017 at 14:36, Alfredo Deza wrote: For the upcoming Luminous release (12.2.2), ceph-disk will be officially in 'deprecated' mode (bug fixes only). A large banner with depr

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Alfredo Deza
On Tue, Nov 28, 2017 at 7:22 AM, Andreas Calminder wrote: >> For the `simple` sub-command there is no prepare/activate, it is just >> a way of taking over management of an already deployed OSD. For *new* >> OSDs, yes, we are implying that we are going only with Logical Volumes >> for data devices.
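A sketch of that `simple` takeover flow for one existing ceph-disk OSD (the OSD id and fsid below are placeholders; `scan` prints the real values and writes a JSON descriptor under /etc/ceph/osd/):

    ceph-volume simple scan /var/lib/ceph/osd/ceph-0
    ceph-volume simple activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993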

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Alfredo Deza
On Tue, Nov 28, 2017 at 3:39 AM, Piotr Dałek wrote: > On 17-11-28 09:12 AM, Wido den Hollander wrote: >> >> >>> On 27 November 2017 at 14:36, Alfredo Deza wrote: >>> >>> >>> For the upcoming Luminous release (12.2.2), ceph-disk will be >>> officially in 'deprecated' mode (bug fixes only). A larg

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Wido den Hollander
> On 28 November 2017 at 12:54, Alfredo Deza wrote: > > > On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote: > > > >> On 27 November 2017 at 14:36, Alfredo Deza wrote: > >> > >> > >> For the upcoming Luminous release (12.2.2), ceph-disk will be > >> officially in 'deprecated' mode (

[ceph-users] S3 object notifications

2017-11-28 Thread Sean Purdy
Hi, http://docs.ceph.com/docs/master/radosgw/s3/ says that S3 object notifications are not supported. I'd like something like object notifications so that we can back up new objects in real time, instead of trawling the whole object list for what's changed. Is there anything similar I can use?

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Andreas Calminder
> For the `simple` sub-command there is no prepare/activate, it is just > a way of taking over management of an already deployed OSD. For *new* > OSDs, yes, we are implying that we are going only with Logical Volumes > for data devices. It is a bit more flexible for Journals, block.db, > and block.

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Maged Mokhtar
I tend to agree with Wido. Many of us still rely on ceph-disk and hope to see it live a little longer. Maged On 2017-11-28 13:54, Alfredo Deza wrote: > On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote: > On 27 November 2017 at 14:36, Alfredo Deza wrote: > > For the upcoming Lumin

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Alfredo Deza
On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote: > >> On 27 November 2017 at 14:36, Alfredo Deza wrote: >> >> >> For the upcoming Luminous release (12.2.2), ceph-disk will be >> officially in 'deprecated' mode (bug fixes only). A large banner with >> deprecation information has been ad

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Alfredo Deza
On Tue, Nov 28, 2017 at 1:56 AM, Andreas Calminder wrote: > Hello, > Thanks for the heads-up. As someone who's currently maintaining a > Jewel cluster and are in the process of setting up a shiny new > Luminous cluster and writing Ansible roles in the process to make > setup reproducible. I immedi

Re: [ceph-users] "failed to open ino"

2017-11-28 Thread Jens-U. Mozdzen
Hi, Zitat von "Yan, Zheng" : On Sat, Nov 25, 2017 at 2:27 AM, Jens-U. Mozdzen wrote: Hi all, [...] In the log of the active MDS, we currently see the following two inodes reported over and over again, about every 30 seconds: --- cut here --- 2017-11-24 18:24:16.496397 7fa308cf0700 0 mds.0.ca

Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-28 Thread Luis Periquito
There are a few things I don't like about your machines... If you want latency/IOPS (as you seemingly do) you really want the highest frequency CPUs, even over number of cores. These are not too bad, but not great either. Also you have 2x CPU meaning NUMA. Have you pinned OSDs to NUMA nodes? Ideal
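A quick, purely illustrative way to check the NUMA layout before deciding how to pin:

    lscpu | grep -i numa                          # CPU-to-node mapping
    numactl --hardware                            # node sizes and distances
    cat /sys/class/nvme/nvme0/device/numa_node    # node an NVMe device hangs off (device name is a placeholder)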

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Piotr Dałek
On 17-11-28 09:12 AM, Wido den Hollander wrote: On 27 November 2017 at 14:36, Alfredo Deza wrote: For the upcoming Luminous release (12.2.2), ceph-disk will be officially in 'deprecated' mode (bug fixes only). A large banner with deprecation information has been added, which will try to rai

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Wido den Hollander
> On 27 November 2017 at 14:36, Alfredo Deza wrote: > > > For the upcoming Luminous release (12.2.2), ceph-disk will be > officially in 'deprecated' mode (bug fixes only). A large banner with > deprecation information has been added, which will try to raise > awareness. > As much as I like c