Re: [ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Diedrich Ehlerding
Gregory Farnum wrote: > You've probably noticed the RGW will create pools if it needs them and > they don't exist. That's why it "needs" the extra monitor capabilities. Yes, I have noticed that - and yes, automatically creating the pools helped a lot in a lab environment to set up my firs...
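If the extra monitor capability is only needed so RGW can create its pools, one workaround is to create them up front and hand the gateway a narrower key. A minimal sketch, assuming the Jewel-era default zone pool names (verify against your own zone and placement configuration):

# Pre-create the RGW pools by hand so the gateway key does not need
# 'mon allow rwx' to create them itself. Pool names and PG counts are
# assumptions; adjust to your zone/placement configuration.
for pool in .rgw.root default.rgw.control default.rgw.data.root \
            default.rgw.gc default.rgw.log \
            default.rgw.buckets.index default.rgw.buckets.data; do
    ceph osd pool create "$pool" 8 8
done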

Re: [ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Gregory Farnum
On Wed, May 31, 2017 at 11:20 PM Diedrich Ehlerding <diedrich.ehlerd...@ts.fujitsu.com> wrote: > Thank you for your response. Yes, as I wrote, the gateway seems to > work with these settings. > > The reason why I am considering the capabilities is: I am trying to > attach an OpenStack environment...

Re: [ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Diedrich Ehlerding
Thank you for your response. Yes, as I wrote, the gateway seems to work with these settings. The reason why I am considering the capabilities is: I am trying to attach an OpenStack environment and a gateway to the same cluster, and I would like to prevent the OpenStack admin from accessing the S3 ga...
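One way to keep the two tenants apart is to give each its own cephx identity with OSD capabilities restricted to its own pools. A rough sketch, with hypothetical client and pool names (not taken from the original post):

# Separate keys with pool-restricted OSD caps, so the OpenStack client
# cannot touch the gateway's pools and vice versa.
ceph auth get-or-create client.openstack-cinder \
    mon 'allow r' \
    osd 'allow rwx pool=volumes, allow rwx pool=images'

ceph auth get-or-create client.rgw.gw1 \
    mon 'allow rw' \
    osd 'allow rwx pool=.rgw.root, allow rwx pool=default.rgw.buckets.data'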

[ceph-users] Question about PGMonitor::waiting_for_finished_proposal

2017-05-31 Thread 许雪寒
Hi, everyone. Recently I have been reading the source code of the Monitor. I found that, in the PGMonitor::prepare_pg_stats() method, a callback C_Stats is put into PGMonitor::waiting_for_finished_proposal. I wonder: if a previous PGMap incremental is in Paxos's propose/accept phase at the moment C_Stats...

Re: [ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Gregory Farnum
I don't work with the gateway, but in general that should work. That said, the RGW also sees all your client data going in, so I'm not sure how much you gain by locking it down. If you're just trying to protect against accidents with the pools, you might give it write access on the monitor; any failu...
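Greg's suggestion of monitor write access (rather than rwx) could look like the following sketch; the client name is a hypothetical example:

# Drop the 'x' bit on the monitor capability while keeping normal OSD access.
ceph auth caps client.rgw.gw1 \
    mon 'allow rw' \
    osd 'allow rwx'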

[ceph-users] Ceph.conf and monitors

2017-05-31 Thread Curt
Hello all, I had a recent issue with Ceph monitors and OSDs when connecting to the second/third monitor. I don't currently have any debug logs to paste, but I wanted to get feedback on my ceph.conf for the monitors. This is the Giant release. Here's the error from monB that stuck out: "osd_map(174373..1743
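For comparison, a minimal sketch of the monitor-related ceph.conf entries worth double-checking; the fsid, hostnames, and addresses below are placeholders, not the poster's values:

# Append (or verify) the monitor membership and addresses in ceph.conf.
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_initial_members = monA, monB, monC
mon_host = 192.168.0.11,192.168.0.12,192.168.0.13
EOF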

Re: [ceph-users] rbd map fails, ceph release jewel

2017-05-31 Thread David Turner
You are trying to use the kernel client to map the RBD in Jewel. Jewel RBDs have features enabled that require you to run kernel 4.9 or newer. You can disable the features that require the newer kernel, but that's not ideal, as those new features are very nice to have. You can use RBD-f...
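A sketch of the feature-disable workaround for pre-4.9 kernels; the pool and image names follow the original post, and the exact features to drop depend on what the kernel rejects (check dmesg and 'rbd info'):

# Disable the Jewel-default features the old kernel client cannot handle.
rbd feature disable pool1/pool1-img1 deep-flatten fast-diff object-map exclusive-lock

# Or create new images with only the kernel-supported 'layering' feature.
rbd create pool1/pool1-img2 --size 10240 --image-feature layering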

[ceph-users] radosgw refuses upload when Content-Type missing from POST policy

2017-05-31 Thread Dave Holland
Hi, I'm trying to get files into radosgw (Ubuntu Ceph package 10.2.3-0ubuntu0.16.04.2) using Fine Uploader (https://github.com/FineUploader/fine-uploader), but I'm running into difficulties in the case where the uploaded file has a filename extension which the browser can't map to a MIME type (or, no...
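For context, a browser-POST policy can use a starts-with condition with an empty prefix to accept any (or an empty) Content-Type; whether radosgw then tolerates the header being absent entirely is exactly the question here. A sketch with placeholder bucket name and expiry:

# Write the (unsigned) policy document; it must be base64-encoded and signed
# before being embedded in the upload form.
cat > policy.json <<'EOF'
{
  "expiration": "2017-06-30T12:00:00.000Z",
  "conditions": [
    {"bucket": "uploads"},
    ["starts-with", "$key", ""],
    ["starts-with", "$Content-Type", ""]
  ]
}
EOF
base64 -w0 policy.json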

Re: [ceph-users] Adding a new node to a small cluster (size = 2)

2017-05-31 Thread David Turner
How full is the cluster before adding the third node? If it's over 65% I would recommend adding 2 new nodes instead of 1. The reason is that if you lose one of the nodes, your cluster will try to backfill back onto the remaining 2 nodes and end up far too full. There is no rule or recommendation a...
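The usual way to answer the "how full" question before adding hardware is sketched below; these are standard commands rather than anything from the original post:

# Overall and per-pool usage, plus per-OSD / per-host utilisation.
ceph df
ceph osd df tree

# Confirm the replica count of the pools involved (pool name is a placeholder).
ceph osd pool get <pool> size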

[ceph-users] Adding a new node to a small cluster (size = 2)

2017-05-31 Thread Kevin Olbrich
Hi! A customer is running a small two-node Ceph cluster with 14 disks each. He has min_size 1 and size 2, and it is only used for backups. If we add a third node with 14 identical disks and keep size = 2, the replicas should be distributed evenly, right? Or is an uneven number of hosts inadvisable...
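A quick way to verify the settings being discussed, assuming a hypothetical pool name of 'backups':

ceph osd pool get backups size       # replica count (2 here)
ceph osd pool get backups min_size   # replicas required before I/O is allowed
ceph osd crush rule dump             # check that replicas are placed per host, not per OSD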

[ceph-users] rbd map fails, ceph release jewel

2017-05-31 Thread Shambhu Rajak
Hi Cephers, I have created a pool and an RBD image, but mapping the image on the Ceph client fails:
ubuntu@shambhucephnode0:~$ sudo rbd map pool1-img1 -p pool1
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed...
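The usual first diagnostics for a failed 'rbd map' look like this; pool and image names follow the post above:

dmesg | tail                 # the kernel normally names the image feature it rejects
rbd info pool1/pool1-img1    # lists which features are enabled on the image
uname -r                     # the running kernel decides which features are supported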

Re: [ceph-users] Luminous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-05-31 Thread Mark Nelson
On 05/31/2017 05:21 AM, nokia ceph wrote: + ceph-devel ..
$ ps -ef | grep 294
ceph 3539720 1 14 08:04 ? 00:16:35 /usr/bin/ceph-osd -f --cluster ceph --id 294 --setuser ceph --setgroup ceph
$ gcore -o coredump-osd 3539720
(gdb) bt
#0 0x7f5ef68f56d5 in pthread_cond_wait@...
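To see where tp_osd_tp is actually stuck, a backtrace of every thread in the core dump is more useful than a single 'bt'; a sketch, reusing the file names from the commands quoted above:

# gcore names the dump <prefix>.<pid>, hence coredump-osd.3539720.
gdb /usr/bin/ceph-osd coredump-osd.3539720 \
    -batch -ex 'thread apply all bt' > osd-294-threads.txt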

Re: [ceph-users] Luminous: bluestore 'tp_osd_tp thread tp_osd_tp' had timed out after 60

2017-05-31 Thread nokia ceph
+ ceph-devel ..
$ ps -ef | grep 294
ceph 3539720 1 14 08:04 ? 00:16:35 /usr/bin/ceph-osd -f --cluster ceph --id 294 --setuser ceph --setgroup ceph
$ gcore -o coredump-osd 3539720
(gdb) bt
#0 0x7f5ef68f56d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 ...
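While the daemon is hung it can also be inspected through its admin socket; a sketch, reusing osd.294 from the post:

ceph daemon osd.294 dump_ops_in_flight   # requests currently blocked inside the OSD
ceph daemon osd.294 dump_historic_ops    # recently completed slow requests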

[ceph-users] ceph client capabilities for the rados gateway

2017-05-31 Thread Diedrich Ehlerding
Hello. The documentation I found proposes creating the Ceph client for a rados gateway with very broad capabilities, namely "mon allow rwx, osd allow rwx". Are there any reasons for these very broad capabilities (allowing this client to access and modify (even remove) all pools, all r...
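For reference, the keyring-creation step the documentation describes looks roughly like the sketch below; the client name and keyring path follow the docs' convention and may differ per installation:

# The broad-capability key that the question refers to.
ceph auth get-or-create client.radosgw.gateway \
    mon 'allow rwx' osd 'allow rwx' \
    -o /etc/ceph/ceph.client.radosgw.keyring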