[ceph-users] [luminous][ERR] Error -2 reading object

2017-11-03 Thread shadow_lin
Hi all, I am testing luminous for an EC pool backed rbd [k=8,m=2]. My luminous version is: ceph version 12.2.1-249-g42172a4 (42172a443183ffe6b36e85770e53fe678db293bf) luminous (stable). My cluster had some OSD memory OOM problems, so some OSDs got OOM-killed. The cluster entered recovery
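For context, an EC-backed RBD image with k=8,m=2 on luminous is typically set up along these lines; the profile, pool, and image names below are placeholders, not the poster's:

```
# Hedged sketch; names and sizes are made up.
ceph osd erasure-code-profile set ec82 k=8 m=2 crush-failure-domain=host
ceph osd pool create rbd_ec_data 128 128 erasure ec82
ceph osd pool set rbd_ec_data allow_ec_overwrites true   # required for RBD on EC pools
rbd create --size 100G --data-pool rbd_ec_data rbd/testimage
```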

Re: [ceph-users] RAM requirements for OSDs in Luminous

2017-11-03 Thread David Turner
The Ceph docs are versioned. The link you used is for jewel. Change the jewel in the url to luminous to look at the luminous version of the docs. That said, the documentation regarding RAM recommendations has not changed, but this topic was covered fairly recently on the ML. Here is a link to

[ceph-users] Re: Re: Luminous LTS: `ceph osd crush class create` is gone?

2017-11-03 Thread xie.xingguo
> With the caveat that the "ceph osd crush set-device-class" command only works > on existing OSD's which already have a default assigned class so you cannot > plan/create your classes before > adding some OSD's first. > The "ceph osd crush class create" command could be run without any OSD's
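Concretely, the workflow being described for released luminous reassigns the class on an existing OSD; the OSD id, class, and rule name below are placeholders:

```
ceph osd crush rm-device-class osd.3          # clear the auto-assigned class first
ceph osd crush set-device-class nvme osd.3    # then assign the desired class
ceph osd crush rule create-replicated fast-rule default host nvme
```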

[ceph-users] RAM requirements for OSDs in Luminous

2017-11-03 Thread Kamila Součková
Hello, we are in the process of selecting hardware for a cluster. We are wondering about RAM requirements. We found http://docs.ceph.com/docs/jewel/start/hardware-recommendations/#ram, but we are wondering: 1. Is it up to date? Are these numbers valid for Luminous? 2. How valid is it for large
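For rough planning, the jewel-era hardware recommendations suggest on the order of 1 GB of RAM per 1 TB of storage per OSD daemon; a back-of-the-envelope sketch with invented numbers (BlueStore in luminous additionally caches according to its own settings, so treat this as a floor, not a guarantee):

```shell
# Invented cluster: 12 OSDs of 8 TB each, ~1 GB RAM per TB rule of thumb.
osds=12
tb_per_osd=8
awk -v n="$osds" -v t="$tb_per_osd" \
    'BEGIN { printf "rule-of-thumb RAM per node: %d GB\n", n * t }'
```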

Re: [ceph-users] s3 bucket policys

2017-11-03 Thread Adam C. Emerson
On 03/11/2017, Simon Leinen wrote: [snip] > Is this supported by the Luminous version of RadosGW? Yes! There's a few bugfixes in master that are making their way into Luminous, but Luminous has all the features at present. > (Or even Jewel?) No! > Does this work with Keystone integration, i.e.

Re: [ceph-users] s3 bucket policys

2017-11-03 Thread Simon Leinen
Adam C Emerson writes: > I'll save you, Citizen! I'm Captain Bucketpolicy! Good to know! > So! RGW's bucket policies are currently a subset of what's > demonstrated in > http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html > The big limitations are that we don't support

Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)

2017-11-03 Thread Bassam Tabbara
(sorry for the late response, just catching up on ceph-users) > Probably the main difference is that ceph-helm aims to run Ceph as part of > the container infrastructure. The containers are privileged so they can > interact with hardware where needed (e.g., lvm for dm-crypt) and the > cluster

Re: [ceph-users] CephFS: clients hanging on write with ceph-fuse

2017-11-03 Thread Andras Pataki
I've tested the 12.2.1 fuse client - and it also reproduces the problem, unfortunately. Investigating the code that accesses the file system, it looks like multiple processes from multiple nodes write to the same file concurrently, but to different byte ranges of it. Unfortunately the
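The access pattern described (concurrent writers to disjoint byte ranges of one file) can be mimicked locally; this sketch uses an ordinary local file, not CephFS, purely to illustrate the pattern:

```shell
# Two concurrent writers, disjoint byte ranges, same file (local, illustrative only).
f=/tmp/shared.dat
dd if=/dev/zero of="$f" bs=1M count=2 2>/dev/null
printf 'AAAA' | dd of="$f" bs=1 seek=0       conv=notrunc 2>/dev/null &
printf 'BBBB' | dd of="$f" bs=1 seek=1048576 conv=notrunc 2>/dev/null &
wait
```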

[ceph-users] s3 bucket policys

2017-11-03 Thread nigel davies
Hey all, I am having some problems with S3 ACLs / policies. I want to set up two buckets, bucket_upload and bucket_process, and two users, usr_upload and usr_process. I want to set up ACLs or policies where usr_upload can write to bucket_upload, usr_process can read from bucket_upload, and usr_process can read and
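A hedged sketch of a bucket policy for the setup described: the user and bucket names come from the post, but the exact actions and ARN form assume RGW's AWS-compatible bucket policy support in luminous, and the s3cmd line is only one possible way to apply it:

```shell
# Sketch only: a policy granting usr_upload write and usr_process read
# access on bucket_upload.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploaderCanWrite",
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/usr_upload"]},
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::bucket_upload/*"]
    },
    {
      "Sid": "ProcessorCanRead",
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/usr_process"]},
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucket_upload", "arn:aws:s3:::bucket_upload/*"]
    }
  ]
}
EOF
# s3cmd setpolicy policy.json s3://bucket_upload   # applied against RGW
```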

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Maged Mokhtar
On 2017-11-03 15:59, Wido den Hollander wrote: > Op 3 november 2017 om 14:43 schreef Mark Nelson : > > On 11/03/2017 08:25 AM, Wido den Hollander wrote: > Op 3 november 2017 om 13:33 schreef Mark Nelson : > > On 11/03/2017 02:44 AM, Wido den Hollander

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Willem Jan Withagen
On 3-11-2017 00:09, Nigel Williams wrote: > On 3 November 2017 at 07:45, Martin Overgaard Hansen > wrote: >> I want to bring this subject back in the light and hope someone can provide >> insight regarding the issue, thanks. > Is it possible to make the DB partition (on the
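For reference, DB and WAL placement is chosen at OSD creation time; a sketch with placeholder devices, assuming luminous's ceph-volume syntax:

```
# Hedged sketch; device paths are placeholders.
ceph-volume lvm create --bluestore --data /dev/sdb \
    --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2
```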

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Mark Nelson
On 11/03/2017 08:25 AM, Wido den Hollander wrote: Op 3 november 2017 om 13:33 schreef Mark Nelson : On 11/03/2017 02:44 AM, Wido den Hollander wrote: Op 3 november 2017 om 0:09 schreef Nigel Williams : On 3 November 2017 at 07:45,

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Wido den Hollander
> Op 3 november 2017 om 13:33 schreef Mark Nelson : > > > > > On 11/03/2017 02:44 AM, Wido den Hollander wrote: > > > >> Op 3 november 2017 om 0:09 schreef Nigel Williams > >> : > >> > >> > >> On 3 November 2017 at 07:45, Martin Overgaard

Re: [ceph-users] iSCSI: tcmu-runner can't open images?

2017-11-03 Thread Jason Dillaman
On Fri, Nov 3, 2017 at 9:05 AM, Matthias Leopold <matthias.leop...@meduniwien.ac.at> wrote: > > > Am 2017-11-03 um 02:44 schrieb Jason Dillaman: > >> On Thu, Nov 2, 2017 at 11:34 AM, Matthias Leopold <matthias.leop...@meduniwien.ac.at> wrote: >>

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Mark Nelson
On 11/03/2017 04:08 AM, Jorge Pinilla López wrote: well I haven't found any recommendation either but I think that sometimes the SSD space is being wasted. If someone wanted to write it, you could have bluefs share some of the space on the drive for hot object data and release space as

Re: [ceph-users] iSCSI: tcmu-runner can't open images?

2017-11-03 Thread Matthias Leopold
Am 2017-11-03 um 02:44 schrieb Jason Dillaman: On Thu, Nov 2, 2017 at 11:34 AM, Matthias Leopold wrote: Hi, i'm trying to set up iSCSI gateways for a Ceph luminous cluster using these instructions:

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Mark Nelson
On 11/03/2017 02:44 AM, Wido den Hollander wrote: Op 3 november 2017 om 0:09 schreef Nigel Williams : On 3 November 2017 at 07:45, Martin Overgaard Hansen wrote: I want to bring this subject back in the light and hope someone can provide

Re: [ceph-users] Ceph S3 nginx Proxy

2017-11-03 Thread nigel davies
Thanks for your info Jack, I just got it working by adding proxy_set_header Host $host; and it now works. On Fri, Nov 3, 2017 at 11:32 AM, Jack wrote: > My conf (may not be optimal): > > server { > listen 443 ssl http2; > listen [::]:443 ssl
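For reference, the relevant directives in a minimal proxy block would look roughly like this; the upstream address is a placeholder, and the key point is that rewriting the Host header breaks the S3 request signature, hence the fix above:

```nginx
location / {
    proxy_pass http://127.0.0.1:7480;    # radosgw (civetweb) - placeholder address
    proxy_set_header Host $host;         # preserve Host: the S3 signature covers it
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```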

Re: [ceph-users] Ceph S3 nginx Proxy

2017-11-03 Thread Yoann Moulin
Hello, >> I am trying to set up a ceph cluster with an s3 bucket setup with an >> nginx proxy. >> >> I have the ceph and s3 parts working. :D >> >> when i run my php script through the nginx proxy i get an XML error >> response containing "SignatureDoesNotMatch" >> >> but direct it works fine.

Re: [ceph-users] Ceph RDB with iSCSI multipath

2017-11-03 Thread Jason Dillaman
On Fri, Nov 3, 2017 at 6:55 AM, Ilya Dryomov wrote: > On Fri, Nov 3, 2017 at 2:51 AM, Jason Dillaman > wrote: > > There was a little delay getting things merged in the upstream kernel so > we > > are now hoping for v4.16. You should be able to take a

Re: [ceph-users] Ceph S3 nginx Proxy

2017-11-03 Thread Jack
My conf (may not be optimal): server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name FQDN; ssl_certificate /etc/ssl/certs/FQDN.crt; ssl_certificate_key /etc/ssl/private/FQDN.key; add_header Strict-Transport-Security

[ceph-users] Ceph S3 nginx Proxy

2017-11-03 Thread nigel davies
Hey all, I am trying to set up a ceph cluster with an S3 bucket setup behind an nginx proxy. I have the ceph and S3 parts working. :D When I run my PHP script through the nginx proxy I get an error "SignatureDoesNotMatch", but direct it works fine. Has anyone come across this before and can

[ceph-users] Luminous ceph pool %USED calculation

2017-11-03 Thread Alwin Antreich
Hi, I am confused by the %USED calculation in the output of 'ceph df' in luminous. In the example below the pools show 2.92% "%USED", but my calculation, taken from the source code, gives me 8.28%. On a hammer cluster my calculation gives the same result as the 'ceph df' output. Am I
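One plausible reading of the luminous output computes the pool's %USED against what remains available to that pool; whether this matches the luminous source is exactly the poster's question, and the numbers below are made up:

```shell
# Hypothetical pool numbers; not from the poster's cluster.
used=292       # GB used by the pool
avail=9708     # GB reported as MAX AVAIL for the pool
awk -v u="$used" -v a="$avail" \
    'BEGIN { printf "%%USED = %.2f%%\n", u * 100 / (u + a) }'
```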

Re: [ceph-users] Ceph RDB with iSCSI multipath

2017-11-03 Thread Ilya Dryomov
On Fri, Nov 3, 2017 at 2:51 AM, Jason Dillaman wrote: > There was a little delay getting things merged in the upstream kernel so we > are now hoping for v4.16. You should be able to take a 4.15 rc XYZ kernel I think that should be 4.15 and 4.14-rc respectively ;) > and

[ceph-users] Unexpected multipart upload failure

2017-11-03 Thread Pierre-Louis Garnier
Hi, RGW is returning a 404 when I try to duplicate a file in a bucket. Here is my client version information: Boto3/1.4.4 Python/3.5.2 Linux/4.4.0-97-generic Botocore/1.5.95 $ radosgw --version ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable) This only happens on

Re: [ceph-users] Luminous LTS: `ceph osd crush class create` is gone?

2017-11-03 Thread Caspar Smit
2017-11-03 7:59 GMT+01:00 Brad Hubbard : > On Fri, Nov 3, 2017 at 4:04 PM, Linh Vu wrote: > > Hi all, > > > > > > Back in Luminous Dev and RC, I was able to do this: > > > > > > `ceph osd crush class create myclass` > > This was removed as part of

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Jorge Pinilla López
well I haven't found any recommendation either, but I think that sometimes the SSD space is being wasted. I was thinking about making an OSD from the rest of my SSD space, but it wouldn't scale in case more speed is needed. Another option I asked about was to use bcache or a mix between bcache and small

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Wido den Hollander
> Op 3 november 2017 om 0:09 schreef Nigel Williams > : > > > On 3 November 2017 at 07:45, Martin Overgaard Hansen > wrote: > > I want to bring this subject back in the light and hope someone can provide > > insight regarding the issue, thanks.

Re: [ceph-users] CephFS desync

2017-11-03 Thread Andrey Klimentyev
I am absolutely incorrect, my apologies. caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rwx pool=cephfs_metadata, allow rwx pool=cephfs_data On 3 November 2017 at 10:40, Henrik Korkuc wrote: > On 17-11-03 09:29, Andrey Klimentyev wrote: > > Thanks for a swift
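The corrected caps quoted above correspond to a cephx key created along these lines; the client name and pool names are placeholders matching the quoted caps:

```
ceph auth get-or-create client.cephfs_user \
    mds 'allow rw' \
    mon 'allow r' \
    osd 'allow rwx pool=cephfs_metadata, allow rwx pool=cephfs_data'
```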

Re: [ceph-users] CephFS desync

2017-11-03 Thread Henrik Korkuc
On 17-11-03 09:29, Andrey Klimentyev wrote: Thanks for a swift response. We are using 10.2.10. They all share the same set of permissions (and one key, too). Haven't found anything incriminating in logs, too. caps: [mon] allow r caps: [osd] allow class-read object_prefix rbd_children, allow

Re: [ceph-users] CephFS desync

2017-11-03 Thread Andrey Klimentyev
Thanks for a swift response. We are using 10.2.10. They all share the same set of permissions (and one key, too). Haven't found anything incriminating in logs, too. caps: [mon] allow r caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd On 3 November 2017 at 00:56,

Re: [ceph-users] Luminous LTS: `ceph osd crush class create` is gone?

2017-11-03 Thread Brad Hubbard
On Fri, Nov 3, 2017 at 4:04 PM, Linh Vu wrote: > Hi all, > > > Back in Luminous Dev and RC, I was able to do this: > > > `ceph osd crush class create myclass` This was removed as part of https://github.com/ceph/ceph/pull/16388 It looks like the set-device-class command is

[ceph-users] Luminous LTS: `ceph osd crush class create` is gone?

2017-11-03 Thread Linh Vu
Hi all, Back in Luminous Dev and RC, I was able to do this: `ceph osd crush class create myclass` so I could utilise the new CRUSH device classes feature as described here: http://ceph.com/community/new-luminous-crush-device-classes/ and in use here: