Re: [ceph-users] cephfs speed

2018-08-31 Thread David Byte
Are these single-threaded writes that you are referring to? It certainly appears so from the thread, but I thought it would be good to confirm that before digging in further. David Byte Sr. Technology Strategist SCE Enterprise Linux SCE Enterprise Storage Alliances and SUSE Embedded

Re: [ceph-users] cephfs speed

2018-08-31 Thread Joe Comeau
Are you using bluestore OSDs? If so, my thought process is that what we are having an issue with is caching and bluestore; see the thread on bluestore caching "Re: [ceph-users] Best practices for allocating memory to bluestore cache". Before, when we were on Jewel and filestore, we could get a
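For readers following the caching thread referenced above: in Luminous, the BlueStore cache is sized per OSD via a handful of config options. A minimal sketch of the relevant ceph.conf settings (the values below are illustrative, not recommendations — tune against the RAM actually available per OSD):

```ini
[osd]
# Explicit per-OSD cache size in bytes; 0 means fall back to the
# hdd/ssd-specific defaults below.
bluestore_cache_size = 0
bluestore_cache_size_hdd = 1073741824      # 1 GiB per HDD-backed OSD
bluestore_cache_size_ssd = 3221225472      # 3 GiB per SSD-backed OSD
# How the cache is divided between RocksDB key/value data and
# onode metadata; the remainder caches object data.
bluestore_cache_kv_ratio = 0.5
bluestore_cache_meta_ratio = 0.5
```

Note that unlike filestore, which leaned on the kernel page cache, this memory is allocated inside the ceph-osd process and is not reclaimed under memory pressure, so the totals need to be budgeted explicitly.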

Re: [ceph-users] filestore split settings

2018-08-31 Thread David Turner
More important than being able to push those settings further is probably the ability to actually split your subfolders. I've been using variants of this [1] script I created a while back to take care of that. To answer your question, we do run with much larger settings than you're using.
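For context on the settings being discussed: filestore subfolder splitting is controlled by two options, and the split point is derived from both. A hedged sketch (values illustrative only — larger values delay splitting at the cost of bigger directories):

```ini
[osd]
# A negative merge threshold disables merging entirely, a common
# choice to avoid merge/split thrashing.
filestore_merge_threshold = -40
filestore_split_multiple = 8
# Approximate split trigger per subfolder:
#   abs(filestore_merge_threshold) * 16 * filestore_split_multiple
#   = 40 * 16 * 8 = 5120 objects
```

Splitting can also be performed offline per OSD with ceph-objectstore-tool's `apply-layout-settings` operation, which is presumably what a script like [1] automates; doing it offline avoids the latency spikes of online splits.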

Re: [ceph-users] mount cephfs without tiering

2018-08-31 Thread Gregory Farnum
You mean you set up CephFS with a cache tier but want to ignore it? No, that's generally not possible. How would the backup server get consistent data if it's ignoring the cache? (Answer: It can't.) -Greg On Fri, Aug 31, 2018 at 2:35 AM Fyodor Ustinov wrote: > Hi! > > I have cephfs with

Re: [ceph-users] cephfs speed

2018-08-31 Thread Peter Eisch
[replying to myself] I set aside cephfs and created an rbd volume. I get the same splotchy throughput with rbd as I was getting with cephfs. (image attached) So, I'm withdrawing this question here, as it isn't a cephfs-specific issue. #backingout peter Peter Eisch virginpulse.com

Re: [ceph-users] safe to remove leftover bucket index objects

2018-08-31 Thread Dan van der Ster
So it sounds like you tried what I was going to do, and it broke things. Good to know... thanks. In our case, what triggered the extra index objects was a user running PUT /bucketname/ around 20 million times -- this apparently recreates the index objects. -- dan On Thu, Aug 30, 2018 at 7:20 PM

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Jones de Andrade
Hi Eugen. Entirely my misunderstanding, I thought there would be something at boot time (which would certainly not make any sense at all). Sorry. Before stage 3 I ran the commands you suggested on the nodes, and only one gave me the output below: ### # grep -C5 sda4

[ceph-users] Luminous RGW errors at start

2018-08-31 Thread Robert Stanford
I installed a new Luminous cluster. Everything is fine so far. Then I tried to start RGW and got this error: 2018-08-31 15:15:41.998048 7fc350271e80 0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group
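For anyone hitting the same `pool_create returned (34)` error: in Luminous this ERANGE commonly means the pools RGW tries to auto-create would push an OSD past the per-OSD placement-group limit. A hedged sketch of the knobs involved (values illustrative — the right fix depends on cluster size):

```ini
[global]
# Luminous refuses pool creation that would exceed this many PGs
# per OSD (default 200 in 12.2.x).
mon_max_pg_per_osd = 200
# Lowering the default PG count for auto-created pools (RGW creates
# several) is usually safer than raising the limit above.
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
```

Checking `ceph osd pool ls detail` after the failure should show which RGW pools were created before the limit was hit.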

[ceph-users] (no subject)

2018-08-31 Thread puyingdong
help end

Re: [ceph-users] Ceph Object Gateway Server - Hardware Recommendations

2018-08-31 Thread Marc Roos
Ok, from what I have learned so far from my own test environment. (Keep in mind I have had a test setup for only a year.) The s3 rgw does not really require low latency, so you should be able to do fine with an hdd-only cluster. I guess my setup should be sufficient for what you need to have,

[ceph-users] Ceph Object Gateway Server - Hardware Recommendations

2018-08-31 Thread Unni Sathyarajan
Hi ceph users, I am setting up a cluster of S3-like storage. To decide on the server specifications, where can I find the minimum and production-ready hardware recommendations? The following URL does not mention them :

[ceph-users] (no subject)

2018-08-31 Thread Stas
Hello there, I'm trying to reduce recovery impact on client operations and am using mclock for this purpose. I've tested different weights for the queues but didn't see any impact on real performance. ceph version 12.2.8 luminous (stable) Last tested config: "osd_op_queue": "mclock_opclass",
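For reference, the mclock_opclass scheduler in Luminous is configured through per-class reservation/weight/limit options. A hedged sketch of a config that deprioritizes recovery relative to client I/O (values illustrative; in 12.2.x mclock was still experimental and its effect can indeed be muted unless the op queue cutoff is also set):

```ini
[osd]
osd_op_queue = mclock_opclass
# Route more op types through the chosen queue rather than the
# strict-priority fast path.
osd_op_queue_cut_off = high
# Per-class mclock parameters: reservation (min), weight (share),
# limit (max, 0 = unlimited).
osd_op_queue_mclock_client_op_wgt = 500.0
osd_op_queue_mclock_recov_wgt = 1.0
osd_op_queue_mclock_recov_lim = 0.001
```

One known caveat: with the default `osd_op_queue_cut_off = low`, many ops bypass the mclock queue entirely, which could explain seeing no difference between weight settings.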

[ceph-users] mount cephfs without tiering

2018-08-31 Thread Fyodor Ustinov
Hi! I have cephfs with tiering. Does anyone know if it's possible to mount the file system so that the tiering is not used? I.e. I want to mount cephfs on the backup server without tiering and on the samba server with tiering. Is that possible? WBR, Fyodor.

[ceph-users] Can luminous ceph rgw only run with civetweb?

2018-08-31 Thread linghucongsong
In jewel, with the config below, rgw worked well with nginx. But with luminous, nginx looks like it can not work with the rgw. 10.11.3.57, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/ceph/ceph-client.rgw.ceph-11.asok:", host: "10.11.3.57:7480" 2018/08/31 16:38:25
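The log line above shows nginx still talking FastCGI to the RGW admin socket; the FastCGI frontend was deprecated after Jewel, which would explain the breakage. The usual Luminous-era approach is to run RGW's civetweb frontend on localhost and have nginx reverse-proxy to it. A hedged sketch (ports and instance name `client.rgw.ceph-11` taken from the log above; adjust to the actual deployment):

```nginx
# nginx reverse proxy in front of radosgw/civetweb
server {
    listen 80;
    server_name 10.11.3.57;
    client_max_body_size 0;          # don't cap S3 multipart uploads

    location / {
        proxy_pass http://127.0.0.1:7480;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

with a matching ceph.conf fragment such as `rgw_frontends = civetweb port=127.0.0.1:7480` under `[client.rgw.ceph-11]`, so civetweb only listens locally.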

Re: [ceph-users] MDS not start. Timeout??

2018-08-31 Thread John Spray
On Fri, Aug 31, 2018 at 6:11 AM morf...@gmail.com wrote: > > Hello all! > > I had an electric power problem. After this I have 2 incomplete pgs. But all > RBD volumes work. > > But my CephFS does not work. The MDS stops at the "replay" state and MDS-related > commands hang: > > cephfs-journal-tool
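For a stuck-in-replay MDS after power loss, the CephFS disaster-recovery docs describe a journal inspection/repair sequence along these lines. This is a sketch only: these commands are destructive past the first two, must be run with the MDS stopped, and should start with a journal export as a backup.

```
# Inspect the journal for damage (read-only)
cephfs-journal-tool journal inspect

# Export a backup of the journal before touching anything
cephfs-journal-tool journal export backup.bin

# Recover what dentries can be salvaged from the journal into
# the metadata pool, then reset the journal (destructive)
cephfs-journal-tool event recover_dentries summary
cephfs-journal-tool journal reset
```

Whether this is appropriate here depends on why replay hangs; with incomplete PGs in the metadata pool, the hang may simply be the MDS blocking on unreadable objects rather than journal corruption.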

Re: [ceph-users] Ceph-Deploy error on 15/71 stage

2018-08-31 Thread Eugen Block
Hi, I'm not sure if there's a misunderstanding. You need to track the logs during the osd deployment step (stage.3), that is where it fails, and this is where /var/log/messages could be useful. Since the deployment failed you have no systemd-units (ceph-osd@.service) to log anything.

Re: [ceph-users] Best practices for allocating memory to bluestore cache

2018-08-31 Thread Christian Balzer
Hello, until Bluestore gets caching that is a) self-tuning (within definable limits), so that a busy OSD can consume more cache than ones that are idle, AND b) as readily evicted as pagecache in low-memory situations, you're essentially SoL, having the bad choices of increasing