Re: [ceph-users] Questions regarding hardware design of an SSD only cluster

2018-04-23 Thread Mohamad Gebai
On 04/23/2018 09:24 PM, Christian Balzer wrote:
>> If anyone has some ideas/thoughts/pointers, I would be glad to hear them.
>
> RAM, you'll need a lot of it, even more with Bluestore given the current
> caching. I'd say 1GB per TB storage as usual and 1-2GB extra per OSD.
Does that still
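As a rough worked example of that rule of thumb (my own numbers, not from the thread): a node with 10 x 4TB SSD OSDs would want about 10 x 4GB = 40GB for the 1GB-per-TB guideline, plus another 10-20GB for the per-OSD overhead, so plan on something in the 50-60GB range for that box before counting the OS and page cache.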

Re: [ceph-users] Questions regarding hardware design of an SSD only cluster

2018-04-23 Thread Christian Balzer
Hello,
On Mon, 23 Apr 2018 17:43:03 +0200 Florian Florensa wrote:
> Hello everyone,
>
> I am in the process of designing a Ceph cluster that will contain only SSD
> OSDs, and I was wondering how I should size my CPU.
Several threads about this around here, but first things first. Any specifics

[ceph-users] performance tuning

2018-04-23 Thread Frank Ritchie
Hi all, I am building a new cluster that will be using Luminous, FileStore, NVMe journals and 10k SAS drives. Is there a way to estimate proper values for filestore_queue_max_bytes, filestore_queue_max_ops, journal_max_write_bytes and journal_max_write_entries, or is it a matter of testing and trial
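For reference, those options live in the [osd] section of ceph.conf; the values below are placeholders to validate against your own hardware and benchmarks, not recommendations from this thread:

    [osd]
    # cap the data volume and op count queued to the filestore before backpressure
    filestore queue max bytes = 1048576000
    filestore queue max ops = 5000
    # cap how much a single journal write is allowed to batch up
    journal max write bytes = 1048576000
    journal max write entries = 5000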

Re: [ceph-users] OSDs not starting if the cluster name is not ceph

2018-04-23 Thread Robert Stanford
Thanks, yeah we will move away from it. Sadly, this is one of many little- (or non-) documented things that have made adapting Ceph for large-scale use a pain. Hopefully it will be worth it.
On Mon, Apr 23, 2018 at 4:25 PM, David Turner wrote:
> If you can move away

Re: [ceph-users] OSDs not starting if the cluster name is not ceph

2018-04-23 Thread David Turner
If you can move away from having a non-default cluster name, do that. It's honestly worth the hassle if it's early enough in your deployment. Otherwise you'll end up needing to symlink a lot of things to the default ceph name. Back when it was supported, we still needed to have
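One common shape of that symlinking, purely as an illustration (paths assume a cluster that was named "mycluster"; adjust to your own layout):

    # let tools that assume the default cluster name find the config and admin keyring
    ln -s /etc/ceph/mycluster.conf /etc/ceph/ceph.conf
    ln -s /etc/ceph/mycluster.client.admin.keyring /etc/ceph/ceph.client.admin.keyring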

Re: [ceph-users] Fixing bad radosgw index

2018-04-23 Thread Matt Benjamin
Mimic (and higher) contains a new async gc mechanism, which should handle this workload internally. Matt
On Mon, Apr 23, 2018 at 2:55 PM, David Turner wrote:
> When figuring out why space is not freeing up after deleting buckets and
> objects in RGW, look towards the RGW

Re: [ceph-users] What are the current rados gw pools

2018-04-23 Thread David Turner
From my experience, Luminous now only uses a .users pool and not the .users.etc pools. I agree that this could be better documented for configuring RGW. I don't know the full list. Before you go and delete any pools, make sure to create users, put data in the cluster, and confirm that no objects exist
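Before deleting anything, a quick way to check whether a legacy pool still holds objects is with the standard tooling (a generic sketch; substitute the pool names you actually see in "ceph osd pool ls"):

    # per-pool object and byte counts
    ceph df detail
    # spot-check a suspect pool for any remaining objects
    rados -p .users ls | head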

Re: [ceph-users] Fixing bad radosgw index

2018-04-23 Thread David Turner
When figuring out why space is not freeing up after deleting buckets and objects in RGW, look towards the RGW Garbage Collection. This has come up on the ML several times in the past. I am almost finished catching up on a GC of 200 Million objects that was taking up a substantial amount of space
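For anyone digging into the same thing, the GC backlog can be inspected and drained with the standard radosgw-admin commands (a generic illustration, not commands quoted from this thread):

    # list every object currently queued for garbage collection
    radosgw-admin gc list --include-all
    # trigger a collection pass now instead of waiting for the next scheduled run
    radosgw-admin gc process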

Re: [ceph-users] Is it possible to suggest the active MDS to move to a datacenter ?

2018-04-23 Thread David Turner
If your cluster needs both datacenters to operate, then I wouldn't really worry about where your active MDS is running. OTOH, if you're set on having the active MDS be in 1 DC or the other, you could utilize some external scripting to see if the active MDS is in DC #2 while an MDS for DC #1 is in
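A minimal sketch of that kind of external check, assuming MDS daemon names like mds-dc1-a and mds-dc1-b for DC #1 (these names are made up, and the exact output of "ceph mds stat" varies by release, so treat this as a starting point only):

    #!/bin/bash
    # If neither of the DC #1 daemons is the active MDS, fail rank 0 so that a
    # standby (ideally one in DC #1) takes over. Only sensible with standby
    # daemons configured and an otherwise healthy cluster.
    if ! ceph mds stat | grep -qE '(mds-dc1-a|mds-dc1-b)=up:active'; then
        ceph mds fail 0
    fi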

Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?

2018-04-23 Thread David Turner
I believe that Luminous has an ability like this, as you can specify how many objects you anticipate a pool will have when you create it. However, if you're creating pools in Luminous, you're probably using bluestore. For Jewel and before, pre-splitting PGs doesn't help as much as you'd think. As
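The Luminous-era knob being alluded to is the expected_num_objects argument to "ceph osd pool create"; a hedged example follows (pool name, PG counts and the object estimate are invented, and as far as I recall the pre-split only takes effect on FileStore when filestore_merge_threshold is negative):

    # pre-split filestore collections for a pool expected to hold ~100M objects
    ceph osd pool create mybigpool 2048 2048 replicated replicated_rule 100000000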

Re: [ceph-users] etag/hash and content_type fields were concatenated with \u0000

2018-04-23 Thread Syed Armani
I forgot to add some information which is critical here:
On 23/04/18 4:50 PM, Syed Armani wrote:
> Hello folks,
>
> We are running radosgw (Luminous) with the Swift API enabled. We observed
> that after updating an object, the "hash" and "content_type" fields were
> concatenated with "\u0000".

Re: [ceph-users] Help Configuring Replication

2018-04-23 Thread Christopher Meadors
That seems to have worked. Thanks much! And yes, I realize my setup is less than ideal, but I'm planning on migrating from another storage system, and this is the hardware I have to work with. I'll definitely keep your recommendations in mind when I start to grow the cluster. On 04/23/2018

Re: [ceph-users] Help Configuring Replication

2018-04-23 Thread Paul Emmerich
Hi, this doesn't sound like a good idea: two hosts is usually a poor configuration for Ceph. Also, fewer disks on more servers is typically better than lots of disks in few servers. But to answer your question: you could use a crush rule like this:
    min_size 4
    max_size 4
    step take default
    step
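For the archive, here is a guess at what the complete rule (truncated above) would look like for 4 copies laid out as 2 per host across the 2 hosts; the rule name and id are placeholders:

    rule replicated_2hosts_4copies {
        id 1
        type replicated
        min_size 4
        max_size 4
        step take default
        # pick both hosts, then 2 OSDs on each, for 2+2 copies
        step choose firstn 2 type host
        step chooseleaf firstn 2 type osd
        step emit
    }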

[ceph-users] Questions regarding hardware design of an SSD only cluster

2018-04-23 Thread Florian Florensa
Hello everyone, I am in the process of designing a Ceph cluster that will contain only SSD OSDs, and I was wondering how I should size my CPU. The cluster will only be used for block storage. The OSDs will be Samsung PM863 (2TB or 4TB, this will be determined when we settle on the total capacity

[ceph-users] Help Configuring Replication

2018-04-23 Thread Christopher Meadors
I'm starting to get a small Ceph cluster running. I'm to the point where I've created a pool, and stored some test data in it, but I'm having trouble configuring the level of replication that I want. The goal is to have two OSD host nodes, each with 20 OSDs. The target replication will be:

[ceph-users] etag/hash and content_type fields were concatenated with \u0000

2018-04-23 Thread Syed Armani
Hello folks, We are running radosgw (Luminous) with the Swift API enabled. We observed that after updating an object, the "hash" and "content_type" fields were concatenated with "\u0000". Steps to reproduce the issue:
[1] Create a container (Swift nomenclature)
[2] Upload a file with "swift --debug
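For completeness, the reproduction presumably boils down to something like the following with the python-swiftclient CLI (container and object names are placeholders, and auth options are omitted):

    # create a container, upload an object, overwrite it, then inspect its metadata
    swift post testcontainer
    swift upload testcontainer testfile.txt
    swift upload testcontainer testfile.txt
    swift stat testcontainer testfile.txt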

Re: [ceph-users] London Ceph day yesterday

2018-04-23 Thread John Spray
On Fri, Apr 20, 2018 at 9:32 AM, Sean Purdy wrote:
> Just a quick note to say thanks for organising the London Ceph/OpenStack day.
> I got a lot out of it, and it was nice to see the community out in force.
+1, thanks to Wido and the ShapeBlue guys for a great event,

[ceph-users] Nfs-ganesha rgw config for multi tenancy rgw users

2018-04-23 Thread Marc Roos
I have problems exporting a bucket that really does exist. I have tried
    Path = "/test:test3";
    Path = "/test3";
This results in ganesha not starting, with the message
    ExportId=301 Path=/test:test3 FSAL_ERROR=(Invalid object type,0)
If I use path=/ I can mount something, but that is an empty export, and cannot
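In case it helps others hitting the same wall, a rough sketch of a ganesha.conf RGW export block follows; the user id, keys and whether the tenant belongs in the Path or in User_Id are my assumptions, not a confirmed fix:

    EXPORT {
        Export_ID = 301;
        Path = "/test3";
        Pseudo = "/test3";
        Access_Type = RW;
        Protocols = 4;
        FSAL {
            Name = RGW;
            # a tenanted RGW user is normally written as "tenant$user"
            User_Id = "test$testuser";
            Access_Key_Id = "<access key>";
            Secret_Access_Key = "<secret key>";
        }
    }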

Re: [ceph-users] Is there a faster way of copy files to and from a rgw bucket?

2018-04-23 Thread Sean Purdy
On Sat, 21 Apr 2018, Marc Roos said:
> I wondered if there are faster ways to copy files to and from a bucket,
> like eg not having to use the radosgw? Is nfs-ganesha doing this faster
> than s3cmd?
I find the go-based S3 clients (e.g. rclone, minio mc) are a bit faster than the python-based
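As a concrete example of the go-based tooling mentioned (the remote name "myrgw" and the bucket are placeholders, and the remote is assumed to be an s3-type rclone remote pointing at the radosgw endpoint):

    # parallel upload of a local directory into a bucket
    rclone copy ./localdir myrgw:mybucket --transfers 16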

Re: [ceph-users] Is there a faster way of copy files to and from a rgw bucket?

2018-04-23 Thread Lenz Grimmer
Hi Marc,
On 04/21/2018 11:34 AM, Marc Roos wrote:
> I wondered if there are faster ways to copy files to and from a bucket,
> like eg not having to use the radosgw? Is nfs-ganesha doing this faster
> than s3cmd?
I have doubts that putting another layer on top of S3 will make it faster than