Re: [ceph-users] Ceph expansion/deploy via ansible

2019-06-03 Thread Shawn Iverson
The cephfs_metadata pool makes sense on ssd, but it won't need a lot of space. Chances are that you'll have plenty of ssd storage to spare for other uses. Personally, I'm migrating away from a cache tier and rebuilding my OSDs. I am finding that performance with Bluestore OSDs with the block.db
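As a sketch of the layout mentioned above, this is roughly how a Bluestore OSD with its block.db on flash is created (device paths /dev/sdb and /dev/nvme0n1p1 are hypothetical):

# Bluestore OSD: data on the HDD, block.db on an SSD/NVMe partition
$ ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1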

Re: [ceph-users] Ceph expansion/deploy via ansible

2019-06-03 Thread Daniele Riccucci
Hello, sorry to jump in. I'm looking to expand with SSDs on an HDD cluster. I'm thinking about moving cephfs_metadata to the SSDs (maybe with a device class?) or using them as a cache layer in front of the cluster. Any tips on how to do it with ceph-ansible? I can share the config I currently have
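A minimal sketch of the device-class approach mentioned above, using the CLI (the rule name is illustrative; since Luminous, OSDs usually get an ssd/hdd class automatically):

# Create a replicated CRUSH rule restricted to OSDs of class "ssd"
$ ceph osd crush rule create-replicated replicated_ssd default host ssd

# Point the CephFS metadata pool at that rule; Ceph migrates the data on its own
$ ceph osd pool set cephfs_metadata crush_rule replicated_ssd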

Re: [ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread Sinan Polat
I have deployed, expanded and upgraded multiple Ceph clusters using ceph-ansible. Works great. What information are you looking for? -- Sinan > On 17 Apr. 2019 at 16:24, Francois Lafont wrote: > > Hi, > > +1 for ceph-ansible too. ;) > > -- > François (flaf) >
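For reference, expanding with ceph-ansible is typically just a matter of adding the new host to the inventory and re-running the playbook; a rough sketch (inventory file name and host name are assumptions):

# Add the new host under [osds] in the inventory, then limit the run to it
$ ansible-playbook -i hosts site.yml --limit new-osd-node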

Re: [ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread Francois Lafont
Hi, +1 for ceph-ansible too. ;) -- François (flaf)

Re: [ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread Daniel Gryniewicz
On 4/17/19 4:24 AM, John Molefe wrote: Hi everyone, I currently have a Ceph cluster running on SUSE and I have an expansion project that I will be starting around June. Has anybody here deployed (from scratch) or expanded their Ceph cluster via ansible? I would appreciate it if you'd

[ceph-users] Ceph expansion/deploy via ansible

2019-04-17 Thread John Molefe
Hi everyone, I currently have a Ceph cluster running on SUSE and I have an expansion project that I will be starting around June. Has anybody here deployed (from scratch) or expanded their Ceph cluster via ansible? I would appreciate it if you'd share your experiences, challenges,

Re: [ceph-users] CEPH Expansion

2015-01-25 Thread Georgios Dimitrakakis
Hi Craig! Indeed I had reduced the replicated size to 2 instead of 3, while the minimum size is 1. I hadn't touched the crushmap though. I would like to keep going with the replicated size of 2. Do you think this would be a problem? Please find below the output of the command: $ ceph
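For context, the settings being discussed can be checked per pool like this ("rbd" is just an illustrative pool name):

# Replica count and minimum replicas required to serve I/O
$ ceph osd pool get rbd size
$ ceph osd pool get rbd min_size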

Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Craig Lewis
It depends. There are a lot of variables, like how many nodes and disks you currently have, whether you are using journals on SSD, how much data is already in the cluster, and what the client load is on the cluster. Since you only have 40 GB in the cluster, it shouldn't take long to backfill. You may
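If backfill does end up competing with client traffic, it can be throttled at runtime; a minimal sketch (values are illustrative, and injectargs settings do not survive OSD restarts):

# Limit concurrent backfill and recovery work per OSD
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'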

Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Craig Lewis
You've either modified the crushmap, or changed the pool size to 1. The defaults create 3 replicas on different hosts. What does `ceph osd dump | grep ^pool` output? If the size param is 1, then you reduced the replica count. If the size param is still 3, you must've adjusted the crushmap. Either
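To check both halves of that question, something along these lines works (output will vary per cluster):

# Replica counts per pool
$ ceph osd dump | grep ^pool

# CRUSH rules; the default rule should choose leaves of type "host"
$ ceph osd crush rule dump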

Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Georgios Dimitrakakis
Hi Craig! For the moment I have only one node with 10 OSDs. I want to add a second one with 10 more OSDs. Each OSD in every node is a 4TB SATA drive. No SSD disks! The data are approximately 40GB and I will do my best to have zero or at least very low load during the expansion process.

Re: [ceph-users] CEPH Expansion

2015-01-18 Thread Jiri Kanicky
Hi George, List available disks: $ ceph-deploy disk list {node-name [node-name]...} Add an OSD using osd create: $ ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}] Or you can use the manual steps to prepare and activate disks described at
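Filled in with concrete (hypothetical) values for the new node, the same commands would look like:

# List the disks ceph-deploy can see on the new node
$ ceph-deploy disk list node2

# Create an OSD on /dev/sdb with the journal colocated on the same disk
$ ceph-deploy osd create node2:sdb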

Re: [ceph-users] CEPH Expansion

2015-01-18 Thread Georgios Dimitrakakis
Hi Jiri, thanks for the feedback. My main concern is whether it's better to add each OSD one by one and wait for the cluster to rebalance every time, or to do it all together at once. Furthermore, an estimate of the time to rebalance would be great! Regards, George
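One common way to add many OSDs without rebalancing after every single one is to pause data movement until they are all up; a sketch of that practice (not from this thread):

# Pause data movement while the new OSDs are brought in
$ ceph osd set norebalance
$ ceph osd set nobackfill

# ... add all ten OSDs on the new node ...

# Then let the cluster rebalance in a single pass
$ ceph osd unset nobackfill
$ ceph osd unset norebalance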

[ceph-users] CEPH Expansion

2015-01-16 Thread Georgios Dimitrakakis
Hi all! I would like to expand our Ceph cluster and add a second OSD node. In this node I will have ten 4TB disks dedicated to Ceph. What is the proper way of adding them to the already available Ceph cluster? I guess that the first thing to do is to prepare them with ceph-deploy and mark