Re: [ceph-users] Ceph re-ip of OSD node

2017-08-30 Thread Jeremy Hanmer
This is simply not true. We run quite a few ceph clusters with rack-level layer-2 domains (thus routing between racks) and everything works great.

On Wed, Aug 30, 2017 at 10:52 AM, David Turner wrote:
> ALL OSDs need to be running the same private network at the same time. ALL
> clients, RGW, OSD
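For context, a minimal sketch of the ceph.conf settings involved in a routed setup (the subnets below are hypothetical); the point being made is that daemons only need IP reachability on these networks, not a shared layer-2 segment:

    [global]
    public network  = 10.1.0.0/16    # client and monitor traffic; routed across racks
    cluster network = 10.2.0.0/16    # OSD replication/backfill traffic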

Re: [ceph-users] OSPF to the host

2016-06-06 Thread Jeremy Hanmer
We do the same thing: OSPF between the ToR switches, BGP to all of the hosts, with each host advertising its own /32 (each has 2 NICs).

On Mon, Jun 6, 2016 at 6:29 AM, Luis Periquito wrote:
> Nick,
>
> TL;DR: works brilliantly :)
>
> Where I work we have all of the ceph nodes (and a lot of other stuff
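As an illustration of the "BGP to the host" pattern described above, a minimal FRR/Quagga bgpd sketch for one ceph node (ASNs, neighbor addresses, and the loopback /32 are made up for the example; a real deployment needs route filtering and, on newer FRR, policy settings):

    router bgp 65101
     bgp router-id 192.0.2.11
     neighbor 10.10.1.1 remote-as 65001    ! ToR switch uplink, NIC 1
     neighbor 10.10.2.1 remote-as 65001    ! ToR switch uplink, NIC 2
     address-family ipv4 unicast
      network 192.0.2.11/32                ! advertise the host's own loopback /32
     exit-address-family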

Re: [ceph-users] Openstack Havana root fs resize don't work

2014-08-06 Thread Jeremy Hanmer
> nces I see that the partition wasn't resized.
> /proc/partitions + fdisk -l show the size of the image partition, not the
> instance partition specified by the flavor.
>
> --- original message ---
> timestamp: Tuesday, August 05, 2014 03:50:55 PM
> from: Jeremy

Re: [ceph-users] Openstack Havana root fs resize don't work

2014-08-05 Thread Jeremy Hanmer
This is *not* a case of that bug. That LP bug refers to an issue with the 'nova resize' command and *not* to an instance resizing its own root filesystem. I can confirm that the latter case works perfectly fine in Havana if you have things configured properly. A few questions: 1) What w
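For reference, the piece that usually has to be in place for an instance to grow its own root filesystem at boot is the growpart/resize hook in cloud-init. A hedged sketch of the relevant cloud.cfg section, assuming a cloud-init based image with the growroot support (e.g. cloud-initramfs-growroot) installed:

    growpart:
      mode: auto
      devices: ['/']
    resize_rootfs: true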

Re: [ceph-users] Hard drives of different sizes.

2014-06-05 Thread Jeremy Hanmer
You'll also want to change the crush weights of your OSDs to reflect the different sizes so that the smaller disks don't get filled up prematurely. See "weighting bucket items" here: http://ceph.com/docs/master/rados/operations/crush-map/

On Thu, Jun 5, 2014 at 10:14 AM, Michael wrote:
> ceph os
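A quick sketch of what that looks like in practice (OSD ids and weights here are hypothetical; the usual convention is roughly 1.0 of crush weight per TB of disk):

    # weight each OSD according to its disk size
    ceph osd crush reweight osd.0 1.0    # 1 TB drive
    ceph osd crush reweight osd.1 3.0    # 3 TB drive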

Re: [ceph-users] RBD does not load at boot

2014-04-01 Thread Jeremy Hanmer
Use /etc/modules, not /etc/rc.modules, if you're using Ubuntu. You're using RedHat config files for a Debian system.

On Tue, Apr 1, 2014 at 1:50 PM, Dan Koren wrote:
> Hi Ivan,
> I am using the repos.
> I don't however see how this could have anything to do with the repos,
> since rbd is install
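For the archives, the Debian/Ubuntu way is a one-line entry in /etc/modules so the rbd kernel module is loaded at boot (/etc/rc.modules is the RedHat-style equivalent):

    # /etc/modules -- kernel modules to load at boot time, one per line
    rbd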

Re: [ceph-users] Largest Production Ceph Cluster

2014-04-01 Thread Jeremy Hanmer
Our (DreamHost's) largest cluster is roughly the same size as yours, ~3PB on just shy of 1100 OSDs currently. The architecture's quite similar too, except we have "separate" 10G front-end and back-end networks with a partial spine-leaf architecture using 40G interconnects. I say "separate" becaus