This is simply not true. We run quite a few Ceph clusters with
rack-level layer-2 domains (and therefore routed traffic between racks) and
everything works great.
On Wed, Aug 30, 2017 at 10:52 AM, David Turner wrote:
> ALL OSDs need to be running the same private network at the same time. ALL
> clients, RGW, OSD
We do the same thing. OSPF between ToR switches, BGP to all of the hosts
with each one advertising its own /32 (each has 2 NICs).
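For reference, a minimal sketch of the per-host side of that setup, in
FRR/Quagga-style syntax (all addresses and AS numbers below are made up
for illustration, not our actual config):

    ! /etc/frr/frr.conf on one storage host (hypothetical addresses/ASNs)
    interface lo
     ip address 10.0.0.11/32
    !
    router bgp 65011
     bgp router-id 10.0.0.11
     ! one session per NIC, one towards each ToR
     neighbor 192.168.1.1 remote-as 65000
     neighbor 192.168.2.1 remote-as 65000
     address-family ipv4 unicast
      ! advertise only this host's /32 loopback
      network 10.0.0.11/32
     exit-address-family

The nice part is that if one NIC (or ToR) dies, the /32 is simply
withdrawn over that path and traffic converges onto the other link.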
On Mon, Jun 6, 2016 at 6:29 AM, Luis Periquito wrote:
> Nick,
>
> TL;DR: works brilliantly :)
>
> Where I work we have all of the ceph nodes (and a lot of other stuff
> nces I see that the partition wasn't resized.
> /proc/partitions + fdisk -l show the size of the image partition, not the
> instance partition specified by the flavor.
>
>
>
> ---
> original message
> timestamp: Tuesday, August 05, 2014 03:50:55 PM
> from: Jeremy
This is *not* a case of that bug. That LP bug is referring to an
issue with the 'nova resize' command and *not* with an instance
resizing its own root filesystem. I can confirm that the latter case
works perfectly fine in Havana if you have things configured properly.
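By "configured properly" I mostly mean that the guest image has cloud-init
with growpart available and enabled. As a rough sketch (not your exact
config, and module availability depends on the cloud-init version), the
relevant cloud-config bits look like:

    #cloud-config
    # grow the root partition to fill the disk size the flavor provides...
    growpart:
      mode: auto
      devices: ['/']
    # ...then grow the filesystem on it
    resize_rootfs: true

With that in place the instance grows its own root filesystem on first
boot to match the flavor's disk size.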
A few questions:
1) What w
You'll also want to change the CRUSH weights of your OSDs to reflect
the different disk sizes so that the smaller disks don't fill up
prematurely. See "weighting bucket items" here:
http://ceph.com/docs/master/rados/operations/crush-map/
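As a made-up example, the weight is normally just the usable size in TiB,
so a mix of 4TB and 1TB drives would look something like this (the OSD
numbers here are hypothetical):

    # weight ~= usable size in TiB
    ceph osd crush reweight osd.12 3.64   # 4TB drive
    ceph osd crush reweight osd.13 0.91   # 1TB drive
    # check the result
    ceph osd tree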
On Thu, Jun 5, 2014 at 10:14 AM, Michael wrote:
> ceph os
Use /etc/modules, not /etc/rc.modules, if you're using Ubuntu. You're
using a Red Hat config file on a Debian-based system.
On Tue, Apr 1, 2014 at 1:50 PM, Dan Koren wrote:
> Hi Ivan,
> I am using the repos.
> I don't however see how this could have anything to do with the repos,
> since rbd is install
Our (DreamHost's) largest cluster is roughly the same size as yours,
~3PB on just shy of 1100 OSDs currently. The architecture is quite
similar too, except that we have "separate" 10G front-end and back-end
networks in a partial spine-leaf design with 40G
interconnects. I say "separate" becaus