Sorry, I mixed up my terminology. Let me rephrase my question.

When originally running swift-ring-builder container.builder create, I set
the partition power to 15, giving a total of 2^15 = 32768 partitions to
split across 23 hosts with 12 disks each. Now I am replacing the container
service on those 23 hosts with 4 hosts that have 1 disk each.

Since I have 4 hosts with 1 disk each, should I set the weight of each of
these disks to 2^15 / 4 so that the same overall number of partitions is
available even though we're only using a handful of disks?
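To make the arithmetic I'm asking about concrete, here is a rough Python sketch (not the ring builder itself). The replica count of 3 is my assumption, and the weights are just the 2^15 / 4 figure from above:

```python
# Sketch of the ring arithmetic: with a partition power of 15 the ring
# always has 2**15 = 32768 partitions, regardless of device count.
# Weights only set each device's *relative* share of those partitions.

PART_POWER = 15
REPLICAS = 3  # assumption; not stated in this thread

total_partitions = 2 ** PART_POWER

# Four one-disk hosts, each with the proposed weight of 2**15 / 4:
weights = [total_partitions / 4] * 4

# A device's share depends only on its weight relative to the sum, so
# any set of equal weights (100, 1.0, 8192, ...) gives the same layout.
total_weight = sum(weights)
shares = [w / total_weight for w in weights]

parts_per_device = [round(s * total_partitions * REPLICAS) for s in shares]

print(total_partitions)     # 32768
print(parts_per_device[0])  # 24576 partition-replicas on each device
```

If that's right, then the absolute value 2^15 / 4 buys nothing over any other equal weight, which is really what I'm trying to confirm.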


On Fri, Feb 28, 2014 at 11:20 AM, Pete Zaitcev <[email protected]> wrote:

> On Fri, 28 Feb 2014 09:10:06 -0800
> Stephen Wood <[email protected]> wrote:
>
> > However I realize that the shard count is completely different now.
>
> What is a "shard count"? Do you have a document that uses such
> terminology?
>
> > I
> > originally used a partition value of 15 but this now seems much too
> > high for 4 servers with only one disk each.
>
> So what? As long as there are no ill effects, it's all good.
> Meaning if you have enough RAM to keep your ring once it's loaded,
> then no problem, isn't it? It's not like your A+C servers magically
> shrunk when you swapped the winchesters for SSDs, right?
>
> > Can I dynamically
> > adjust the partition values after the swift ring has been created?
>
> No, you can't.
>
> >  Or
> > should I just take the disks on my 4 SSD hosts and put their weight as
> 2^15
> > / 4 so the overall shard count stays the same?
>
> I am failing to make sense of the above sentence. Weight only matters
> for builder scattering partitions at devices relative to each other.
> So, if one replaces rotating media with SSDs, but keeps the cluster
> running, the number of partitions stays the same, right? At that point
> weights can be redefined at, say, 100, or any other number, without
> any effect on total or per-device number of partitions.
>
> I think we need to circle back to the definition of the mysterious
> "shard count" before we can get to the bottom of this.
>
> -- Pete
>



-- 
Stephen Wood
www.heystephenwood.com