Hello everyone, I'd like to discuss a way to increase the partition power of an existing Swift cluster. This is most likely interesting for smaller clusters that are growing beyond their originally planned size.
As discussed earlier [1], a rehash is required after changing the partition power to make existing data available again. My idea is to increase the partition power by 1 and then assign the same devices to both old_partition*2 and old_partition*2+1. For example:

Assigned devices on the old ring:

    Partition 0: 2 3 0
    Partition 1: 1 0 3

Assigned devices on the new ring with partition power + 1:

    Partition 0: 2 3 0
    Partition 1: 2 3 0
    Partition 2: 1 0 3
    Partition 3: 1 0 3

The hash of an object doesn't change with the new partition power; only the assigned partition does. An object in partition 1 on the old ring will be assigned to partition 2 OR 3 on the ring with the increased partition power. Because the assigned devices are the same for both new partitions, no data needs to move to other devices or storage nodes; objects only move locally into the new partition directories.

A longer example together with a small tool can be found at https://github.com/cschwede/swift-ring-tool

Since the device distribution on the new ring might not be optimal, it is possible to switch to a "fresh" distribution afterwards, migrating from the ring with the increased partition power to a ring with a new device assignment. So far this has worked for smaller clusters (with a few hundred TB) as well as in local SAIO installations.

I'd like to discuss this approach and see if it makes sense to continue working on it and to add the tool to swift, python-swiftclient or stackforge (or whatever else might be appropriate). Please let me know what you think.

Best regards,
Christian

[1] http://lists.openstack.org/pipermail/openstack-operators/2013-January/002544.html

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
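P.S. The partition-doubling property the idea relies on can be sketched in a few lines of Python. This is a minimal illustration, not Swift's actual ring code: it assumes the standard MD5-based partition mapping (top part_power bits of the hash) and ignores Swift's per-cluster hash path suffix; the `partition` function name is mine.

```python
import hashlib
import struct

def partition(name, part_power):
    # The partition is the top `part_power` bits of the MD5 hash of
    # the object name (Swift additionally mixes in a cluster-wide
    # hash suffix, omitted here for clarity).
    digest = hashlib.md5(name.encode('utf-8')).digest()
    return struct.unpack('>I', digest[:4])[0] >> (32 - part_power)

old_power = 10
name = '/AUTH_test/container/object'

old_part = partition(name, old_power)
new_part = partition(name, old_power + 1)

# Raising the power by 1 keeps one more bit of the hash, so the old
# partition P always becomes 2*P or 2*P + 1 -- which is why assigning
# the same devices to both new partitions avoids any data movement
# between nodes.
assert new_part in (2 * old_part, 2 * old_part + 1)
```

Since the mapping only ever splits a partition into its two children, rehashing after the change is a purely local relink on each device.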