Re: [PATCH] chunkd: add cp command, for local intra-table copies
On Tue, 6 Jul 2010 03:24:29 -0400 Jeff Garzik <j...@garzik.org> wrote:

> The following patch, against current hail.git, adds the CP command to
> chunkd, permitting copying from object to object inside a single table.

What is it for?

-- Pete
--
To unsubscribe from this list: send the line "unsubscribe hail-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] chunkd: add cp command, for local intra-table copies
On 07/06/2010 11:17 AM, Pete Zaitcev wrote:
> On Tue, 6 Jul 2010 03:24:29 -0400 Jeff Garzik <j...@garzik.org> wrote:
>> The following patch, against current hail.git, adds the CP command to
>> chunkd, permitting copying from object to object inside a single table.
>
> What is it for?

Fun! :)

More seriously, it is mainly an infrastructure patch, adding machinery
that the upcoming RCP command will use. As CP is far less complex, it
lets me verify several pieces of that machinery before moving forward.
I imagine CP will be tangentially helpful, but not a crucial feature in
and of itself.

	Jeff
Re: [PATCH] chunkd: add cp command, for local intra-table copies
On 07/06/2010 11:17 AM, Pete Zaitcev wrote:
> On Tue, 6 Jul 2010 03:24:29 -0400 Jeff Garzik <j...@garzik.org> wrote:
>> The following patch, against current hail.git, adds the CP command to
>> chunkd, permitting copying from object to object inside a single table.
>
> What is it for?

Here's a real-world example. Quoting from the S3 documentation, this
describes the PUT (copy) operation, something that tabled does not yet
support, but should:

    This implementation of the PUT operation creates a copy of an object
    that is already stored in Amazon S3. A PUT copy operation is the
    same as performing a GET and then a PUT. Adding the request header,
    x-amz-copy-source, makes the PUT operation copy the source object
    into the destination bucket.

Assuming that a given tabled object is already fully replicated --
HOPEFULLY the common case for us -- the least expensive way to implement
this is, for each chunkd node containing object OLD_KEY:

    CHO_CP(object OLD_KEY -> object NEW_KEY)

Assuming each chunkd node has the necessary free space, this method
avoids using any network bandwidth when creating a copy of an object.

	Jeff
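The bandwidth argument above can be sketched with a toy model. This is an illustration only, assuming hypothetical names (`Node`, `cho_cp`, `net_bytes`) that are not the real chunkd or tabled interfaces: a GET-then-PUT copy sends the object's bytes over the network twice, while a node-local CP of an already-replicated object sends none.

```python
# Toy in-memory model of one storage node; all names here are
# illustrative, not the actual chunkd API.

class Node:
    def __init__(self):
        self.table = {}      # key -> object bytes
        self.net_bytes = 0   # total bytes this node has moved over the wire

    def get(self, key):
        data = self.table[key]
        self.net_bytes += len(data)   # object body crosses the network
        return data

    def put(self, key, data):
        self.net_bytes += len(data)   # object body crosses the network
        self.table[key] = data

    def cho_cp(self, old_key, new_key):
        # Local intra-table copy: no bytes cross the network.
        self.table[new_key] = self.table[old_key]

node = Node()
node.put("OLD_KEY", b"x" * 1024)            # initial store: 1024 wire bytes

# GET + PUT copy: the object crosses the network twice more.
node.put("NEW_KEY_A", node.get("OLD_KEY"))  # +2048 wire bytes

# Local CP: same end result, zero additional network traffic.
before = node.net_bytes
node.cho_cp("OLD_KEY", "NEW_KEY_B")
assert node.net_bytes == before
assert node.table["NEW_KEY_B"] == node.table["OLD_KEY"]
```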