surekhasaharan commented on a change in pull request #7598: Add tool for migrating from local deep storage/Derby metadata
URL: https://github.com/apache/incubator-druid/pull/7598#discussion_r281308552
##########
File path: docs/content/operations/quickstart-migration.md
##########

@@ -0,0 +1,169 @@
+---
+layout: doc_page
+title: "Migrating Derby Metadata and Local Deep Storage"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements. See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership. The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License. You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied. See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+# Migrating Derby Metadata and Local Deep Storage
+
+If you have been running an evaluation Druid cluster using the built-in Derby metadata storage and local
+deep storage (the configurations used by the tutorial and single-machine quickstarts) and wish to migrate to a
+more production-capable metadata store such as MySQL or PostgreSQL, and/or migrate your deep storage from local
+disk to S3 or HDFS, Druid provides the `export-metadata` tool to assist with such migrations.
+
+This tool exports the contents of the following Druid metadata tables:
+- segments
+- rules
+- config
+- datasource
+- supervisors
+
+Additionally, the tool can rewrite the local deep storage location descriptors in the rows of the segments table
+to point to new deep storage locations (S3, HDFS, and local rewrite paths are supported).
+
+## `export-metadata` Options
+
+The `export-metadata` tool provides the following options:
+
+### Output Path
+
+`--output-path`, `-o`: The output directory of the tool. CSV files for the Druid segments, rules, config, datasource, and supervisors tables will be written to this directory.
+
+### S3 Migration
+
+By setting the options below, the tool will rewrite the segment load specs to point to a new S3 deep storage location.
+
+This helps users migrate segments stored in local deep storage to S3.
+
+`--s3bucket`, `-b`: The S3 bucket that will hold the migrated segments.
+`--s3baseKey`, `-k`: The base S3 key under which the migrated segments will be stored.
+
+When copying the local deep storage segments to S3, the rewrite performed by this tool requires that the directory structure of the segments be unchanged.
+
+For example, suppose the cluster had the following local deep storage configuration:
+
+```
+druid.storage.type=local
+druid.storage.storageDirectory=/druid/segments
+```
+
+If the target S3 bucket were `migration`, with a base key of `example`, the contents of `s3://migration/example/` would need to be identical to those of `/druid/segments` on the old local filesystem.
+
+### HDFS Migration
+
+By setting the options below, the tool will rewrite the segment load specs to point to a new HDFS deep storage location.
+
+This helps users migrate segments stored in local deep storage to HDFS.
+
+`--hadoopStorageDirectory`, `h`: The HDFS path that will hold the migrated segments

Review comment:
   `-h` instead of `h` ?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
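For context on what the S3 rewrite described in the quoted docs amounts to: each row in the segments table carries a load spec, and the tool swaps the local descriptor for an S3 one while preserving the segment's relative path under the old `druid.storage.storageDirectory`. The sketch below illustrates that rewrite in Python; it is not the tool's actual code, and the exact loadSpec field names (`s3_zip`, `bucket`, `key`) are assumptions about the on-disk JSON shape.

```python
def rewrite_load_spec(load_spec: dict, storage_dir: str,
                      bucket: str, base_key: str) -> dict:
    """Sketch of rewriting a local deep-storage loadSpec to point at S3.

    Mirrors the directory-structure requirement from the docs: the
    segment's path relative to the old storageDirectory must be
    preserved under the new base key. Field names are assumed, not
    taken from the real tool.
    """
    if load_spec.get("type") != "local":
        return load_spec  # only local load specs are rewritten
    relative = load_spec["path"].removeprefix(storage_dir).lstrip("/")
    return {
        "type": "s3_zip",  # assumed S3 loadSpec type
        "bucket": bucket,
        "key": f"{base_key}/{relative}",
    }

spec = {"type": "local", "path": "/druid/segments/wiki/2019/index.zip"}
print(rewrite_load_spec(spec, "/druid/segments", "migration", "example"))
# → {'type': 's3_zip', 'bucket': 'migration', 'key': 'example/wiki/2019/index.zip'}
```

With `--s3bucket migration --s3baseKey example`, this is why `s3://migration/example/` must mirror the old `/druid/segments` tree: only the prefix is rewritten, never the relative layout.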
