Thanks for the clarification, Shahid. Much appreciated. If the export command is run on a CarbonData table, we can simply zip the actual table folder and the associated aggregate table folders into the user-specified location. This would not export only the metadata, and copying data from one cluster to the other would remain the same in your approach as well.
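The zip idea above could be sketched roughly as follows; this is only an illustration, and the store path, database name, table name, and export directory are all hypothetical placeholders, not CarbonData defaults:

```shell
#!/bin/sh
# All paths below are placeholders; adjust to your actual CarbonData store.
STORE=/tmp/carbon.store          # CarbonData store root (placeholder)
DB=default                       # database name (placeholder)
TABLE=t1                         # table to export (placeholder)
EXPORT_DIR=/tmp/carbon_export    # user-specified export location (placeholder)

mkdir -p "$STORE/$DB/$TABLE"     # stand-in for an existing table folder
mkdir -p "$EXPORT_DIR"

# Archive the table folder (any associated aggregate table folders would be
# added alongside it) into the export location; data files and the schema
# file inside the folder travel together in one archive.
tar -czf "$EXPORT_DIR/${DB}_${TABLE}.tar.gz" -C "$STORE" "$DB/$TABLE"
```

On the target cluster the archive would be extracted under the new store path before registering the table.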
After copying the data into the new cluster, how do we synchronize incremental loads or schema evolution from the old cluster to the new one? Would we need to drop the table in the new cluster, copy the data from the old cluster again, and re-create the table? I think creating a CarbonData table also requires the schema information to be passed:

    CREATE TABLE $dbName.$tbName (${ fields.map(f => f.rawSchema).mkString(",") })
    USING CARBONDATA
    OPTIONS (tableName "$tbName", dbName "$dbName", tablePath "$tablePath")

---
Regards,
Naresh P R

On Fri, Nov 24, 2017 at 10:02 AM, mohdshahidkhan <mohdshahidkhan1...@gmail.com> wrote:
> Hi Naresh,
> Hive export exports the metadata as well as the table data.
> We do not want to export the table data, as that would be tedious for TBs of data.
> We have the table and the table data in the store location, but the table is not
> registered with the Hive metastore.
>
> Regards,
> Shahid
>
> --
> Sent from: http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/