Hey Folks,

I am currently using Derby as the metastore DB; I will play with the suggestions in
this thread. Since every table partition already has an explicit location, I was
wondering whether Hive should add support for wildcards in partition names for SET
LOCATION. I think that would be the cleanest solution to this problem.
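For now, the per-partition ALTER can be scripted against SHOW PARTITIONS. Below is a
rough Python sketch of that idea (it assumes the hive CLI is on the PATH; the table
name, warehouse path, and NameNode URI are placeholders, not real ones):

    import subprocess

    NEW_NN = "hdfs://new-namenode:8020"   # placeholder: new cluster's NameNode URI
    TABLE = "my_table"                    # placeholder: the partitioned table
    WAREHOUSE = "/user/hive/warehouse"    # placeholder: warehouse dir on the new cluster

    def hive(query):
        # Run one HiveQL statement through the hive CLI and return its stdout.
        return subprocess.check_output(["hive", "-S", "-e", query]).decode()

    # SHOW PARTITIONS prints one partition spec per line, e.g. "dt=2011-08-19/country=us"
    for line in hive("SHOW PARTITIONS %s" % TABLE).splitlines():
        spec = line.strip()
        if not spec:
            continue
        # Turn "dt=2011-08-19/country=us" into "dt='2011-08-19', country='us'"
        part = ", ".join("%s='%s'" % tuple(kv.split("=", 1)) for kv in spec.split("/"))
        location = "%s%s/%s/%s" % (NEW_NN, WAREHOUSE, TABLE, spec)
        hive("ALTER TABLE %s PARTITION (%s) SET LOCATION '%s'" % (TABLE, part, location))

This only rewrites the metastore locations; the partition data itself still has to be
copied to the new cluster (e.g. with distcp) separately.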

Best
Bhupesh


On Sat, Aug 20, 2011 at 10:05 PM, Ayon Sinha <ayonsi...@yahoo.com> wrote:

> Make sure the new cluster's Hive metastore database is a separate one and that
> its tables point to the new cluster. I had this exact situation ("Hive comes up
> fine and SHOW TABLES etc. work, but the table locations still point to the old
> cluster"), so all the MapReduce jobs behind Hive queries were pulling data over
> the network from the old cluster.
> Another way is to dump the metastore DB from the old cluster and string-replace
> the old cluster name with the new cluster name in the dump before loading it.
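> A rough sketch of that replacement in Python (the dump file names and NameNode
> URIs below are placeholders for whatever your metastore backend produces):
>
>     # Rewrite every occurrence of the old NameNode URI in the metastore dump
>     # before loading it into the new metastore.
>     old_nn = "hdfs://old-namenode:8020"   # placeholder: old cluster
>     new_nn = "hdfs://new-namenode:8020"   # placeholder: new cluster
>
>     with open("metastore_dump.sql") as src, open("metastore_dump.fixed.sql", "w") as dst:
>         for line in src:
>             dst.write(line.replace(old_nn, new_nn))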
> The other option may be to use Sqoop, telling it to pull the data from the old
> Hive cluster via the Hive JDBC driver.
>
> -Ayon
> See My Photos on Flickr <http://www.flickr.com/photos/ayonsinha/>
> Also check out my Blog <http://dailyadvisor.blogspot.com> for answers to
> commonly asked questions.
>
> ------------------------------
> *From:* Bhupesh Bansal <bhup...@groupon.com>
> *To:* user@hive.apache.org
> *Sent:* Friday, August 19, 2011 2:26 PM
> *Subject:* Alter table Set Locations for all partitions
>
> Hey Folks,
>
> I am wondering what the easiest way is to migrate data from one Hadoop/Hive
> cluster to another.
>
> I distcp'd all the data to the new cluster and then copied the metastore
> directory over as well.
> Hive comes up fine and SHOW TABLES etc. work, but the table locations still
> point to the old cluster.
>
> There is one command:
>
> ALTER TABLE table_name SET LOCATION 'new_location';
>
> but it doesn't change the locations of a partitioned table's existing partitions.
> Is there a way to do this for *ALL* partitions easily?
>
> Best
> Bhupesh
>
