The best way to restore is from a backup. We use distcp to keep this
scalable: http://hadoop.apache.org/docs/r1.2.0/distcp2.html
The data we feed to HDFS also gets pushed to this backup, and the
metastore database from Hive gets pushed there too. So this combination
works well for us (had to use it
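A backup along those lines can be kicked off roughly like this (the paths and NameNode hostnames below are placeholders, not from the original mail; distcp itself runs as a MapReduce job, which is what makes it scale):

```shell
# Copy a directory tree from the production cluster to the backup cluster.
# Hostnames, ports, and paths here are hypothetical examples.
hadoop distcp hdfs://prod-nn:8020/user/hive/warehouse \
              hdfs://backup-nn:8020/backups/hive/warehouse
```

Because the copy is a distributed job, adding more map slots speeds up the transfer without changing the command.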
That is what I searched for a long time, with no responses. But if you are
not in the cloud (AWS, Azure, ...) you can add the JAR on all of your
DataNodes in $HADOOP_HOME/lib, and then restart the mapreduce-tasktracker
service like this:
/etc/init.d/*mapreduce-tasktracker stop
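Stopping the TaskTracker is only half of a restart; a full cycle on each node would look something like this (the exact init-script name varies by distribution, which is presumably why the original used a glob):

```shell
# Restart the TaskTracker so it picks up the new JAR from $HADOOP_HOME/lib.
# The init-script name differs between distributions; adjust the glob as needed.
/etc/init.d/*mapreduce-tasktracker stop
/etc/init.d/*mapreduce-tasktracker start
```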
Hi experts,
I have created a table in Hive and loaded the data into it. Now I want to
change the datatype of one particular column. Do I need to drop the table and
move the file into Hive again? Or will it work fine if I just alter the data
type in Hive?
Thanks,
Manickam P
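For reference, altering a column's type in Hive only rewrites the schema recorded in the metastore; the underlying data files are untouched. A minimal sketch (the table and column names are made up for illustration):

```sql
-- Changes only the schema in the metastore; the data files are not rewritten.
-- Existing values must already be parseable as the new type
-- (e.g. widening INT to BIGINT is safe), otherwise reads return NULL.
ALTER TABLE page_views CHANGE COLUMN visit_count visit_count BIGINT;
```

So for a compatible (widening) change there is no need to drop the table or reload the file; an incompatible change would require reloading the data in the new format.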
You have to send a mail to user-unsubscr...@hive.apache.org
On Thu, Jul 18, 2013 at 1:30 PM, Beau Rothrock <beau.rothr...@lookout.com> wrote:
On Jul 18, 2013, at 1:40 PM, Tzur Turkenitz wrote:
Hello,
I just finished reading the Hive architecture PDF and failed to find the
answers I was hoping for. So here I am, hoping this community will shed some
light. I think I know what the answers will be, but I need that bolted down and
I have waited a long time with no result. Why is Hive so slow?
hive> select cookie,url,ip,source,vsid,token,residence,edate from
hb_cookie_history where edate>='1371398400500' and edate<='1371400200500';
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
One mapper. How big is the table?
2013/7/18 ch huang <justlo...@gmail.com>
I have waited a long time with no result. Why is Hive so slow?
hive> select cookie,url,ip,source,vsid,token,residence,edate from
hb_cookie_history where edate>='1371398400500' and edate<='1371400200500';
Total MapReduce jobs = 1