Hi users,

I'm inserting 1GB of data into a Hive partitioned table (dynamic partitions), and the job ended with the error shown below.
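For context, here is roughly what my two tables look like. This is only a sketch: I'm reconstructing vender1gb from the data sample below and vender_part from my INSERT statement further down, and the delimiter and storage format are assumptions, since I don't have the exact DDL in front of me.

    -- Source table (sketch; delimiter and storage format assumed)
    CREATE TABLE vender1gb (
      vender     STRING,
      supplier   STRING,
      order_date STRING,
      quantity   INT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE;

    -- Target table (sketch; column list inferred from my INSERT below)
    CREATE TABLE vender_part (
      vender     STRING,
      order_date STRING,
      quantity   INT
    )
    PARTITIONED BY (supplier STRING);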
Below is a sample of my vender1gb data (schema: vender string, supplier string, order_date string, quantity int):

    Vendor_1  Supplier_121  2012-03-06   2763
    Vendor_1  Supplier_122  2012-03-06   5211
    Vendor_1  Supplier_123  2012-03-06   2847
    Vendor_1  Supplier_124  2012-03-06   9020
    Vendor_1  Supplier_125  2012-03-06   7431
    Vendor_1  Supplier_126  2012-03-06   8469
    Vendor_1  Supplier_127  2012-03-06   8553
    Vendor_1  Supplier_128  2012-03-06   3154
    Vendor_1  Supplier_129  2012-03-06   3772
    Vendor_1  Supplier_130  2012-03-06  10985
    Vendor_1  Supplier_131  2012-03-06   3697
    Vendor_1  Supplier_132  2012-03-06   3061
    Vendor_1  Supplier_133  2012-03-06   1335
    Vendor_1  Supplier_134  2012-03-06   4940
    Vendor_1  Supplier_135  2012-03-06   3285
    Vendor_1  Supplier_136  2012-03-06   8077
    Vendor_1  Supplier_137  2012-03-06   3404
    Vendor_1  Supplier_138  2012-03-06   9097
    Vendor_1  Supplier_139  2012-03-06   5015
    Vendor_1  Supplier_140  2012-03-06   2944
    Vendor_1  Supplier_141  2012-03-06   1298
    Vendor_1  Supplier_142  2012-03-06   9528
    Vendor_1  Supplier_143  2012-03-06   4420
    Vendor_1  Supplier_144  2012-03-06   8261
    Vendor_1  Supplier_145  2012-03-06   5698
    Vendor_1  Supplier_146  2012-03-06   6128
    Vendor_1  Supplier_147  2012-03-06   2354
    Vendor_1  Supplier_148  2012-03-06   7810
    Vendor_1  Supplier_149  2012-03-06   9383
    Vendor_1  Supplier_150  2012-03-06   3716
    Vendor_1  Supplier_151  2012-03-06   3734
    Vendor_1  Supplier_152  2012-03-06  10295
    Vendor_1  Supplier_153  2012-03-06   9314
    Vendor_1  Supplier_154  2012-03-06   6329
    Vendor_1  Supplier_155  2012-03-06   9792
    Vendor_1  Supplier_156  2012-03-06   9379
    Vendor_1  Supplier_157  2012-03-06   5342
    Vendor_1  Supplier_158  2012-03-06   3928
    Vendor_1  Supplier_159  2012-03-06   7268

Before running the insert, I enabled the following settings:

    set hive.exec.dynamic.partition=true;
    set hive.exec.dynamic.partition.mode=nonstrict;
    set hive.exec.max.dynamic.partitions.pernode=200;

My table has 200 distinct suppliers in total, so the insert should create 200 partitions; that is why I set the per-node limit to 200.

This is the command I used to insert the data into the dynamic-partition table:

    FROM vender1gb v
    INSERT OVERWRITE TABLE vender_part PARTITION (supplier)
    SELECT v.vender, v.supplier, v.order_date, v.quantity
    DISTRIBUTE BY supplier;
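For comparison, the dynamic-partition INSERT examples in the Hive documentation put the dynamic partition column last in the SELECT list, in the same order as it appears in the PARTITION() clause. A minimal sketch of that pattern against my (assumed) vender_part schema would be:

    FROM vender1gb v
    INSERT OVERWRITE TABLE vender_part PARTITION (supplier)
    -- the dynamic partition column goes last, matching PARTITION (supplier)
    SELECT v.vender, v.order_date, v.quantity, v.supplier
    DISTRIBUTE BY v.supplier;

I'm not sure whether the column order in my own SELECT matters here; that is one of the things I'd like to confirm.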
When I run my command, the console shows partition directories being created like:

    .../supplier=9280
    .../supplier=9283
    .../supplier=9337
    .../supplier=9373
    .../supplier=9393
    .../supplier=9430
    .../supplier=9439
    .../supplier=9457
    .../supplier=9509
    .../supplier=9539
    .../supplier=9561
    .../supplier=9563
    .../supplier=9572
    .../supplier=9633
    .../supplier=9672
    .../supplier=9704
    .../supplier=9731
    .../supplier=9758
    .../supplier=9839
    .../supplier=9844
    .../supplier=9845
    .../supplier=9923
    ...

and then the job is killed with the following output:

    Killing the job.
    MapReduce Total cumulative CPU time: 4 minutes 9 seconds 230 msec
    Ended Job = job_201207161513_0018 with errors
    Error during job, obtaining debugging information...
    Examining task ID: task_201207161513_0018_m_000006 (and more) from job job_201207161513_0018
    Exception in thread "Thread-57" java.lang.RuntimeException: Error while reading from task log url
            at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:240)
            at org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:227)
            at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:92)
            at java.lang.Thread.run(Thread.java:662)
    Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: http://md-trngpoc2:50060/tasklog?taskid=attempt_201207161513_0018_r_000000_0&start=-8193
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
            at java.net.URL.openStream(URL.java:1010)
            at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:192)
            ... 3 more
    FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
    MapReduce Jobs Launched:
    Job 0: Map: 5  Reduce: 2  Cumulative CPU: 249.23 sec  HDFS Read: 1126757288  HDFS Write: 4969  FAIL
    Total MapReduce CPU Time Spent: 4 minutes 9 seconds 230 msec

From this output alone I'm unable to trace the problem. One thing I do notice is that the partition directory names are four-digit numbers (supplier=9280, supplier=9283, ...) that look like values from my quantity column, rather than my supplier names (Supplier_121, Supplier_122, ...). Please help me find the cause and a fix.

Regards,
Shaik