On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <[email protected]> wrote:
> Typo in Ed's last email (table name):
>
> echo "create external table if not exists ed_test ( dat string )
> partitioned by (dummy string) location '/tmp/a';" > test.q
>
> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <[email protected]> wrote:
>
>> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <[email protected]> wrote:
>>
>>> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <[email protected]> wrote:
>>>
>>>> Seems to be fixed in 0.6. Here's what I got:
>>>>
>>>> test.q:
>>>> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>>>>
>>>> Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
>>>> OK
>>>> Time taken: 4.101 seconds
>>>> OK
>>>> Time taken: 0.558 seconds
>>>> OK
>>>> Time taken: 0.453 seconds
>>>> OK
>>>> Time taken: 0.416 seconds
>>>> OK
>>>> Time taken: 0.378 seconds
>>>> OK
>>>> Time taken: 0.457 seconds
>>>> OK
>>>> Time taken: 0.454 seconds
>>>>
>>>> Can you check the stack trace in /tmp/<username>/hive.log?
>>>>
>>>> -----Original Message-----
>>>> From: Prasad Chakka [mailto:[email protected]]
>>>> Sent: Thursday, April 08, 2010 1:03 PM
>>>> To: [email protected]
>>>> Subject: Re: enough alter tables in the same .q file eventually fail
>>>>
>>>> There was a bug that got fixed where each request was creating a
>>>> separate metastore client. That could be it, or something similar that
>>>> hasn't gotten fixed.
>>>>
>>>> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>>>>
>>>> > Hive 0.5.0, MySQL as the metastore backend. Using external tables
>>>> > with locations for partitions...
>>>> >
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000843') LOCATION 'hit_date=20100329/mid=000843';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000844') LOCATION 'hit_date=20100329/mid=000844';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000849') LOCATION 'hit_date=20100329/mid=000849';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000850') LOCATION 'hit_date=20100329/mid=000850';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000851') LOCATION 'hit_date=20100329/mid=000851';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000852') LOCATION 'hit_date=20100329/mid=000852';
>>>> >
>>>> > Eventually this fails after a number of entries:
>>>> >
>>>> > Time taken: 0.159 seconds
>>>> > OK
>>>> > Time taken: 0.17 seconds
>>>> > OK
>>>> > Time taken: 0.241 seconds
>>>> > FAILED: Error in metadata: Unable to fetch table XXXX_action
>>>> > FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>>>> >
>>>> > Restarting the process after removing the already-added partitions
>>>> > from the script works until it breaks again. Has anyone ever dealt
>>>> > with this?
>>>> >
>>>> > Doing one hive -e per partition always works but takes a lot longer:
>>>> > ~3 seconds per partition rather than ~0.5 seconds.
>>>
>>> It does not happen after 4 or 5; more like 100 or 1000+.
>>> I will try to track this down a bit.
>>>
>>> Edward
>>
>> Try this:
>>
>> echo "create external table if not exists edtest ( dat string )
>> partitioned by (dummy string) location '/tmp/a';" > test.q
>> for i in {1..3000} ; do echo "alter table ed_test add partition
>> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
>> hive -f test.q
>>
>> On Hive 0.5.0 I get a failure midway.
>>
>> Edward

Also, trying to do selects from the table without enough pruning in the where clause causes the same error; sometimes it comes back as a JDBC/JPOX "access denied" error.
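A quick way to probe Prasad's leaked-metastore-client theory while the repro above runs: if each alter opens a fresh JDBC connection to the MySQL metastore and never closes it, the number of established connections held by the CLI process should climb with every statement until the server refuses new ones, which could surface as the JDBC/JPOX error above. A rough sketch, assuming a local run, the default MySQL port 3306, and lsof installed; the pgrep pattern is an assumption, not from the thread:

  # Run `hive -f test.q` in another terminal first, then watch connections.
  # A count that climbs steadily per ALTER statement supports the leak theory.
  HIVE_PID=$(pgrep -f 'test\.q' | head -n 1)
  watch -n 1 "lsof -a -nP -p ${HIVE_PID} -iTCP:3306 -sTCP:ESTABLISHED | wc -l"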

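Until the leak is fixed, a middle ground between one giant .q file (fails partway) and one hive -e per partition (pays JVM startup 3000 times) is to split the generated statements into fixed-size chunks and run one hive -f per chunk: each invocation gets a fresh metastore client, so a per-statement leak never accumulates far enough to fail, while startup cost is amortized across the chunk. A sketch built on Ed's repro; the chunk size and file names are illustrative, not from the thread:

  #!/bin/bash
  # Generate the ALTER statements as in the repro, then run them 100 at a
  # time. Each chunk runs in its own CLI process with its own metastore
  # client, so any leaked connections are released when that process exits.
  for i in {1..3000}; do
    echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';"
  done > all_parts.q

  split -l 100 all_parts.q chunk_
  for q in chunk_*; do
    hive -f "$q" || { echo "chunk $q failed" >&2; exit 1; }
  done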