[jira] [Commented] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16799672#comment-16799672 ]

zhuwei commented on HIVE-21111:
-------------------------------

[~lirui] -HIVE-14557- fixed this issue. Thanks.

> ConditionalTask cannot be cast to MapRedTask
> --------------------------------------------
>
>                 Key: HIVE-21111
>                 URL: https://issues.apache.org/jira/browse/HIVE-21111
>             Project: Hive
>          Issue Type: Bug
>          Components: Physical Optimizer
>    Affects Versions: 2.1.1, 3.1.1, 2.3.4
>            Reporter: zhuwei
>            Assignee: zhuwei
>            Priority: Major
>         Attachments: HIVE-21111.1.patch
>
>
> We encountered the following error in our production environment:
>
> java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.ConditionalTask cannot be cast to org.apache.hadoop.hive.ql.exec.mr.MapRedTask
>         at org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:173)
>
> There is a bug in the function org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch:
>
>     if (tsk.isMapRedTask()) {
>       Task newTask = this.processCurrentTask((MapRedTask) tsk,
>           ((ConditionalTask) currTask), physicalContext.getContext());
>       walkerCtx.addToDispatchList(newTask);
>     }
>
> In the above code, when tsk is an instance of ConditionalTask, tsk.isMapRedTask() can still return true, yet tsk cannot be cast to MapRedTask.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
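The hazard described above can be reproduced outside Hive. The following is a minimal, hypothetical sketch — these Task classes are stand-ins, not Hive's real ones, and the isMapRedTask() behavior is only paraphrased from the report — showing why isMapRedTask() is not a safe guard for the cast, and how an instanceof check avoids the ClassCastException:

```java
import java.util.List;

// Stand-in task hierarchy (NOT Hive's real classes) to illustrate the bug.
abstract class Task {
    boolean isMapRedTask() { return false; }
}

class MapRedTask extends Task {
    @Override
    boolean isMapRedTask() { return true; }
}

class ConditionalTask extends Task {
    private final List<Task> listTasks;
    ConditionalTask(List<Task> listTasks) { this.listTasks = listTasks; }

    // Mirrors the behavior described in the report: a ConditionalTask
    // reports isMapRedTask() == true when any candidate task is one.
    @Override
    boolean isMapRedTask() {
        return listTasks.stream().anyMatch(Task::isMapRedTask);
    }
}

class DispatchSketch {
    // Safe guard: check the concrete type before casting instead of
    // relying on isMapRedTask().
    static boolean canCastToMapRedTask(Task tsk) {
        return tsk instanceof MapRedTask;
    }

    public static void main(String[] args) {
        Task cond = new ConditionalTask(List.of(new MapRedTask()));
        System.out.println(cond.isMapRedTask());        // true
        System.out.println(canCastToMapRedTask(cond));  // false: the cast would throw
    }
}
```

With this guard, a ConditionalTask whose children include a map-reduce task is skipped (or handled separately) instead of being force-cast.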
[jira] [Commented] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16797787#comment-16797787 ]

zhuwei commented on HIVE-21111:
-------------------------------

[~lirui] Since it's related to table data size, it's not easy to reproduce from scratch. The root cause is that a child task of a conditional task is itself still a conditional task. Please take a look at the code I pasted in the description; I think the bug is obvious. The SQL that triggered this bug in our production environment is:

set hive.auto.convert.join=true;
set hive.optimize.skewjoin = true;
explain insert overwrite table dw.dwd_tc_order_old_d_orign
select a.order_no, a.kdt_id, a.store_id, a.order_type, a.features, a.state, a.close_state, a.pay_state,
  b.origin_price, a.buy_way, b.goods_num, b.goods_pay, a.express_type,
  case when ((a.state >= 6 and a.state <> 99) or a.express_time <> 0) then 1 else 0 end as express_state,
  case when ((a.state >= 6 and a.state <> 99) or a.express_time <> 0) then 'a' else 'b' end as express_state_name,
  if((a.order_type = 6 and a.pay_state > 0), 1, a.stock_state) as stock_state,
  a.customer_id, a.customer_type, a.customer_name, a.buyer_id, a.buyer_phone,
  if(a.book_time = 0 or a.book_time is null, '0', udf.format_unixtime(a.book_time)) as book_time,
  if(a.pay_time = 0 or a.pay_time is null, '0', udf.format_unixtime(a.pay_time)) as pay_time,
  if(a.express_time = 0 or a.express_time is null, '0', udf.format_unixtime(a.express_time)) as express_time,
  if(a.success_time = 0 or a.success_time is null, '0', udf.format_unixtime(a.success_time)) as success_time,
  if(a.close_time = 0 or a.close_time is null, 0, udf.format_unixtime(a.close_time)) as close_time,
  if(a.feedback_time = 0 or a.feedback_time is null, '0', udf.format_unixtime(a.feedback_time)) as feedback_time
from (
  select order_no, kdt_id, store_id, features, state, close_state, pay_state, order_type, buy_way, express_type, activity_type,
    express_state, feedback, refund_state, stock_state, customer_id, customer_type, customer_name, buyer_id, buyer_phone,
    book_time, pay_time, express_time, success_time, close_time, feedback_time
  from ods.tc_seller_order
  where kdt_id <> 0 and (length(order_no) <> 24 or substr(order_no, 1, 1) <> 'E' or substr(order_no, -5, 1) <> '0')
) a
join (
  select order_no, cast(sum(price * num) as bigint) as origin_price, sum(num) as goods_num,
    cast(sum(pay_price * num) as bigint) as goods_pay
  from ods.tc_order_item
  where (length(order_no) <> 24 or substr(order_no, 1, 1) <> 'E' or substr(order_no, -5, 1) <> '0')
  group by order_no
) b on a.order_no = b.order_no;
[jira] [Updated] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-21111:
--------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-21111:
--------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-21111:
--------------------------
    Affects Version/s: 3.1.1
                       2.3.4
[jira] [Updated] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-21111:
--------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-21111:
--------------------------
    Affects Version/s: 2.1.1
[jira] [Updated] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-21111:
--------------------------
    Attachment: HIVE-21111.1.patch
[jira] [Updated] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-21111:
--------------------------
    Component/s: Physical Optimizer
[jira] [Assigned] (HIVE-21111) ConditionalTask cannot be cast to MapRedTask
[ https://issues.apache.org/jira/browse/HIVE-21111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei reassigned HIVE-21111:
-----------------------------
[jira] [Commented] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16653042#comment-16653042 ]

zhuwei commented on HIVE-19287:
-------------------------------

[Zoltan Haindrich|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=kgyrtkirk] Could you help review the patch?

> parse error with semicolon in comment which starts with whitespace in file
> --------------------------------------------------------------------------
>
>                 Key: HIVE-19287
>                 URL: https://issues.apache.org/jira/browse/HIVE-19287
>             Project: Hive
>          Issue Type: Bug
>          Components: Parser
>         Environment: hive 2.2.1
>            Reporter: zhuwei
>            Assignee: zhuwei
>            Priority: Major
>         Attachments: HIVE-19287.1.patch, HIVE-19287.2.patch, HIVE-19287.3.patch, HIVE-19287.4.patch, HIVE-19287.5.patch
>
>
> A Hive query written in a file like the following fails with a parse error:
>
> select col
> --this is; an example
> from db.table
> limit 1;
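The failure mode is a script splitter that treats every ';' as a statement terminator, even inside a "--" line comment. Below is a hedged sketch of comment-aware splitting — an illustration of the parsing idea only, not Hive's actual CliDriver logic, and it deliberately ignores quoted strings for brevity — that also handles a comment preceded by leading whitespace, the case this issue is about:

```java
import java.util.ArrayList;
import java.util.List;

class StatementSplitter {
    // Split a script on ';', ignoring semicolons inside "--" line
    // comments. The comment may start anywhere on the line, including
    // after leading whitespace.
    static List<String> splitStatements(String script) {
        List<String> statements = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inComment = false;
        for (int i = 0; i < script.length(); i++) {
            char c = script.charAt(i);
            if (!inComment && c == '-' && i + 1 < script.length() && script.charAt(i + 1) == '-') {
                inComment = true;              // comment runs to end of line
            } else if (c == '\n') {
                inComment = false;
            }
            if (c == ';' && !inComment) {
                statements.add(current.toString().trim());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        if (current.toString().trim().length() > 0) {
            statements.add(current.toString().trim());
        }
        return statements;
    }

    public static void main(String[] args) {
        String script = "select col\n--this is; an example\nfrom db.table\nlimit 1;";
        System.out.println(splitStatements(script).size()); // 1: the ';' in the comment is ignored
    }
}
```

A naive split on ';' would cut the example query in two at "--this is;", producing the unparseable fragment the issue reports.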
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--------------------------
    Attachment: HIVE-20497.7.patch

> ParseException, failed to recognize quoted identifier when re-parsing the re-written query
> -------------------------------------------------------------------------------------------
>
>                 Key: HIVE-20497
>                 URL: https://issues.apache.org/jira/browse/HIVE-20497
>             Project: Hive
>          Issue Type: Bug
>          Components: Parser
>         Environment: hive 2.1.1
>            Reporter: zhuwei
>            Assignee: zhuwei
>            Priority: Major
>         Attachments: HIVE-20497.1.patch, HIVE-20497.2.patch, HIVE-20497.3.patch, HIVE-20497.4.patch, HIVE-20497.5.patch, HIVE-20497.6.patch, HIVE-20497.7.patch
>
>
> select `user` from team;
>
> If we have a table `team`, and one of its columns has been masked out with `` by column-level authorization, the above query fails with the error "SemanticException org.apache.hadoop.hive.ql.parse.ParseException: line 1:9 Failed to recognize predicate 'user'. Failed rule: 'identifier' in expression specification".
> The root cause is that after the AST is rewritten, the backquotes are lost.
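The stated root cause suggests the shape of the fix: when the masking rewrite regenerates query text, any identifier that collides with a reserved word must be re-quoted before the text is re-parsed. A hypothetical sketch of that idea follows — the reserved-word list and helper name are illustrative, not Hive's actual unparser:

```java
import java.util.Set;

class IdentifierQuoting {
    // Tiny illustrative reserved-word list; Hive's real list is far larger.
    static final Set<String> RESERVED = Set.of("user", "order", "table");

    // Re-quote an identifier when regenerating SQL text, so the rewritten
    // query parses again instead of tripping over a keyword.
    static String quoteIfNeeded(String identifier) {
        return RESERVED.contains(identifier.toLowerCase())
                ? "`" + identifier + "`"
                : identifier;
    }

    public static void main(String[] args) {
        System.out.println(quoteIfNeeded("user")); // `user`
        System.out.println(quoteIfNeeded("team")); // team
    }
}
```

Without the re-quoting step, `user` comes back out as the bare keyword user, which is exactly the "Failed rule: 'identifier'" failure quoted above.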
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--------------------------
    Attachment: HIVE-20497.6.patch
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Attachment: HIVE-19287.5.patch
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Priority: Major  (was: Minor)
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--------------------------
    Attachment: HIVE-20497.5.patch
[jira] [Commented] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16649215#comment-16649215 ]

zhuwei commented on HIVE-19450:
-------------------------------

[~prasanth_j] Could you help review?

> OOM due to map join and backup task not invoked
> -----------------------------------------------
>
>                 Key: HIVE-19450
>                 URL: https://issues.apache.org/jira/browse/HIVE-19450
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 2.1.1
>            Reporter: zhuwei
>            Assignee: zhuwei
>            Priority: Major
>         Attachments: HIVE-19450.1.patch, HIVE-19450.2.patch
>
>
> A map join task may cause OOM due to ORC compression; in most cases a backup task will be invoked. However, if the size of the hash table is close to the memory limit, the task which loads the hash table will NOT fail. The OOM happens in the next task, which does the local join. The load task has a backup, but the next task does not, so in this case the whole query fails.
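One way to reason about the mitigation: the hash-table load task should fail deterministically when usage approaches the limit, so that its backup (the common-join plan) takes over before the backup-less downstream local-join task can OOM. The sketch below only illustrates that threshold idea; the method name, numbers, and safety fraction are assumptions, not the actual patch:

```java
class MapJoinMemoryCheck {
    // Fail the load task early when the hash table approaches the memory
    // limit, instead of letting the next (backup-less) task hit the OOM.
    static void checkHashTableMemory(long usedBytes, long maxBytes, double safetyFraction) {
        if (usedBytes > (long) (maxBytes * safetyFraction)) {
            throw new IllegalStateException(
                "Hash table uses " + usedBytes + " of " + maxBytes
                + " bytes; failing over to the backup task");
        }
    }

    public static void main(String[] args) {
        checkHashTableMemory(500, 1000, 0.9);   // well under the threshold: proceeds
        try {
            checkHashTableMemory(950, 1000, 0.9);
        } catch (IllegalStateException e) {
            System.out.println("load task fails early; backup task runs");
        }
    }
}
```

The point of failing in the *load* task is that it is the stage with a registered backup; an OOM one stage later has nowhere to fall back to, which is the whole-query failure the report describes.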
[jira] [Updated] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19450:
--------------------------
    Affects Version/s: 2.1.1
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Attachment: HIVE-19287.4.patch
[jira] [Resolved] (HIVE-20725) Simultaneous dynamic inserts can result in partition files lost
[ https://issues.apache.org/jira/browse/HIVE-20725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei resolved HIVE-20725.
---------------------------
    Resolution: Duplicate

Since it belongs to a class of bugs fixed via HIVE-14535, closing it as a duplicate.

> Simultaneous dynamic inserts can result in partition files lost
> ----------------------------------------------------------------
>
>                 Key: HIVE-20725
>                 URL: https://issues.apache.org/jira/browse/HIVE-20725
>             Project: Hive
>          Issue Type: Bug
>            Reporter: zhuwei
>            Assignee: zhuwei
>            Priority: Major
>         Attachments: HIVE-20725.1.patch
>
>
> If two users attempt a dynamic insert into the same new partition at the same time, a possible race condition exists which results in an error state: the partition info has been inserted into the metastore, but the data files have been removed.
> The current logic in the function "add_partition_core" in class HiveMetaStore.HMSHandler is:
> # check if the partition already exists
> # create the partition files directory if it does not exist
> # try to add the partition
> # if adding the partition failed and the directory was created in step 2, delete that directory
> Assume two users are inserting the same partition at the same time, with two threads handling their requests, say thread A and thread B. If all four steps of thread B complete between steps 2 and 3 of thread A, the sequence is: A1 A2 B1 B2 B3 B4 A3 A4. The partition files written by B will be removed by A.
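The interleaving can be replayed deterministically with in-memory stand-ins for the metastore and the filesystem. This is purely illustrative — the partition name and data structures are hypothetical, not Hive's API — but it shows how A's cleanup step deletes the directory that now holds B's data:

```java
import java.util.HashSet;
import java.util.Set;

class PartitionRaceSketch {
    // Replays the interleaving A1 A2 B1 B2 B3 B4 A3 A4 from the report.
    // Returns {partition present in metastore, partition files exist}.
    static boolean[] simulateRace() {
        Set<String> metastore = new HashSet<>();
        Set<String> fs = new HashSet<>();
        String part = "dt=20181010";  // hypothetical partition name

        boolean aCreatedDir = fs.add(part);    // A1+A2: A sees nothing, creates the dir
        boolean bCreatedDir = fs.add(part);    // B1+B2: dir already exists, B created nothing
        metastore.add(part);                   // B3: B's metastore insert succeeds
        if (!bCreatedDir) {
            // B4: B did not create the dir, so B deletes nothing; B's data stays in fs
        }

        boolean aAddOk = metastore.add(part);  // A3: A's insert fails (partition exists)
        if (!aAddOk && aCreatedDir) {
            fs.remove(part);                   // A4: A deletes the dir, with B's data in it
        }
        return new boolean[] { metastore.contains(part), fs.contains(part) };
    }

    public static void main(String[] args) {
        boolean[] r = simulateRace();
        System.out.println("in metastore: " + r[0] + ", files exist: " + r[1]);
    }
}
```

The end state is exactly the inconsistency described: the metastore says the partition exists while its files are gone. The check-then-act pattern (steps 1-2 vs. steps 3-4) is what needs to be made atomic, or the cleanup must verify that no concurrent add succeeded.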
[jira] [Commented] (HIVE-20725) Simultaneous dynamic inserts can result in partition files lost
[ https://issues.apache.org/jira/browse/HIVE-20725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16647298#comment-16647298 ]

zhuwei commented on HIVE-20725:
-------------------------------

[~gopalv] Thanks for the message. I am not sure in which version this feature will be released. Since I have made a fix for this usage at my company, I attached the patch so that others on older versions can have a simple quick fix.
[jira] [Updated] (HIVE-20725) Simultaneous dynamic inserts can result in partition files lost
[ https://issues.apache.org/jira/browse/HIVE-20725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20725:
--------------------------
    Attachment: HIVE-20725.1.patch
[jira] [Assigned] (HIVE-20725) Simultaneous dynamic inserts can result in partition files lost
[ https://issues.apache.org/jira/browse/HIVE-20725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei reassigned HIVE-20725:
-----------------------------
[jira] [Commented] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16626664#comment-16626664 ]

zhuwei commented on HIVE-18871:
---
[~prasanth_j] zhuwei8...@gmail.com

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
> Issue Type: Bug
> Components: Tez
> Affects Versions: 2.2.1, 4.0.0, 3.2.0
> Environment: hadoop 2.6.5, hive 2.2.1, tez 0.8.4
> Reporter: zhuwei
> Assignee: zhuwei
> Priority: Major
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, HIVE-18871.3.patch, HIVE-18871.4.patch, HIVE-18871.5.patch, HIVE-18871.6.patch, HIVE-18871.7.patch, HIVE-18871.8.patch
>
> With the properties hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez set, every query fails with the error log below:
>
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
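The "Wrong FS" above happens because every aux-jar entry is handed to copyFromLocalFile(), which assumes a file:// path. A minimal sketch of the scheme check that must happen first; the class and method names here are hypothetical illustrations, not Hive's actual code:

```java
import java.net.URI;

public class AuxJarPathCheck {
    // A path with a non-file scheme (e.g. hdfs://) is already on a remote
    // filesystem and must not be localized via copyFromLocalFile().
    public static boolean isRemote(String path) {
        String scheme = URI.create(path).getScheme();
        return scheme != null && !scheme.equals("file");
    }

    public static void main(String[] args) {
        System.out.println(isRemote("hdfs://mycluster/apps/hive/lib/guava.jar")); // true
        System.out.println(isRemote("/apps/hive/lib/guava.jar"));                 // false
    }
}
```

With such a check, remote jars would be added as Tez local resources directly instead of going through the local-copy path that throws here.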
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Attachment: HIVE-20497.4.patch

> ParseException, failed to recognize quoted identifier when re-parsing the re-written query
>
> Key: HIVE-20497
> URL: https://issues.apache.org/jira/browse/HIVE-20497
> Project: Hive
> Issue Type: Bug
> Components: Parser
> Environment: hive 2.1.1
> Reporter: zhuwei
> Assignee: zhuwei
> Priority: Major
> Attachments: HIVE-20497.1.patch, HIVE-20497.2.patch, HIVE-20497.3.patch, HIVE-20497.4.patch
>
> select `user` from team;
>
> Suppose we have a table `team`, and one of its columns has been masked out with `` by column-level authorization. The query above fails with the error "SemanticException org.apache.hadoop.hive.ql.parse.ParseException: line 1:9 Failed to recognize predicate 'user'. Failed rule: 'identifier' in expression specification".
> The root cause is that the back quotes are lost when the AST is rewritten.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
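To make the lost-backquote failure concrete: when the masking rewriter turns AST identifier nodes back into SQL text, an identifier like `user` must be re-quoted or the re-parse sees the reserved word bare. A hypothetical sketch of that unparse step (the class, method, and the tiny reserved-word set are illustrations, not Hive's parser code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class IdentifierQuoting {
    // Deliberately tiny reserved-word set, for illustration only.
    private static final Set<String> RESERVED =
        new HashSet<>(Arrays.asList("user", "date", "timestamp"));

    // Restore backquotes around identifiers that cannot appear bare:
    // reserved words, or names with characters outside [A-Za-z0-9_].
    public static String unparse(String identifier) {
        if (RESERVED.contains(identifier.toLowerCase())
                || !identifier.matches("[A-Za-z_][A-Za-z0-9_]*")) {
            return "`" + identifier + "`";
        }
        return identifier;
    }

    public static void main(String[] args) {
        System.out.println(unparse("user")); // `user`
        System.out.println(unparse("team")); // team
    }
}
```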
[jira] [Commented] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16621429#comment-16621429 ]

zhuwei commented on HIVE-20497:
---
I have checked the failed test cases; they are not related to my change. [~ashutoshc] Could you help review the code?
[jira] [Commented] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620288#comment-16620288 ]

zhuwei commented on HIVE-20497:
---
Revised the code to comply with the Hive code style.
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Attachment: HIVE-20497.3.patch
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Status: Patch Available (was: Open)

The first patch caused a regression in a corner case; re-fixed it.
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Attachment: HIVE-20497.2.patch
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Attachment: HIVE-20497.1.patch
[jira] [Updated] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-20497:
--
Status: Patch Available (was: Open)
[jira] [Assigned] (HIVE-20497) ParseException, failed to recognize quoted identifier when re-parsing the re-written query
[ https://issues.apache.org/jira/browse/HIVE-20497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei reassigned HIVE-20497:
-
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--
Component/s: Parser

> parse error with semicolon in comment which starts with whitespace in file
>
> Key: HIVE-19287
> URL: https://issues.apache.org/jira/browse/HIVE-19287
> Project: Hive
> Issue Type: Bug
> Components: Parser
> Environment: hive 2.2.1
> Reporter: zhuwei
> Assignee: zhuwei
> Priority: Minor
> Attachments: HIVE-19287.1.patch, HIVE-19287.2.patch, HIVE-19287.3.patch
>
> Hive reports a parse error when a query written in a file looks like this:
> select col
> --this is; an example
> from db.table
> limit 1;

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
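The failure mode above is the statement splitter treating the `;` inside the `--` comment as a statement terminator. A self-contained sketch of comment-aware splitting, assuming whole-line `--` comments (possibly indented); this is an illustration, not CliDriver's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class ScriptSplitter {
    // Split a script on semicolons, dropping whole-line "--" comments
    // (including ones that start with whitespace) so a ';' inside a
    // comment never terminates a statement.
    public static List<String> split(String script) {
        List<String> stmts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (String line : script.split("\n")) {
            if (line.trim().startsWith("--")) {
                continue; // comment line: ignore it, semicolons and all
            }
            for (char c : line.toCharArray()) {
                if (c == ';') {
                    stmts.add(cur.toString().trim());
                    cur.setLength(0);
                } else {
                    cur.append(c);
                }
            }
            cur.append(' ');
        }
        if (!cur.toString().trim().isEmpty()) {
            stmts.add(cur.toString().trim());
        }
        return stmts;
    }

    public static void main(String[] args) {
        System.out.println(split("select col\n --this is; an example\nfrom db.table\nlimit 1;"));
    }
}
```

Note this sketch does not handle `--` inside string literals or trailing comments on a code line; a real fix has to track quote state as well.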
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--
Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--
Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--
Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--
Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19450:
--
Status: Patch Available (was: Open)

add null pointer check

> OOM due to map join and backup task not invoked
>
> Key: HIVE-19450
> URL: https://issues.apache.org/jira/browse/HIVE-19450
> Project: Hive
> Issue Type: Bug
> Reporter: zhuwei
> Assignee: zhuwei
> Priority: Major
> Attachments: HIVE-19450.1.patch, HIVE-19450.2.patch
>
> A map join task may cause an OOM due to ORC compression; in most cases a backup task is then invoked. However, if the size of the hash table is close to the memory limit, the task that loads the hash table will NOT fail, and the OOM happens in the next task, which does the local join. The load task has a backup, but the next task does not, so in this case the whole query fails.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
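One way to read the description above: the safer behavior is to fail the hash-table load while a backup (shuffle join) task still exists, rather than let the next task OOM with no fallback. A hypothetical sketch of such a memory-headroom guard (class name, method, and the 0.55 threshold are all illustrative assumptions, not the actual patch):

```java
public class HashTableMemoryGuard {
    // True when used heap already exceeds the given fraction of the maximum,
    // i.e. finishing the hash-table load is likely to starve the join.
    public static boolean overThreshold(long usedBytes, long maxBytes, double fraction) {
        return usedBytes > (long) (maxBytes * fraction);
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        if (overThreshold(used, rt.maxMemory(), 0.55)) {
            // Failing here lets the conditional task fall back to its backup.
            throw new RuntimeException("Hash table too large for map join; abort load so the backup task runs");
        }
    }
}
```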
[jira] [Updated] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19450: -- Attachment: HIVE-19450.2.patch
[jira] [Updated] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19450: -- Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871: -- Status: Open (was: Patch Available) > hive on tez execution error due to set hive.aux.jars.path to hdfs:// > > > Key: HIVE-18871 > URL: https://issues.apache.org/jira/browse/HIVE-18871 > Project: Hive > Issue Type: Bug > Components: Tez > Affects Versions: 2.2.1 > Environment: hadoop 2.6.5 > hive 2.2.1 > tez 0.8.4 > Reporter: zhuwei > Assignee: zhuwei > Priority: Major > Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, > HIVE-18871.3.patch, HIVE-18871.4.patch > > > When the properties hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar and hive.execution.engine=tez are set, executing any query fails with the error log below: > exec.Task: Failed to execute tez graph. > java.lang.IllegalArgumentException: Wrong FS: > hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:/// > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) > ~[hadoop-common-2.6.0.jar:?] 
> at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) > ~[hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > [hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > 
[hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) >
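The "Wrong FS ... expected: file:///" error above occurs because the jar path is resolved against the local filesystem even though its URI says hdfs://. The fix direction is to pick the filesystem from the path's own scheme (in Hadoop, `Path.getFileSystem(Configuration)` does this) instead of assuming file://. A plain-JDK sketch of the scheme check, with no Hadoop dependency (the method name `schemeOf` is illustrative, not Hive code):

```java
import java.net.URI;

public class SchemeCheckDemo {
    // Return the filesystem scheme a resource should be loaded from,
    // defaulting to "file" when the path carries no scheme at all.
    static String schemeOf(String path) {
        String scheme = URI.create(path).getScheme();
        return scheme == null ? "file" : scheme;
    }

    public static void main(String[] args) {
        // A local-FS check like RawLocalFileSystem's checkPath rejects this path,
        // so the resource must be localized from HDFS first:
        System.out.println(schemeOf("hdfs://mycluster/apps/hive/lib/guava.jar"));
        System.out.println(schemeOf("/tmp/local.jar"));
    }
}
```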
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871: -- Status: Patch Available (was: Open)
[jira] [Commented] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468633#comment-16468633 ] zhuwei commented on HIVE-19450: --- [~ashutoshc] I have checked the failed tests; they are not related to my change. Could you help review the change?
[jira] [Commented] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468599#comment-16468599 ] zhuwei commented on HIVE-19287: --- I have revised the change to follow the Hive code conventions and checked the failed tests; they are not introduced by my change. [~abstractdog] Could you help review the code? > parse error with semicolon in comment which starts with whitespace in file > -- > > Key: HIVE-19287 > URL: https://issues.apache.org/jira/browse/HIVE-19287 > Project: Hive > Issue Type: Bug > Environment: hive 2.2.1 > Reporter: zhuwei > Assignee: zhuwei > Priority: Minor > Attachments: HIVE-19287.1.patch, HIVE-19287.2.patch, > HIVE-19287.3.patch > > > An error occurs when a Hive query written in a file looks like this: > select col > --this is; an example > from db.table > limit 1;
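The parse error above happens when the script is split into statements on every ';', including one that sits inside a "--" line comment. A comment-aware splitter must ignore text from "--" to the end of the line, even when the comment line starts with whitespace. The following is a hypothetical, simplified sketch of that idea, not the actual CliDriver code (and it deliberately ignores the further complication of "--" inside quoted strings):

```java
import java.util.ArrayList;
import java.util.List;

public class SemicolonSplitDemo {
    // Split a script into statements on ';', ignoring ';' inside "--" comments.
    static List<String> split(String script) {
        List<String> stmts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inComment = false;
        for (int i = 0; i < script.length(); i++) {
            char c = script.charAt(i);
            if (!inComment && c == '-' && i + 1 < script.length()
                    && script.charAt(i + 1) == '-') {
                inComment = true;               // "--" starts a line comment
            } else if (c == '\n') {
                inComment = false;              // the comment ends at the line break
            }
            if (c == ';' && !inComment) {
                stmts.add(cur.toString().trim());
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        if (cur.toString().trim().length() > 0) {
            stmts.add(cur.toString().trim());
        }
        return stmts;
    }

    public static void main(String[] args) {
        String q = "select col\n --this is; an example\nfrom db.table\nlimit 1;";
        // The ';' inside the comment no longer splits the query:
        System.out.println(split(q).size());
    }
}
```

With the query from the description, a naive `split(";")` would yield a broken fragment ending at "this is", while the comment-aware version keeps the statement whole.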
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19287: -- Attachment: HIVE-19287.3.patch
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19287: -- Status: Patch Available (was: Open) resubmit the patch to trigger the pre-merge check
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871: -- Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19287: -- Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871: -- Status: Patch Available (was: Open)
[jira] [Commented] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466817#comment-16466817 ] zhuwei commented on HIVE-19202: --- [~dvoros] The query that is failing is in our production environment and is a bit complicated. According to the code logic, the related keywords are: count/sum/group by/join > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug > Components: CBO > Affects Versions: 2.1.1 > Reporter: zhuwei > Assignee: zhuwei > Priority: Critical > Fix For: 3.1.0 > > Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch > > > I ran a query with join and group by with the settings below; CBO failed due to a NullPointerException in HiveAggregate.isBucketedInput(): > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, the following functions are implemented: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is "this.getInput()", which is obviously wrong here; the right argument is "this".
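The bug shape described in HIVE-19202 can be illustrated abstractly: distribution handlers are registered per operator type, so asking for the distribution of the aggregate's *input* instead of the aggregate itself can hit an operator type with no handler and yield null, which the caller then dereferences. The classes below are hypothetical stand-ins, not the Calcite/Hive API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class MetadataDispatchDemo {
    interface RelNode { RelNode getInput(); }
    static class Scan implements RelNode {
        public RelNode getInput() { return null; }
    }
    static class Aggregate implements RelNode {
        final RelNode input;
        Aggregate(RelNode input) { this.input = input; }
        public RelNode getInput() { return input; }
    }

    // Handlers exist for Aggregate (and, in Hive, Join) -- but not for Scan.
    static final Map<Class<?>, Function<RelNode, String>> HANDLERS = new HashMap<>();
    static { HANDLERS.put(Aggregate.class, n -> "HASH_DISTRIBUTED"); }

    // Dispatch by the node's runtime type; null when no handler matches.
    static String distribution(RelNode node) {
        Function<RelNode, String> h = HANDLERS.get(node.getClass());
        return h == null ? null : h.apply(node);
    }

    public static void main(String[] args) {
        Aggregate agg = new Aggregate(new Scan());
        System.out.println(distribution(agg));            // handler found
        System.out.println(distribution(agg.getInput())); // null -> NPE downstream
    }
}
```

Passing the node itself (the analogue of the one-line fix from "this.getInput()" to "this") keeps the lookup on a type that actually has a handler.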
[jira] [Updated] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19450: -- Status: Patch Available (was: Open) The fix is to further check the parent task's backup task.
[jira] [Updated] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19450: -- Attachment: HIVE-19450.1.patch
[jira] [Assigned] (HIVE-19450) OOM due to map join and backup task not invoked
[ https://issues.apache.org/jira/browse/HIVE-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei reassigned HIVE-19450: -
[jira] [Commented] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449974#comment-16449974 ] zhuwei commented on HIVE-19287: --- [~abstractdog] Thanks. I have revised the title. Actually, it's related to both the semicolon and the whitespace; if there is no semicolon, the whitespace is handled later.
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment which starts with whitespace in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19287: -- Summary: parse error with semicolon in comment which starts with whitespace in file (was: parse error with semicolon in comment with in file)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment with in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19287: -- Summary: parse error with semicolon in comment with in file (was: parse error with semicolon in comment with in file)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment with in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Summary: parse error with semicolon in comment with in file  (was: parse error with semicolon in comment in file)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Attachment: HIVE-19202.2.patch
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Attachment: HIVE-19287.2.patch
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Attachment: (was: HIVE-19202.2.patch)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Status: Patch Available  (was: Open)

Added line.trim() to fix the bug.
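For illustration, the effect of that trim can be sketched with a minimal standalone example (a hypothetical helper, not the actual CliDriver code): without trimming, a "--" comment line that begins with whitespace is not recognized as a comment, so the semicolon inside it is treated as a command terminator and splits the query.

```java
// Minimal sketch of the bug and fix (hypothetical helper, not the real
// CliDriver code). A script's "--" comment lines are dropped before
// commands are split on ";". Without trim(), a comment line that starts
// with whitespace survives, and the ";" inside it splits the command.
public class CommentStrip {
    // Returns true if the line is a "--" comment, optionally tolerating
    // leading whitespace (the fix).
    static boolean isComment(String line, boolean trimFirst) {
        String l = trimFirst ? line.trim() : line;
        return l.startsWith("--");
    }

    // Drops comment lines, then counts ";" command terminators in the rest.
    static int countCommands(String script, boolean trimFirst) {
        StringBuilder sb = new StringBuilder();
        for (String line : script.split("\n")) {
            if (!isComment(line, trimFirst)) {
                sb.append(line).append('\n');
            }
        }
        int count = 0;
        for (char c : sb.toString().toCharArray()) {
            if (c == ';') {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        String script = "select col\n --this is; an example\nfrom db.table\nlimit 1;\n";
        // Buggy: the indented comment survives, its ";" splits the query.
        System.out.println(countCommands(script, false)); // prints 2
        // Fixed: the comment is removed, one complete command remains.
        System.out.println(countCommands(script, true));  // prints 1
    }
}
```

With the indented comment in the example query, the untrimmed check yields two apparent commands; the trimmed check yields the intended single command.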
[jira] [Updated] (HIVE-19287) parse error with semicolon in comment in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-19287:
--------------------------
    Attachment: HIVE-19287.1.patch
[jira] [Assigned] (HIVE-19287) parse error with semicolon in comment in file
[ https://issues.apache.org/jira/browse/HIVE-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei reassigned HIVE-19287:
-----------------------------
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--------------------------
    Target Version/s:   (was: 2.2.1)
              Status: Patch Available  (was: Open)

Refined the code to stick to the Hive code style.

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
> --------------------------------------------------------------------
>
>             Key: HIVE-18871
>             URL: https://issues.apache.org/jira/browse/HIVE-18871
>         Project: Hive
>      Issue Type: Bug
>      Components: Tez
>    Affects Versions: 2.2.1
>     Environment: hadoop 2.6.5
> hive 2.2.1
> tez 0.8.4
>       Reporter: zhuwei
>       Assignee: zhuwei
>       Priority: Major
>    Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, HIVE-18871.3.patch, HIVE-18871.4.patch
>
> With the properties hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez set, executing any query fails with the error log below:
>
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) >
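The "Wrong FS" failure above occurs because the hdfs:// path from hive.aux.jars.path reaches a code path that assumes a local file (DagUtils.localizeResource ends up in copyFromLocalFile on the local filesystem). The general shape of the needed check — inspecting the URI scheme to tell remote resources from local ones — can be sketched as follows; the helper names are hypothetical and this is not the actual patch:

```java
import java.net.URI;

// Sketch of a scheme check that distinguishes resources already on a
// remote filesystem (e.g. hdfs://) from local ones (hypothetical helper,
// not the real DagUtils code). Handing an hdfs:// path to the local
// filesystem's copy path is what triggers the Wrong FS error.
public class AuxJarScheme {
    // Returns true when the resource must be copied from the local
    // filesystem, i.e. it has no scheme or an explicit file:// scheme.
    static boolean needsLocalCopy(String path) {
        String scheme = URI.create(path).getScheme();
        return scheme == null || scheme.equals("file");
    }

    public static void main(String[] args) {
        // Already on HDFS: should not go through copyFromLocalFile.
        System.out.println(needsLocalCopy("hdfs://mycluster/apps/hive/lib/guava.jar")); // false
        // Plain or file:// paths are local and do need localization.
        System.out.println(needsLocalCopy("/apps/hive/lib/guava.jar"));                 // true
        System.out.println(needsLocalCopy("file:///apps/hive/lib/guava.jar"));          // true
    }
}
```

Under this assumption, an hdfs:// aux jar would be registered as a remote resource rather than copied, avoiding the RawLocalFileSystem.checkPath failure in the trace above.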
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--------------------------
    Attachment: HIVE-18871.4.patch
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--------------------------
    Status: Patch Available  (was: Open)

Uploaded a new patch file targeted at the master branch.
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--------------------------
    Attachment: HIVE-18871.3.patch
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--------------------------
         Status: Open  (was: Patch Available)
        Fix For: 2.2.1
[hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
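The "Wrong FS ... expected: file:///" failure in the trace above occurs when a path with an hdfs:// scheme is handed to the local filesystem, whose checkPath rejects any path outside its own scheme. The sketch below models the mismatch with plain java.net.URI (not Hadoop's actual FileSystem API; the helper names here are hypothetical) and shows the usual remedy: resolve the filesystem from the path's own scheme, as Hadoop's Path.getFileSystem(conf) does.

```java
import java.net.URI;

// Toy model of Hadoop's FileSystem.checkPath: a filesystem bound to one
// scheme rejects paths from another scheme with a "Wrong FS" error.
public class SchemeCheck {
    // Hypothetical helper: verify a path belongs to the given scheme.
    static void checkPath(String fsScheme, URI path) {
        String s = path.getScheme() == null ? "file" : path.getScheme();
        if (!s.equals(fsScheme)) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected: " + fsScheme + ":///");
        }
    }

    // Buggy pattern: always resolve resources against the local filesystem.
    static void localizeViaLocalFs(URI jar) {
        checkPath("file", jar);  // throws for hdfs:// paths
    }

    // Fixed pattern: pick the filesystem from the path's own scheme,
    // mirroring Path.getFileSystem(conf) in Hadoop.
    static String localizeViaOwnFs(URI jar) {
        String scheme = jar.getScheme() == null ? "file" : jar.getScheme();
        checkPath(scheme, jar);
        return scheme;  // stands in for the resolved FileSystem
    }
}
```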
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871: -- Fix Version/s: (was: 2.2.1) > hive on tez execution error due to set hive.aux.jars.path to hdfs:// > > > Key: HIVE-18871 > URL: https://issues.apache.org/jira/browse/HIVE-18871 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 2.2.1 > Environment: hadoop 2.6.5 > hive 2.2.1 > tez 0.8.4 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, > HIVE-18871.3.patch > > > When set the properties > hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar > and hive.execution.engine=tez; execute any query will fail with below error > log: > exec.Task: Failed to execute tez graph. > java.lang.IllegalArgumentException: Wrong FS: > hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:/// > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) > ~[hadoop-common-2.6.0.jar:?] 
> at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) > ~[hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > [hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > 
[hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) > [hive-cli-2.1.1.jar:2.1.1] > at
[jira] [Commented] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440349#comment-16440349 ] zhuwei commented on HIVE-19202: --- Hi [~ashutoshc], thanks for the comment; I am new to the open source community. I checked the failed tests; they were not introduced by my change. What should I do next? How can I submit a review request to Hive? > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch > > > I ran a query with a join and a group by under the settings below; CBO failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented the functions below: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , which is not right here. The correct argument is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19202: -- Fix Version/s: (was: 2.1.1) > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch > > > I ran a query with join and group by with below settings, COB failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented below functions: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , obviously it's not right here. The right argument needed is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19202: -- Status: Patch Available (was: Open) Upload a new patch file targeted to master branch. > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > Fix For: 2.1.1 > > Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch > > > I ran a query with join and group by with below settings, COB failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented below functions: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , obviously it's not right here. The right argument needed is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19202: -- Attachment: HIVE-19202.2.patch > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > Fix For: 2.1.1 > > Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch > > > I ran a query with join and group by with below settings, COB failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented below functions: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , obviously it's not right here. The right argument needed is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19202: -- Status: Open (was: Patch Available) > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > Fix For: 2.1.1 > > Attachments: HIVE-19202.1.patch > > > I ran a query with join and group by with below settings, COB failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented below functions: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , obviously it's not right here. The right argument needed is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19202: -- Component/s: CBO > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > Fix For: 2.1.1 > > Attachments: HIVE-19202.1.patch > > > I ran a query with join and group by with below settings, COB failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented below functions: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , obviously it's not right here. The right argument needed is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19202: -- Fix Version/s: 2.1.1 Affects Version/s: 2.1.1 Status: Patch Available (was: Open) > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.1 >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > Fix For: 2.1.1 > > Attachments: HIVE-19202.1.patch > > > I ran a query with join and group by with below settings, COB failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented below functions: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , obviously it's not right here. The right argument needed is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-19202: -- Attachment: HIVE-19202.1.patch > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > Attachments: HIVE-19202.1.patch > > > I ran a query with join and group by with below settings, COB failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented below functions: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , obviously it's not right here. The right argument needed is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
[ https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei reassigned HIVE-19202: - > CBO failed due to NullPointerException in HiveAggregate.isBucketedInput() > - > > Key: HIVE-19202 > URL: https://issues.apache.org/jira/browse/HIVE-19202 > Project: Hive > Issue Type: Bug >Reporter: zhuwei >Assignee: zhuwei >Priority: Critical > > I ran a query with join and group by with below settings, COB failed due to > NullPointerException in HiveAggregate.isBucketedInput() > set hive.execution.engine=tez; > set hive.cbo.costmodel.extended=true; > > In class HiveRelMdDistribution, we implemented below functions: > public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery > mq) > public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq) > > But in HiveAggregate.isBucketedInput, the argument passed to distribution is > "this.getInput()" > , obviously it's not right here. The right argument needed is "this" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
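The HIVE-19202 NullPointerException reported above fits a common pattern in Calcite-style metadata dispatch: handlers such as distribution(HiveAggregate, ...) are looked up by the concrete class of the argument, so passing this.getInput() instead of this asks about a node for which no handler exists. The toy model below (a plain map in place of Calcite's reflective dispatch; all class names are simplified stand-ins, not the real Hive/Calcite types) shows why the wrong argument yields a null result that later surfaces as an NPE.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of Calcite-style metadata dispatch: handlers are registered per
// concrete node class, and asking about an unregistered class yields null.
class RelNode {}

class HiveAggregate extends RelNode {
    private final RelNode input = new RelNode();  // generic input: no handler for it
    RelNode getInput() { return input; }
}

public class MdDispatch {
    static final Map<Class<?>, String> HANDLERS = new HashMap<>();
    static {
        HANDLERS.put(HiveAggregate.class, "HASH_DISTRIBUTED");  // handler exists
        // note: no handler registered for plain RelNode
    }

    static String distribution(RelNode node) {
        return HANDLERS.get(node.getClass());  // null when no handler matches
    }

    // Buggy: asks about the input, which has no handler -> null -> later NPE.
    static String isBucketedInputBuggy(HiveAggregate agg) {
        return distribution(agg.getInput());
    }

    // Fixed: ask about the aggregate itself, for which a handler is registered.
    static String isBucketedInputFixed(HiveAggregate agg) {
        return distribution(agg);
    }
}
```

The one-line fix described in the issue corresponds to the fixed variant: query the distribution of the aggregate itself, not of its input.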
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871: -- Status: Patch Available (was: Open) Re-trigger the preCommit patch testing. > hive on tez execution error due to set hive.aux.jars.path to hdfs:// > > > Key: HIVE-18871 > URL: https://issues.apache.org/jira/browse/HIVE-18871 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 2.2.1 > Environment: hadoop 2.6.5 > hive 2.2.1 > tez 0.8.4 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Fix For: 2.2.1 > > Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch > > > When the properties > hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar > and hive.execution.engine=tez are set, any query fails with the error > log below: > exec.Task: Failed to execute tez graph. > java.lang.IllegalArgumentException: Wrong FS: > hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:/// > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) > ~[hadoop-common-2.6.0.jar:?] 
> at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) > ~[hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > [hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > 
[hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) > [hive-cli-2.1.1.jar:2.1.1] > at
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871: -- Status: Open (was: Patch Available) > hive on tez execution error due to set hive.aux.jars.path to hdfs:// > > > Key: HIVE-18871 > URL: https://issues.apache.org/jira/browse/HIVE-18871 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 2.2.1 > Environment: hadoop 2.6.5 > hive 2.2.1 > tez 0.8.4 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Fix For: 2.2.1 > > Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch > > > When set the properties > hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar > and hive.execution.engine=tez; execute any query will fail with below error > log: > exec.Task: Failed to execute tez graph. > java.lang.IllegalArgumentException: Wrong FS: > hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:/// > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) > ~[hadoop-common-2.6.0.jar:?] 
> at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) > ~[hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) > ~[hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > [hive-exec-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) > [hive-exec-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > 
[hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) > [hive-cli-2.1.1.jar:2.1.1] > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) >
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871: -- Fix Version/s: 2.2.1 Status: Patch Available (was: Open) > hive on tez execution error due to set hive.aux.jars.path to hdfs:// > > > Key: HIVE-18871 > URL: https://issues.apache.org/jira/browse/HIVE-18871 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 2.2.1 > Environment: hadoop 2.6.5 > hive 2.2.1 > tez 0.8.4 >Reporter: zhuwei >Assignee: zhuwei >Priority: Major > Fix For: 2.2.1 > > Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch > > > When set the properties > hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar > and hive.execution.engine=tez; execute any query will fail with below error > log: > exec.Task: Failed to execute tez graph. > java.lang.IllegalArgumentException: Wrong FS: > hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:/// > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) > ~[hadoop-common-2.6.0.jar:?] > at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) > ~[hadoop-common-2.6.0.jar:?] > at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) > ~[hadoop-common-2.6.0.jar:?] 
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) [hive-cli-2.1.1.jar:2.1.1]
> at ...
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871:
Status: Open (was: Patch Available)

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
> Issue Type: Bug
> Components: Tez
> Affects Versions: 2.2.1
> Environment: hadoop 2.6.5, hive 2.2.1, tez 0.8.4
> Reporter: zhuwei
> Assignee: zhuwei
> Priority: Major
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch
>
> With the properties hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar and hive.execution.engine=tez set, executing any query fails with the error log below:
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) [hive-cli-2.1.1.jar:2.1.1]
> at ...
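The "Wrong FS: ..., expected: file:///" failure above arises because the localization path handed the hdfs:// URI to the local file system, whose checkPath rejects any path with a non-matching scheme. A minimal sketch of the underlying idea — decide by the URI's own scheme whether a configured aux-jar path is local or remote — is shown below. The class and helper names (AuxJarPathCheck, isLocalPath) are hypothetical illustrations, not code from the attached patches, and only the JDK's java.net.URI is used; the real fix in Hive would resolve the matching Hadoop FileSystem for the URI rather than assuming the local one.

```java
import java.net.URI;

public class AuxJarPathCheck {
    // Hypothetical helper: a path with no scheme or a file:// scheme is
    // local; anything else (e.g. hdfs://) must go to the matching remote
    // file system instead of RawLocalFileSystem.
    static boolean isLocalPath(String path) {
        String scheme = URI.create(path).getScheme();
        return scheme == null || scheme.equals("file");
    }

    public static void main(String[] args) {
        // The configured value from the bug report is remote:
        if (isLocalPath("hdfs://mycluster/apps/hive/lib/guava.jar"))
            throw new AssertionError("hdfs path wrongly treated as local");
        // Plain and file:// paths stay on the local file system:
        if (!isLocalPath("/tmp/guava.jar"))
            throw new AssertionError("plain path should be local");
        if (!isLocalPath("file:///tmp/guava.jar"))
            throw new AssertionError("file:// path should be local");
        System.out.println("ok");
    }
}
```

Running the class prints "ok"; with the scheme test in place, an hdfs:// entry in hive.aux.jars.path would be routed to the distributed file system instead of triggering checkPath.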
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871:
Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871:
Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871:
Attachment: HIVE-18871.2.patch
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871:
Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871:
Attachment: HIVE-18871.1.patch
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhuwei updated HIVE-18871:
Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--
Attachment: (was: HIVE-18871.1.patch)

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
> Issue Type: Bug
> Components: Tez
> Affects Versions: 2.2.1
> Environment: hadoop 2.6.5, hive 2.2.1, tez 0.8.4
> Reporter: zhuwei
> Assignee: zhuwei
> Priority: Major
>
> When the properties hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez are set, executing any query fails with the
> error log below:
>
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) ~[hadoop-common-2.6.0.jar:?]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) ~[hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) [hive-exec-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) [hive-cli-2.1.1.jar:2.1.1]
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
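The "Wrong FS" failure above arises because the hdfs:// jar path ends up being handled by the local filesystem, whose path check rejects any path whose scheme is not file. A minimal, self-contained sketch of that scheme check (the class and method names below are illustrative, not the actual Hadoop API, which lives in org.apache.hadoop.fs.FileSystem.checkPath):

```java
import java.net.URI;

public class WrongFsCheck {
    // Mimics the idea behind Hadoop's FileSystem.checkPath: a filesystem
    // instance rejects paths whose scheme does not match its own. The
    // local filesystem expects "file", so an hdfs:// path raises
    // IllegalArgumentException with a "Wrong FS" message.
    static void checkPath(URI path, String expectedScheme) {
        String scheme = path.getScheme();
        if (scheme != null && !scheme.equals(expectedScheme)) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected: " + expectedScheme + ":///");
        }
    }

    public static void main(String[] args) {
        URI jar = URI.create("hdfs://mycluster/apps/hive/lib/guava.jar");
        try {
            // The local FS is asked to stat an hdfs:// path, as in the trace.
            checkPath(jar, "file");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why resolving the filesystem from the path's own scheme (for example via Path.getFileSystem(conf) in Hadoop) avoids the error: an hdfs:// path is then served by the distributed filesystem rather than forced through the local one.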
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--
Attachment: HIVE-18871.1.patch
[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://
[ https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhuwei updated HIVE-18871:
--
Attachment: (was: HIVE-18871.1.patch)