[jira] [Updated] (HIVE-14428) HadoopMetrics2Reporter leaks memory if the metrics sink is not configured correctly
[ https://issues.apache.org/jira/browse/HIVE-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HIVE-14428:
---------------------------------
    Status: Patch Available  (was: Open)

> HadoopMetrics2Reporter leaks memory if the metrics sink is not configured
> correctly
> --------------------------------------------------------------------------
>
>            Key: HIVE-14428
>            URL: https://issues.apache.org/jira/browse/HIVE-14428
>        Project: Hive
>     Issue Type: Sub-task
>     Components: HiveServer2
>       Reporter: Siddharth Seth
>       Assignee: Thejas M Nair
>       Priority: Critical
>    Attachments: HIVE-14428.1.patch
>
> About 80MB held after 7 hours of running. Metrics2Collector aggregates only
> when it's invoked by the Hadoop sink.
> Options - the first one is better IMO.
> 1. Fix Metrics2Collector to aggregate more often, and fix the dependency in
> Hive accordingly
> 2. Don't setup the metrics sub-system if a sink is not configured.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
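The failure mode described above -- snapshots accumulate because aggregation only runs when the Hadoop sink polls the reporter -- can be sketched as follows. This is an illustrative toy, not the Hive or dropwizard-hadoop-metrics2-reporter source; the class and method names are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of the leak: a reporter queues metric snapshots for a
// sink to consume. If no metrics2 sink is ever configured, nothing drains
// the queue, and heap usage grows with every report cycle.
public class QueueingReporterSketch {

    static final class Snapshot {
        final long timestamp;
        final byte[] payload = new byte[1024]; // stand-in for aggregated metrics
        Snapshot(long timestamp) { this.timestamp = timestamp; }
    }

    private final Deque<Snapshot> pending = new ArrayDeque<>();

    // Called on a timer by the reporting thread.
    void report(long now) {
        pending.add(new Snapshot(now)); // held until a sink calls drain()
    }

    // Only ever invoked when a metrics2 sink is actually configured.
    int drain() {
        int n = pending.size();
        pending.clear();
        return n;
    }

    int pendingCount() { return pending.size(); }

    public static void main(String[] args) {
        QueueingReporterSketch reporter = new QueueingReporterSketch();
        for (long t = 0; t < 10_000; t++) {
            reporter.report(t); // no sink configured -> nothing ever drains
        }
        // ~10 MB of snapshots retained after 10k cycles; a long-running
        // HiveServer2 just keeps accumulating (the report above saw ~80 MB).
        System.out.println("pending snapshots: " + reporter.pendingCount());
    }
}
```

Option 2 in the description avoids the growth by never starting the reporter when no sink is configured; option 1 bounds it by aggregating (and discarding) snapshots on a schedule instead of waiting for a sink.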
[jira] [Updated] (HIVE-14428) HadoopMetrics2Reporter leaks memory if the metrics sink is not configured correctly
[ https://issues.apache.org/jira/browse/HIVE-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HIVE-14428:
---------------------------------
    Attachment: HIVE-14428.1.patch
[jira] [Assigned] (HIVE-14428) HadoopMetrics2Reporter leaks memory if the metrics sink is not configured correctly
[ https://issues.apache.org/jira/browse/HIVE-14428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair reassigned HIVE-14428:
------------------------------------
    Assignee: Thejas M Nair
[jira] [Commented] (HIVE-14390) Wrong Table alias when CBO is on
[ https://issues.apache.org/jira/browse/HIVE-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410830#comment-15410830 ]

Nemon Lou commented on HIVE-14390:
----------------------------------

Thanks a lot, [~pxiong] [~ashutoshc]. I haven't managed to update these qtest results yet.

> Wrong Table alias when CBO is on
> --------------------------------
>
>              Key: HIVE-14390
>              URL: https://issues.apache.org/jira/browse/HIVE-14390
>          Project: Hive
>       Issue Type: Bug
>       Components: CBO
> Affects Versions: 1.2.1
>         Reporter: Nemon Lou
>         Assignee: Nemon Lou
>         Priority: Minor
>          Fix For: 2.2.0
>
>     Attachments: HIVE-14390.patch, explain.rar
>
> There are 5 web_sales references in query95 of tpcds, with aliases ws1-ws5.
> But the query plan only has ws1 when CBO is on.
> query95:
> {noformat}
> SELECT count(distinct ws1.ws_order_number) as order_count,
>        sum(ws1.ws_ext_ship_cost) as total_shipping_cost,
>        sum(ws1.ws_net_profit) as total_net_profit
> FROM web_sales ws1
> JOIN customer_address ca ON (ws1.ws_ship_addr_sk = ca.ca_address_sk)
> JOIN web_site s ON (ws1.ws_web_site_sk = s.web_site_sk)
> JOIN date_dim d ON (ws1.ws_ship_date_sk = d.d_date_sk)
> LEFT SEMI JOIN (SELECT ws2.ws_order_number as ws_order_number
>                 FROM web_sales ws2 JOIN web_sales ws3
>                 ON (ws2.ws_order_number = ws3.ws_order_number)
>                 WHERE ws2.ws_warehouse_sk <> ws3.ws_warehouse_sk
>                ) ws_wh1
> ON (ws1.ws_order_number = ws_wh1.ws_order_number)
> LEFT SEMI JOIN (SELECT wr_order_number
>                 FROM web_returns wr
>                 JOIN (SELECT ws4.ws_order_number as ws_order_number
>                       FROM web_sales ws4 JOIN web_sales ws5
>                       ON (ws4.ws_order_number = ws5.ws_order_number)
>                       WHERE ws4.ws_warehouse_sk <> ws5.ws_warehouse_sk
>                      ) ws_wh2
>                 ON (wr.wr_order_number = ws_wh2.ws_order_number)) tmp1
> ON (ws1.ws_order_number = tmp1.wr_order_number)
> WHERE d.d_date between '2002-05-01' and '2002-06-30' and
>       ca.ca_state = 'GA' and
>       s.web_company_name = 'pri';
> {noformat}
[jira] [Updated] (HIVE-14456) HS2 memory leak if hadoop2 metrics sink is not configured properly
[ https://issues.apache.org/jira/browse/HIVE-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HIVE-14456:
---------------------------------
    Status: Patch Available  (was: Open)

> HS2 memory leak if hadoop2 metrics sink is not configured properly
> -------------------------------------------------------------------
>
>            Key: HIVE-14456
>            URL: https://issues.apache.org/jira/browse/HIVE-14456
>        Project: Hive
>     Issue Type: Bug
>     Components: HiveServer2, Metastore
>       Reporter: Thejas M Nair
>       Assignee: Thejas M Nair
>       Priority: Critical
>    Attachments: HIVE-14456.1.patch
>
> The dropwizard-metrics-hadoop-metrics2-reporter version needs to be updated
> to pick up the fix for this in
> https://github.com/joshelser/dropwizard-hadoop-metrics2/issues/4
[jira] [Updated] (HIVE-14456) HS2 memory leak if hadoop2 metrics sink is not configured properly
[ https://issues.apache.org/jira/browse/HIVE-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HIVE-14456:
---------------------------------
    Attachment: HIVE-14456.1.patch
[jira] [Commented] (HIVE-14455) upgrade httpclient, httpcore to match updated hadoop dependency
[ https://issues.apache.org/jira/browse/HIVE-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410806#comment-15410806 ]

Hive QA commented on HIVE-14455:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822456/HIVE-14455.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10440 tests executed

*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/802/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/802/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-802/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12822456 - PreCommit-HIVE-MASTER-Build

> upgrade httpclient, httpcore to match updated hadoop dependency
> ----------------------------------------------------------------
>
>         Key: HIVE-14455
>         URL: https://issues.apache.org/jira/browse/HIVE-14455
>     Project: Hive
>  Issue Type: Bug
>    Reporter: Thejas M Nair
>    Assignee: Thejas M Nair
> Attachments: HIVE-14455.1.patch
>
> Hive has shipped a newer version of httpclient and httpcore than the Hadoop
> 2.x releases since Hive 1.2.0 (HIVE-9709), in order to make use of newer
> APIs in httpclient 4.4.
> There was a security issue in the older versions of httpclient and httpcore
> that hadoop was using, and as a result hadoop moved to httpclient 4.5.2 and
> httpcore 4.4.4 (HADOOP-12767).
> Because hadoop was using the older versions of these libraries and they
> often end up earlier in the classpath, we have had a bunch of difficulties
> in different environments with class/method-not-found errors.
> Now that hadoop's dependencies in the versions with the security fix are
> newer and have the API that hive needs, we can be on the same version. For
> older versions of hadoop this version update doesn't matter, as the
> difference is already there.
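When the same library ships in two versions and the older one wins the classpath race, the resulting class/method-not-found errors are easiest to pin down by asking the JVM where it actually loaded a class from. A minimal diagnostic sketch using only JDK APIs -- in a real investigation the argument would be an httpclient/httpcore class rather than `java.lang.String`:

```java
import java.net.URL;
import java.security.CodeSource;

// Prints the jar (or module) a class was resolved from, to detect an old
// hadoop-provided httpclient shadowing the one hive ships.
public class WhichJar {
    static String locate(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        if (src != null && src.getLocation() != null) {
            return src.getLocation().toString();
        }
        // Bootstrap/platform classes carry no CodeSource; fall back to a
        // classloader resource lookup on the .class file.
        URL u = ClassLoader.getSystemResource(c.getName().replace('.', '/') + ".class");
        return u == null ? "(bootstrap/unknown)" : u.toString();
    }

    public static void main(String[] args) {
        // e.g. WhichJar.locate(org.apache.http.client.HttpClient.class)
        System.out.println(locate(String.class));
    }
}
```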
[jira] [Updated] (HIVE-14455) upgrade httpclient, httpcore to match updated hadoop dependency
[ https://issues.apache.org/jira/browse/HIVE-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HIVE-14455:
---------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-14455) upgrade httpclient, httpcore to match updated hadoop dependency
[ https://issues.apache.org/jira/browse/HIVE-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HIVE-14455:
---------------------------------
    Attachment: HIVE-14455.1.patch
[jira] [Commented] (HIVE-14390) Wrong Table alias when CBO is on
[ https://issues.apache.org/jira/browse/HIVE-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410754#comment-15410754 ]

Pengcheng Xiong commented on HIVE-14390:
----------------------------------------

Thanks a lot [~ashutoshc] for taking this.
[jira] [Updated] (HIVE-14390) Wrong Table alias when CBO is on
[ https://issues.apache.org/jira/browse/HIVE-14390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-14390:
------------------------------------
       Resolution: Fixed
    Fix Version/s: 2.2.0
           Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Nemon!
[jira] [Commented] (HIVE-14436) Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and with MR engine
[ https://issues.apache.org/jira/browse/HIVE-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410742#comment-15410742 ]

Hive QA commented on HIVE-14436:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822426/HIVE-14436.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10441 tests executed

*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityMultiplePreemptionsSameHost2
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/801/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/801/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-801/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12822426 - PreCommit-HIVE-MASTER-Build

> Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: ,
> expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and
> with MR engine
> -------------------------------------------------------------------------
>
>             Key: HIVE-14436
>             URL: https://issues.apache.org/jira/browse/HIVE-14436
>         Project: Hive
>      Issue Type: Bug
>      Components: Hive
> Affects Versions: 1.2.1
>     Environment: HDP 2.4.2 / Hive 1.2.1
>        Reporter: Ratish Maruthiyodan
>        Assignee: Daniel Dai
>          Labels: code
>     Attachments: HIVE-14436.1.patch
>
> PROBLEM:
> The following query, run with the MapReduce engine and
> "hive.optimize.skewjoin = true", fails with the error:
> "FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'"
>
> > SELECT a.col1 FROM db.tableA a INNER JOIN db.tableB b ON b.key=a.key limit 5;
>
> FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'
> 16/08/04 12:47:50 [main]: ERROR ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'
> java.lang.IllegalArgumentException: Error: , expected at the end of 'decimal(9'
>   at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:336)
>   at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseParams(TypeInfoUtils.java:378)
>   at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parsePrimitiveParts(TypeInfoUtils.java:518)
>   at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.parsePrimitiveParts(TypeInfoUtils.java:533)
>   at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:136)
>   at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:109)
>   at org.apache.hadoop.hive.ql.optimizer.physical.GenMRSkewJoinProcessor.processSkewJoin(GenMRSkewJoinProcessor.java:214)
>   at org.apache.hadoop.hive.ql.optimizer.physical.SkewJoinProcFactory$SkewJoinJoinProcessor.process(SkewJoinProcFactory.java:60)
>   at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
>   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
>   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:133)
>   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
>   at org.apache.hadoop.hive.ql.optimizer.physical.SkewJoinResolver$SkewJoinTaskDispatcher.dispatch(SkewJoinResolver.java:100)
>   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
>   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
>   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:133)
>   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
>   at
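The "Error: , expected at the end of 'decimal(9'" message is what Hive's type parser produces when it is handed a parameterized type that was torn apart at a comma. The failure mode can be illustrated like this (hypothetical helper names, not the actual GenMRSkewJoinProcessor code): splitting a comma-joined column-type list on a bare comma breaks decimal(9,2) into "decimal(9" and "2)", while a parenthesis-depth-aware split keeps it intact:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: why splitting a type list on "," corrupts
// parameterized types such as decimal(9,2), and a depth-aware fix.
public class TypeListSplit {

    // Broken: "bigint,decimal(9,2),string" -> ["bigint", "decimal(9", "2)", "string"]
    static List<String> naiveSplit(String typeList) {
        return Arrays.asList(typeList.split(","));
    }

    // Paren-aware: only split on commas at parenthesis depth 0.
    static List<String> depthAwareSplit(String typeList) {
        List<String> out = new ArrayList<>();
        int depth = 0, start = 0;
        for (int i = 0; i < typeList.length(); i++) {
            char c = typeList.charAt(i);
            if (c == '(') depth++;
            else if (c == ')') depth--;
            else if (c == ',' && depth == 0) {
                out.add(typeList.substring(start, i));
                start = i + 1;
            }
        }
        out.add(typeList.substring(start));
        return out;
    }

    public static void main(String[] args) {
        String types = "bigint,decimal(9,2),string";
        // The naive split yields 4 fragments; its second element, "decimal(9",
        // is exactly the string the TypeInfoParser rejects in the stack above.
        System.out.println(naiveSplit(types).size());  // 4
        System.out.println(depthAwareSplit(types));    // [bigint, decimal(9,2), string]
    }
}
```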
[jira] [Updated] (HIVE-14455) upgrade httpclient, httpcore to match updated hadoop dependency
[ https://issues.apache.org/jira/browse/HIVE-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair updated HIVE-14455:
---------------------------------
    Summary: upgrade httpclient, httpcore to match updated hadoop dependency  (was: upgrade httpclient, httpcore to match update hadoop dependency)
[jira] [Commented] (HIVE-14453) refactor physical writing of ORC data and metadata to FS from the logical writers
[ https://issues.apache.org/jira/browse/HIVE-14453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410720#comment-15410720 ]

Hive QA commented on HIVE-14453:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822409/HIVE-14453.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10440 tests executed

*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/800/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/800/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-800/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12822409 - PreCommit-HIVE-MASTER-Build

> refactor physical writing of ORC data and metadata to FS from the logical
> writers
> --------------------------------------------------------------------------
>
>         Key: HIVE-14453
>         URL: https://issues.apache.org/jira/browse/HIVE-14453
>     Project: Hive
>  Issue Type: Bug
>    Reporter: Sergey Shelukhin
>    Assignee: Sergey Shelukhin
> Attachments: HIVE-14453.patch
>
> ORC data doesn't have to go directly into an HDFS stream via buffers; it can
> go somewhere else (e.g. a write-thru cache, or an addressable system that
> doesn't require the stream blocks to be held in memory before writing them
> all together).
> To that effect, it would be nice to abstract the data block/metadata
> structure creation from the physical file concerns.
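The abstraction asked for above could look roughly like this (hypothetical names, not the eventual Hive/ORC API): the logical writer hands completed stripes and metadata to a pluggable physical writer, and an in-memory implementation stands in for an HDFS stream or write-through cache:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Sketch of separating logical ORC structure-building from physical I/O.
// The logical writer only depends on this interface; where the bytes land
// (HDFS stream, cache, in-memory buffer) is an implementation detail.
interface PhysicalWriter {
    void writeStripe(ByteBuffer data) throws IOException;
    void writeMetadata(ByteBuffer footer) throws IOException;
    void close() throws IOException;
}

// A trivial implementation that captures bytes in memory, the kind of
// target the description says shouldn't require holding HDFS stream
// blocks before writing them all together.
class InMemoryPhysicalWriter implements PhysicalWriter {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    public void writeStripe(ByteBuffer data) { drain(data); }
    public void writeMetadata(ByteBuffer footer) { drain(footer); }
    public void close() {}

    private void drain(ByteBuffer b) {
        byte[] bytes = new byte[b.remaining()];
        b.get(bytes);
        out.write(bytes, 0, bytes.length);
    }

    long bytesWritten() { return out.size(); }
}

public class OrcWriterSketch {
    public static void main(String[] args) {
        InMemoryPhysicalWriter pw = new InMemoryPhysicalWriter();
        // A logical writer would call these as stripes/footers are finished.
        pw.writeStripe(ByteBuffer.wrap(new byte[]{1, 2, 3}));
        pw.writeMetadata(ByteBuffer.wrap(new byte[]{4, 5}));
        pw.close();
        System.out.println("bytes captured: " + pw.bytesWritten()); // 5
    }
}
```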
[jira] [Commented] (HIVE-12181) Change hive.stats.fetch.column.stats value to true for MiniTezCliDriver
[ https://issues.apache.org/jira/browse/HIVE-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410684#comment-15410684 ]

Hive QA commented on HIVE-12181:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822407/HIVE-12181.9.patch

{color:green}SUCCESS:{color} +1 due to 12 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 98 failed/errored test(s), 10440 tests executed

*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_mapjoin_mapjoin
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_unionDistinct_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_tez_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vectorized_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket_map_join_tez1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucket_map_join_tez2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucketpruning1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cte_mat_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cte_mat_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_filter_join_breaktask
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_having
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_hybridgrace_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_hybridgrace_hashjoin_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_insert_into2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mapjoin_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_merge1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_merge2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mergejoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_metadata_only_queries
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part_all_complex
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_nonvec_mapwork_part_all_primitive
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part_all_complex
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_orc_vec_mapwork_part_all_primitive
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part_all_complex
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vec_mapwork_part_all_complex
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_vecrow_mapwork_part_all_primitive
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_bmj_schema_evolution
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_dynpart_hashjoin_2
[jira] [Commented] (HIVE-14447) Set HIVE_TRANSACTIONAL_TABLE_SCAN to the correct job conf for FetchOperator
[ https://issues.apache.org/jira/browse/HIVE-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410646#comment-15410646 ]

Hive QA commented on HIVE-14447:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822396/HIVE-14447.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10441 tests executed

*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/798/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/798/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-798/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12822396 - PreCommit-HIVE-MASTER-Build

> Set HIVE_TRANSACTIONAL_TABLE_SCAN to the correct job conf for FetchOperator
> ----------------------------------------------------------------------------
>
>              Key: HIVE-14447
>              URL: https://issues.apache.org/jira/browse/HIVE-14447
>          Project: Hive
>       Issue Type: Bug
>       Components: Hive, Transactions
> Affects Versions: 1.3.0, 2.2.0, 2.1.1
>         Reporter: Wei Zheng
>         Assignee: Prasanth Jayachandran
>     Attachments: HIVE-14447.1.patch
>
[jira] [Commented] (HIVE-14435) Vectorization: missed vectorization for const varchar()
[ https://issues.apache.org/jira/browse/HIVE-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410618#comment-15410618 ] Hive QA commented on HIVE-14435: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12822385/HIVE-14435.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10439 tests executed *Failed tests:* {noformat} TestMsgBusConnection - did not produce a TEST-*.xml file TestQueryLifeTimeHook - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/797/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/797/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-797/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12822385 - PreCommit-HIVE-MASTER-Build > Vectorization: missed vectorization for const varchar() > --- > > Key: HIVE-14435 > URL: https://issues.apache.org/jira/browse/HIVE-14435 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 2.2.0 >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-14435.patch > > > {code} > 2016-08-05T09:45:16,488 INFO [main] physical.Vectorizer: Failed to vectorize > 2016-08-05T09:45:16,488 INFO [main] physical.Vectorizer: Cannot vectorize > select expression: Const varchar(1) f > {code} > The constant throws an illegal argument because the varchar precision is lost > in the pipeline. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
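The HIVE-14435 failure reported above ("Cannot vectorize select expression: Const varchar(1)") comes down to a parameterized type whose length parameter is lost somewhere in the pipeline, so a parser that requires the parameter can no longer reconstruct the type. A minimal Python sketch of that failure mode (hypothetical illustration only, not Hive's actual TypeInfo code):

```python
import re

def parse_varchar(type_string):
    """Parse a varchar type string. The length parameter is mandatory,
    mirroring parsers that raise when 'varchar' arrives without '(n)'."""
    m = re.fullmatch(r"varchar\((\d+)\)", type_string)
    if m is None:
        raise ValueError(f"illegal type: {type_string!r} (missing length)")
    return ("varchar", int(m.group(1)))

# A constant that keeps its full type string round-trips fine...
assert parse_varchar("varchar(1)") == ("varchar", 1)

# ...but once the precision is dropped upstream, the parse fails, and a
# vectorizer performing this parse has to bail out on the expression.
try:
    parse_varchar("varchar")
except ValueError as e:
    print(e)  # illegal type: 'varchar' (missing length)
```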
[jira] [Commented] (HIVE-14433) refactor LLAP plan cache avoidance and fix issue in merge processor
[ https://issues.apache.org/jira/browse/HIVE-14433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410594#comment-15410594 ] Hive QA commented on HIVE-14433: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12822362/HIVE-14433.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/796/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/796/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-796/ Messages: {noformat} This message was trimmed, see log for full details [INFO] [INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ spark-client --- [INFO] Building jar: /data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar [INFO] [INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ spark-client --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ spark-client --- [INFO] Installing /data/hive-ptest/working/apache-github-source-source/spark-client/target/spark-client-2.2.0-SNAPSHOT.jar to /data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.jar [INFO] Installing /data/hive-ptest/working/apache-github-source-source/spark-client/pom.xml to /data/hive-ptest/working/maven/org/apache/hive/spark-client/2.2.0-SNAPSHOT/spark-client-2.2.0-SNAPSHOT.pom [INFO] [INFO] [INFO] Building Hive Query Language 2.2.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-exec --- [INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql/target [INFO] Deleting /data/hive-ptest/working/apache-github-source-source/ql (includes = [datanucleus.log, derby.log], excludes = []) [INFO] [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ hive-exec 
--- [INFO] [INFO] --- maven-antrun-plugin:1.7:run (generate-sources) @ hive-exec --- [INFO] Executing tasks main: [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/gen [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-test-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen Generating vector expression code Generating vector expression test code [INFO] Executed tasks [INFO] [INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-exec --- [INFO] Source directory: /data/hive-ptest/working/apache-github-source-source/ql/src/gen/thrift/gen-javabean added. [INFO] Source directory: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java added. [INFO] [INFO] --- antlr3-maven-plugin:3.4:antlr (default) @ hive-exec --- [INFO] ANTLR: Processing source directory /data/hive-ptest/working/apache-github-source-source/ql/src/java ANTLR Parser Generator Version 3.4 org/apache/hadoop/hive/ql/parse/HiveLexer.g org/apache/hadoop/hive/ql/parse/HiveParser.g [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-exec --- [INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] Copying 4 resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-exec --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-exec --- [INFO] Compiling 2654 source files to /data/hive-ptest/working/apache-github-source-source/ql/target/classes [INFO] - [ERROR] COMPILATION ERROR : [INFO] - [ERROR] /data/hive-ptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java:[107,3] illegal start of expression [ERROR] /data/hive-ptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java:[107,11] illegal start of expression [ERROR] /data/hive-ptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java:[107,35] ';' expected [ERROR]
[jira] [Commented] (HIVE-14442) CBO: Calcite Operator To Hive Operator(Calcite Return Path): Wrong result/plan in group by with hive.map.aggr=false
[ https://issues.apache.org/jira/browse/HIVE-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410593#comment-15410593 ] Hive QA commented on HIVE-14442: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12822354/HIVE-14442.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10425 tests executed *Failed tests:* {noformat} TestMiniTezCliDriver-dynamic_partition_pruning.q-vector_char_mapjoin1.q-unionDistinct_2.q-and-12-more - did not produce a TEST-*.xml file TestMsgBusConnection - did not produce a TEST-*.xml file TestQueryLifeTimeHook - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_count org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_count org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_count org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/795/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/795/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-795/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 8 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12822354 - PreCommit-HIVE-MASTER-Build > CBO: Calcite Operator To Hive Operator(Calcite Return Path): Wrong > result/plan in group by with hive.map.aggr=false > --- > > Key: HIVE-14442 > URL: https://issues.apache.org/jira/browse/HIVE-14442 > Project: Hive > Issue Type: Sub-task > Components: CBO >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-14442.1.patch > > > Reproducer > {code} set hive.cbo.returnpath.hiveop=true > set hive.map.aggr=false > create table abcd (a int, b int, c int, d int); > LOAD DATA LOCAL INPATH '../../data/files/in4.txt' INTO TABLE abcd; > {code} > {code} explain select count(distinct a) from abcd group by b; {code} > {code} > STAGE PLANS: > Stage: Stage-1 > Map Reduce > Map Operator Tree: > TableScan > alias: abcd > Statistics: Num rows: 19 Data size: 78 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: a (type: int) > outputColumnNames: a > Statistics: Num rows: 19 Data size: 78 Basic stats: COMPLETE > Column stats: NONE > Reduce Output Operator > key expressions: a (type: int), a (type: int) > sort order: ++ > Map-reduce partition columns: a (type: int) > Statistics: Num rows: 19 Data size: 78 Basic stats: COMPLETE > Column stats: NONE > Reduce Operator Tree: > Group By Operator > aggregations: count(DISTINCT KEY._col1:0._col0) > keys: KEY._col0 (type: int) > mode: complete > outputColumnNames: b, $f1 > Statistics: Num rows: 9 Data size: 36 Basic stats: COMPLETE Column > stats: NONE > Select Operator > expressions: $f1 (type: bigint) > outputColumnNames: _o__c0 > Statistics: Num rows: 9 Data size: 36 Basic stats: COMPLETE > Column stats: NONE > File Output Operator > compressed: false > Statistics: Num rows: 9 Data size: 36 Basic stats: COMPLETE > Column stats: NONE > table: > input format: > org.apache.hadoop.mapred.SequenceFileInputFormat > output format: > org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat > serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe > 
{code} > {code} explain select count(distinct a) from abcd group by c; {code} > {code} > STAGE PLANS: > Stage: Stage-1 > Map Reduce > Map Operator Tree: > TableScan > alias: abcd > Statistics: Num rows: 19 Data size: 78 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: a (type: int) > outputColumnNames: a > Statistics: Num rows: 19 Data size: 78 Basic stats: COMPLETE > Column stats: NONE > Reduce Output Operator > key expressions:
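The bad plans quoted above shuffle on the aggregation column {{a}} rather than on the group-by key, which can split a single group across reducers. A small Python simulation of {{count(distinct a) ... group by b}} under the two partitioning choices (illustrative only; the column names follow the repro's {{abcd}} table, and the hash partitioning is a stand-in for the real shuffle):

```python
from collections import defaultdict

def shuffle_and_count(rows, partition_key, n_reducers=2):
    """Simulate count(DISTINCT a) ... GROUP BY b in a map-reduce job where
    the shuffle partitions rows by `partition_key` ('a' or 'b')."""
    reducers = defaultdict(list)
    for row in rows:
        reducers[hash(row[partition_key]) % n_reducers].append(row)
    # Each reducer independently groups its rows by b and counts distinct a.
    results = []
    for reducer_rows in reducers.values():
        groups = defaultdict(set)
        for row in reducer_rows:
            groups[row["b"]].add(row["a"])
        results += [(b, len(a_set)) for b, a_set in groups.items()]
    return sorted(results)

rows = [{"a": a, "b": b} for a, b in [(1, 10), (2, 10), (3, 10), (1, 20)]]

# Partitioning by the group-by key keeps each b-group on one reducer,
# so every group is counted exactly once:
print(shuffle_and_count(rows, "b"))

# Partitioning by a (as in the bad plan) can split the b=10 group across
# reducers, producing several partial rows for the same group:
print(shuffle_and_count(rows, "a"))
```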
[jira] [Commented] (HIVE-14422) LLAP IF: when using LLAP IF from multiple threads in secure cluster, tokens can get mixed up
[ https://issues.apache.org/jira/browse/HIVE-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410564#comment-15410564 ] Hive QA commented on HIVE-14422: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12822352/HIVE-14422.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10425 tests executed *Failed tests:* {noformat} TestMiniTezCliDriver-vector_data_types.q-schema_evol_text_vecrow_mapwork_part_all_primitive.q-bucket4.q-and-12-more - did not produce a TEST-*.xml file TestMsgBusConnection - did not produce a TEST-*.xml file TestQueryLifeTimeHook - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/794/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/794/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-794/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12822352 - PreCommit-HIVE-MASTER-Build > LLAP IF: when using LLAP IF from multiple threads in secure cluster, tokens > can get mixed up > - > > Key: HIVE-14422 > URL: https://issues.apache.org/jira/browse/HIVE-14422 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Sergey Shelukhin > Attachments: HIVE-14422.01.patch, HIVE-14422.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14436) Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and with MR engine
[ https://issues.apache.org/jira/browse/HIVE-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-14436: -- Target Version/s: 2.2.0
> Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and with MR engine
> -
>
> Key: HIVE-14436
> URL: https://issues.apache.org/jira/browse/HIVE-14436
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Affects Versions: 1.2.1
> Environment: HDP 2.4.2 / Hive 1.2.1
> Reporter: Ratish Maruthiyodan
> Assignee: Daniel Dai
> Labels: code
> Attachments: HIVE-14436.1.patch
>
> PROBLEM:
> The following query, run on the MapReduce engine with "hive.optimize.skewjoin = true", fails with:
> "FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'"
>
> SELECT a.col1 FROM db.tableA a INNER JOIN db.tableB b ON b.key=a.key
> limit 5;
>
> FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'
> 16/08/04 12:47:50 [main]: ERROR ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'
> java.lang.IllegalArgumentException: Error: , expected at the end of 'decimal(9'
> at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:336)
> at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseParams(TypeInfoUtils.java:378)
> at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parsePrimitiveParts(TypeInfoUtils.java:518)
> at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.parsePrimitiveParts(TypeInfoUtils.java:533)
> at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:136)
> at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:109)
> at org.apache.hadoop.hive.ql.optimizer.physical.GenMRSkewJoinProcessor.processSkewJoin(GenMRSkewJoinProcessor.java:214)
> at org.apache.hadoop.hive.ql.optimizer.physical.SkewJoinProcFactory$SkewJoinJoinProcessor.process(SkewJoinProcFactory.java:60)
> at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:133)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
> at org.apache.hadoop.hive.ql.optimizer.physical.SkewJoinResolver$SkewJoinTaskDispatcher.dispatch(SkewJoinResolver.java:100)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:133)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
> at org.apache.hadoop.hive.ql.optimizer.physical.SkewJoinResolver.resolve(SkewJoinResolver.java:55)
> at org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:107)
> at org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:270)
> at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:227)
> at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10219)
> at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:211)
> at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:316)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1189)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1237)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
> at
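The "expected at the end of 'decimal(9'" error above is characteristic of a comma-separated column-type list being split on every comma, so a parameterized type like decimal(9,2) is torn into "decimal(9" and "2)" before it reaches the type parser. A paren-aware split avoids this; the following Python sketch is a hypothetical illustration of the failure mode, not the actual GenMRSkewJoinProcessor code:

```python
def split_type_list(col_types):
    """Split a comma-separated type list, but only on commas outside
    parentheses, so parameterized types stay whole."""
    parts, depth, start = [], 0, 0
    for i, ch in enumerate(col_types):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:
            parts.append(col_types[start:i])
            start = i + 1
    parts.append(col_types[start:])
    return parts

col_types = "int,decimal(9,2),varchar(64)"

# A naive split(",") tears the parameterized types apart, producing the
# unparseable fragment 'decimal(9' seen in the stack trace:
print(col_types.split(","))        # ['int', 'decimal(9', '2)', 'varchar(64)']

# The depth-tracking split keeps each type string parseable:
print(split_type_list(col_types))  # ['int', 'decimal(9,2)', 'varchar(64)']
```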
[jira] [Updated] (HIVE-14436) Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and with MR engine
[ https://issues.apache.org/jira/browse/HIVE-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-14436: -- Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-14436) Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and with MR engine
[ https://issues.apache.org/jira/browse/HIVE-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-14436: -- Attachment: (was: HIVE-14436.1.patch)
[jira] [Updated] (HIVE-14436) Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and with MR engine
[ https://issues.apache.org/jira/browse/HIVE-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-14436: -- Attachment: HIVE-14436.1.patch
[jira] [Assigned] (HIVE-14436) Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and with MR engine
[ https://issues.apache.org/jira/browse/HIVE-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Dai reassigned HIVE-14436:
---------------------------------

    Assignee: Daniel Dai

> Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: ,
> expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and
> with MR engine
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-14436
>                 URL: https://issues.apache.org/jira/browse/HIVE-14436
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>    Affects Versions: 1.2.1
>         Environment: HDP 2.4.2 / Hive 1.2.1
>            Reporter: Ratish Maruthiyodan
>            Assignee: Daniel Dai
>              Labels: code
>         Attachments: HIVE-14436.1.patch, HIVE-14436.1.patch
>
>
> PROBLEM:
> The following query, run on the MapReduce engine with "hive.optimize.skewjoin = true", fails with:
> "FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'"
>
> SELECT a.col1 FROM db.tableA a INNER JOIN db.tableB b ON b.key=a.key limit 5;
>
> FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'
> 16/08/04 12:47:50 [main]: ERROR ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'
> java.lang.IllegalArgumentException: Error: , expected at the end of 'decimal(9'
>     at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.expect(TypeInfoUtils.java:336)
>     at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseParams(TypeInfoUtils.java:378)
>     at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parsePrimitiveParts(TypeInfoUtils.java:518)
>     at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.parsePrimitiveParts(TypeInfoUtils.java:533)
>     at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.createPrimitiveTypeInfo(TypeInfoFactory.java:136)
>     at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(TypeInfoFactory.java:109)
>     at org.apache.hadoop.hive.ql.optimizer.physical.GenMRSkewJoinProcessor.processSkewJoin(GenMRSkewJoinProcessor.java:214)
>     at org.apache.hadoop.hive.ql.optimizer.physical.SkewJoinProcFactory$SkewJoinJoinProcessor.process(SkewJoinProcFactory.java:60)
>     at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
>     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
>     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:133)
>     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
>     at org.apache.hadoop.hive.ql.optimizer.physical.SkewJoinResolver$SkewJoinTaskDispatcher.dispatch(SkewJoinResolver.java:100)
>     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
>     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
>     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:133)
>     at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
>     at org.apache.hadoop.hive.ql.optimizer.physical.SkewJoinResolver.resolve(SkewJoinResolver.java:55)
>     at org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:107)
>     at org.apache.hadoop.hive.ql.parse.MapReduceCompiler.optimizeTaskPlan(MapReduceCompiler.java:270)
>     at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:227)
>     at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10219)
>     at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:211)
>     at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:316)
>     at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1189)
>     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1237)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116)
>     at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:216)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:168)
>     at
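The stack trace shows GenMRSkewJoinProcessor handing TypeInfoParser the fragment 'decimal(9', which suggests a comma-separated list of column types is being split on every comma, truncating parameterized types such as decimal(9,2) at the comma inside the parentheses. A minimal sketch of a parenthesis-aware splitter that avoids this failure mode (TypeListSplit and splitTypes are hypothetical names for illustration, not Hive's actual code):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the failure mode: splitting "int,decimal(9,2)" on
// every comma yields the fragment "decimal(9", which TypeInfoParser rejects
// with "Error: , expected at the end of 'decimal(9'". A depth-aware split
// only breaks on commas that sit outside parentheses.
public class TypeListSplit {
    public static List<String> splitTypes(String typeList) {
        List<String> out = new ArrayList<>();
        int depth = 0;   // current parenthesis nesting depth
        int start = 0;   // start index of the current type name
        for (int i = 0; i < typeList.length(); i++) {
            char c = typeList.charAt(i);
            if (c == '(') {
                depth++;
            } else if (c == ')') {
                depth--;
            } else if (c == ',' && depth == 0) {
                // top-level comma: end of one type name
                out.add(typeList.substring(start, i));
                start = i + 1;
            }
        }
        out.add(typeList.substring(start));
        return out;
    }
}
```

With this split, "int,decimal(9,2),string" yields three complete type names, and "decimal(9,2)" survives intact instead of being cut to "decimal(9".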
[jira] [Updated] (HIVE-14436) Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error: , expected at the end of 'decimal(9'" after enabling hive.optimize.skewjoin and with MR engine
[ https://issues.apache.org/jira/browse/HIVE-14436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Dai updated HIVE-14436:
------------------------------

    Attachment: HIVE-14436.1.patch
[jira] [Commented] (HIVE-14342) Beeline output is garbled when executed from a remote shell
[ https://issues.apache.org/jira/browse/HIVE-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410524#comment-15410524 ]

Hive QA commented on HIVE-14342:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12822344/HIVE-14342.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10440 tests executed

*Failed tests:*
{noformat}
TestMsgBusConnection - did not produce a TEST-*.xml file
TestQueryLifeTimeHook - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_orc_llap_counters
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/793/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/793/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-793/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12822344 - PreCommit-HIVE-MASTER-Build

> Beeline output is garbled when executed from a remote shell
> -----------------------------------------------------------
>
>                 Key: HIVE-14342
>                 URL: https://issues.apache.org/jira/browse/HIVE-14342
>             Project: Hive
>          Issue Type: Bug
>          Components: Beeline
>    Affects Versions: 2.0.0
>            Reporter: Naveen Gangam
>            Assignee: Naveen Gangam
>         Attachments: HIVE-14342.2.patch, HIVE-14342.patch, HIVE-14342.patch
>
>
> {code}
> use default;
> create table clitest (key int, name String, value String);
> insert into table clitest values
> (1,"TRUE","1"),(2,"TRUE","1"),(3,"TRUE","1"),(4,"TRUE","1"),(5,"FALSE","0"),(6,"FALSE","0"),(7,"FALSE","0");
> {code}
> Then run a select query:
> {code}
> # cat /tmp/select.sql
> set hive.execution.engine=mr;
> select key,name,value
> from clitest
> where value="1" limit 1;
> {code}
> Then run beeline via a remote shell, for example:
> {code}
> $ ssh -l root "sudo -u hive beeline -u jdbc:hive2://localhost:1 -n hive -p hive --silent=true --outputformat=csv2 -f /tmp/select.sql"
> root@'s password:
> 16/07/12 14:59:22 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
> nullkey,name,value
> 1,TRUE,1
> null
> $
> {code}
> In older releases the output is as follows:
> {code}
> $ ssh -l root "sudo -u hive beeline -u jdbc:hive2://localhost:1 -n hive -p hive --silent=true --outputformat=csv2 -f /tmp/run.sql"
> Are you sure you want to continue connecting (yes/no)? yes
> root@'s password:
> 16/07/12 14:57:55 WARN mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
> key,name,value
> 1,TRUE,1
> $
> {code}
> The output contains nulls instead of blank lines. This is due to the use of -Djline.terminal=jline.UnsupportedTerminal, introduced in HIVE-6758 so that beeline can run as a background process; the garbled output is an unfortunate side effect of that fix.
> Running beeline in the background also produces garbled output:
> {code}
> # beeline -u "jdbc:hive2://localhost:1" -n hive -p hive --silent=true --outputformat=csv2 --showHeader=false -f /tmp/run.sql 2>&1 > /tmp/beeline.txt &
> # cat /tmp/beeline.txt
> null1,TRUE,1
> #
> {code}
> So I think the use of jline.UnsupportedTerminal should be documented, but not applied automatically by beeline under the covers.
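The literal "null" prefixes in the garbled output are consistent with the console reader returning null under jline.UnsupportedTerminal where an interactive terminal would return an empty line, and that null then being concatenated into the output without a check. An illustrative sketch of that mechanism (NullLineDemo and readAndEcho are hypothetical names, not Beeline's actual code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Illustrative sketch: if a reader yields null (as jline's unsupported
// terminal can where an interactive terminal would yield ""), and the
// caller builds output via string concatenation, Java turns the null
// reference into the literal string "null" - matching the "null" lines
// seen in the beeline output above.
public class NullLineDemo {
    public static String readAndEcho(BufferedReader in) {
        try {
            String line = in.readLine();  // null at end of stream
            return "" + line;             // "" + null -> "null"
        } catch (IOException e) {
            return "";
        }
    }
}
```

A null check before concatenation (emit "" instead of the null reference) would restore the blank lines the older releases printed.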