[jira] [Commented] (HIVE-17721) with Postgres rdbms for metastore and dbnotification enabled, hive DDL SQL query fails
[ https://issues.apache.org/jira/browse/HIVE-17721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194188#comment-16194188 ] Thejas M Nair commented on HIVE-17721: -- +1 > with Postgres rdbms for metastore and dbnotification enabled, hive DDL SQL query fails > --- > Key: HIVE-17721 > URL: https://issues.apache.org/jira/browse/HIVE-17721 > Project: Hive > Issue Type: Bug > Components: HiveServer2 > Affects Versions: 3.0.0 > Reporter: anishek > Assignee: anishek > Fix For: 3.0.0 > Attachments: HIVE-17721.0.patch > > With Postgres as the rdbms for the hive-metastore, any DDL fails when dbnotification is enabled. The reason is that a lock on the notification sequence is required, which for Postgres requires the column names and table names to be enclosed in " (double quotes), as we are using direct SQL rather than going through DataNucleus, and Postgres is case sensitive. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17721) with Postgres rdbms for metastore and dbnotification enabled, hive DDL SQL query fails
[ https://issues.apache.org/jira/browse/HIVE-17721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek updated HIVE-17721: --- Attachment: HIVE-17721.0.patch
[jira] [Assigned] (HIVE-17721) with Postgres rdbms for metastore and dbnotification enabled, hive DDL SQL query fails
[ https://issues.apache.org/jira/browse/HIVE-17721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek reassigned HIVE-17721: --
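The quoting behavior behind this issue can be sketched outside Hive: Postgres folds unquoted identifiers to lower case, so direct SQL against a metastore schema created with upper-case names must double-quote identifiers to preserve case. The table and column names below are assumptions for illustration, not taken from the patch.

```python
# Minimal sketch of identifier quoting for Postgres direct SQL.
# NOTIFICATION_SEQUENCE / NEXT_EVENT_ID are illustrative names only.

def quote_pg(identifier: str) -> str:
    """Double-quote an identifier for Postgres, doubling any embedded quotes."""
    return '"' + identifier.replace('"', '""') + '"'

def lock_sequence_sql(table: str, column: str, quote: bool) -> str:
    """Build the row-lock statement with or without identifier quoting."""
    q = quote_pg if quote else (lambda s: s)
    return "SELECT {} FROM {} FOR UPDATE".format(q(column), q(table))

# Unquoted: Postgres folds these to notification_sequence / next_event_id,
# which fails against a schema created with upper-case names.
print(lock_sequence_sql("NOTIFICATION_SEQUENCE", "NEXT_EVENT_ID", quote=False))
# Quoted: case is preserved, so the statement matches the schema.
print(lock_sequence_sql("NOTIFICATION_SEQUENCE", "NEXT_EVENT_ID", quote=True))
```

DataNucleus normally handles this quoting itself, which is why only the direct-SQL path trips over it.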
[jira] [Commented] (HIVE-17701) Added restriction to historic queries on web UI
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194174#comment-16194174 ] Hive QA commented on HIVE-17701: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890639/HIVE-17701.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 11200 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] (batchId=144) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7150/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7150/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7150/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12890639 - PreCommit-HIVE-Build > Added restriction to historic queries on web UI > --- > > Key: HIVE-17701 > URL: https://issues.apache.org/jira/browse/HIVE-17701 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Thejas M Nair >Assignee: Tao Li > Attachments: HIVE-17701.1.patch, HIVE-17701.2.patch > > > The HiveServer2 Web UI (HIVE-12550) shows recently completed queries. > However, a user can see the queries run by other users as well, and that is a > security/privacy concern. > Only admin users should be allowed to see queries from other users (similar > to behavior of display for configs, stack trace etc). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17371) Move tokenstores to metastore module
[ https://issues.apache.org/jira/browse/HIVE-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194175#comment-16194175 ] Hive QA commented on HIVE-17371: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890640/HIVE-17371.02.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7151/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7151/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7151/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-10-06 05:39:57.988 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-7151/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-10-06 05:39:57.990 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 2902c7c HIVE-17679: http-generic-click-jacking for WebHcat server (Aihua Xu reviewed by Yongzhi Chen) + git clean -f -d Removing standalone-metastore/src/gen/org/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 2902c7c HIVE-17679: http-generic-click-jacking for WebHcat server (Aihua Xu reviewed by Yongzhi Chen) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-10-06 05:39:58.506 + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/beeline/src/test/org/apache/hive/beeline/ProxyAuthTest.java: No such file or directory error: a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java: No such file or directory error: a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/Security.java: No such file or directory error: a/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java: No such file or directory error: a/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestHiveAuthFactory.java: No such file or directory error: a/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithDBTokenStore.java: No such file or directory error: a/itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestJdbcWithMiniKdc.java: No such file or directory error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/thrift/TestDBTokenStore.java: No such file or directory error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/thrift/TestZooKeeperTokenStore.java: No such file or directory error: a/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java: No such file or directory error: a/jdbc/src/java/org/apache/hive/jdbc/Utils.java: No such file or directory error: a/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: No such file or directory error: a/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java: No such file or directory error: a/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java: No such file or directory error: a/service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java: No such file or directory error: a/service/src/java/org/apache/hive/service/auth/HttpAuthUtils.java: No such file or directory error: a/service/src/java/org/apache/hive/service/auth/KerberosSaslHelper.java: No such file or directory error: a/service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java: No such file or directory error: a/shims/0.23/src/main/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge23.java: No such file or directory error:
[jira] [Commented] (HIVE-17720) Bitvectors are not shown in describe statement on beeline
[ https://issues.apache.org/jira/browse/HIVE-17720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194171#comment-16194171 ] Ashutosh Chauhan commented on HIVE-17720: - [~kgyrtkirk] Can you please review? > Bitvectors are not shown in describe statement on beeline > - > Key: HIVE-17720 > URL: https://issues.apache.org/jira/browse/HIVE-17720 > Project: Hive > Issue Type: Bug > Components: Beeline, Diagnosability > Affects Versions: 3.0.0 > Reporter: Aswathy Chellammal Sreekumar > Assignee: Ashutosh Chauhan > Attachments: HIVE-17720.patch > > The describe statement takes a different code path for HS2, where bit vectors weren't displayed.
[jira] [Updated] (HIVE-17720) Bitvectors are not shown in describe statement on beeline
[ https://issues.apache.org/jira/browse/HIVE-17720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-17720: Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-17720) Bitvectors are not shown in describe statement on beeline
[ https://issues.apache.org/jira/browse/HIVE-17720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-17720: Attachment: HIVE-17720.patch This patch additionally fixes the headers for the desc statement, where the comment and bit-vector columns were flipped in the header.
[jira] [Commented] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194166#comment-16194166 ] Prasanth Jayachandran commented on HIVE-17566: -- I think we can stick to TRIGGER for now. Looking at other DBs, they support other kinds of triggers as well: DML triggers and DDL triggers. Maybe we can call them "execution triggers", since we trigger an action based on some execution events (counters, mostly). In the future this can be extended to DML and DDL triggers as well. So yeah, WM_TRIGGER looks good to me. > Create schema required for workload management. > --- > Key: HIVE-17566 > URL: https://issues.apache.org/jira/browse/HIVE-17566 > Project: Hive > Issue Type: Sub-task > Reporter: Harish Jaiprakash > Assignee: Harish Jaiprakash > Attachments: HIVE-17566.01.patch, HIVE-17566.02.patch, HIVE-17566.03.patch, HIVE-17566.04.patch, HIVE-17566.05.patch, HIVE-17566.06.patch > > Schema + model changes required for workload management.
[jira] [Assigned] (HIVE-17720) Bitvectors are not shown in describe statement on beeline
[ https://issues.apache.org/jira/browse/HIVE-17720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan reassigned HIVE-17720: ---
[jira] [Commented] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194157#comment-16194157 ] Harish Jaiprakash commented on HIVE-17566: -- Thanks [~sershe]. I'll make parent_pool_id nullable; it's nullable in the schema, and I made a mistake in package.jdo. Thanks [~prasanth_j]. * Yes, PATH stores hierarchical paths, because it's easier to query that way. I can remove parent_pool_id, since we can query for all children using a prefix match. * MAPPINGS are rules for a resource_plan too; I thought the name TRIGGER was more appropriate to describe what this table holds. I can call it trigger_rule if that is alright. * All objects are under a resource plan, hence the rp_id in every table. The relationship between rules and pools is many-to-many; there is a separate table, WM_POOL_TO_TRIGGER, to manage that.
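The two relationships discussed above can be sketched with an in-memory database. The column names here are assumptions for illustration, not the patch's actual schema; the point is that WM_POOL_TO_TRIGGER carries the many-to-many association between pools and triggers, and a hierarchical PATH column supports child lookup by prefix match without needing parent_pool_id.

```python
import sqlite3

# Hypothetical sketch of the workload-management tables discussed above.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.executescript("""
CREATE TABLE WM_POOL (POOL_ID INTEGER PRIMARY KEY, RP_ID INTEGER, PATH TEXT);
CREATE TABLE WM_TRIGGER (TRIGGER_ID INTEGER PRIMARY KEY, RP_ID INTEGER, NAME TEXT);
CREATE TABLE WM_POOL_TO_TRIGGER (POOL_ID INTEGER, TRIGGER_ID INTEGER);
INSERT INTO WM_POOL VALUES (1, 1, 'root'), (2, 1, 'root.etl'), (3, 1, 'root.etl.hourly');
INSERT INTO WM_TRIGGER VALUES (10, 1, 'slow_query'), (11, 1, 'high_shuffle');
INSERT INTO WM_POOL_TO_TRIGGER VALUES (2, 10), (2, 11), (3, 10);
""")

# All children of root.etl via a prefix match on PATH (no parent id needed):
children = [r[0] for r in c.execute(
    "SELECT PATH FROM WM_POOL WHERE PATH LIKE 'root.etl.%'")]

# Triggers attached to pool root.etl through the join table:
triggers = [r[0] for r in c.execute(
    "SELECT T.NAME FROM WM_TRIGGER T "
    "JOIN WM_POOL_TO_TRIGGER PT ON T.TRIGGER_ID = PT.TRIGGER_ID "
    "WHERE PT.POOL_ID = 2 ORDER BY T.NAME")]

print(children)   # ['root.etl.hourly']
print(triggers)   # ['high_shuffle', 'slow_query']
```

A dotted-path prefix match does make the parent link redundant for tree walks, at the cost of renaming a pool requiring an update to every descendant's PATH.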
[jira] [Commented] (HIVE-17609) Tool to manipulate delegation tokens
[ https://issues.apache.org/jira/browse/HIVE-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194139#comment-16194139 ] Mithun Radhakrishnan commented on HIVE-17609: - The failures seem to be the usual suspects. [~owen.omalley], if the addition of the {{serverMode}} argument looks alright to you, and there are no other objections, I'll check this in. > Tool to manipulate delegation tokens > > > Key: HIVE-17609 > URL: https://issues.apache.org/jira/browse/HIVE-17609 > Project: Hive > Issue Type: Improvement > Components: Metastore, Security >Affects Versions: 2.2.0, 3.0.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-17609.1-branch-2.2.patch, > HIVE-17609.1-branch-2.patch, HIVE-17609.1.patch, HIVE-17609.2.patch > > > This was precipitated by OOZIE-2797. We had a case in production where the > number of active metastore delegation tokens outstripped the ZooKeeper > {{jute.maxBuffer}} size. Delegation tokens could neither be fetched, nor be > cancelled. > The root-cause turned out to be a miscommunication, causing delegation tokens > fetched by Oozie *not* to be cancelled automatically from HCat. This was > sorted out as part of OOZIE-2797. > The issue exposed how poor the log-messages were, in the code pertaining to > token fetch/cancellation. We also found need for a tool to query/list/purge > delegation tokens that might have expired already. This patch introduces such > a tool, and improves the log-messages. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194131#comment-16194131 ] Sergey Shelukhin commented on HIVE-17566: - Actually, never mind; there is a many-to-many between rules and pools.
[jira] [Commented] (HIVE-17701) Added restriction to historic queries on web UI
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194118#comment-16194118 ] Hive QA commented on HIVE-17701: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890639/HIVE-17701.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 11200 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7149/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7149/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7149/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12890639 - PreCommit-HIVE-Build
[jira] [Assigned] (HIVE-17692) Block CONCATENATE on Acid tables
[ https://issues.apache.org/jira/browse/HIVE-17692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom reassigned HIVE-17692: - Assignee: Seong Yeom (was: Steve Yeom) > Block CONCATENATE on Acid tables > > Key: HIVE-17692 > URL: https://issues.apache.org/jira/browse/HIVE-17692 > Project: Hive > Issue Type: Sub-task > Components: Transactions > Reporter: Eugene Koifman > Assignee: Seong Yeom > Priority: Critical > > See _DDLSemanticAnalyzer.analyzeAlterTablePartMergeFiles(ASTNode ast, String tableName, HashMap<String, String> partSpec)_ > This was fine before due to > {noformat} > // throw a HiveException if the table/partition is bucketized > if (bucketCols != null && bucketCols.size() > 0) { > throw new SemanticException(ErrorMsg.CONCATENATE_UNSUPPORTED_TABLE_BUCKETED.getMsg()); > } > {noformat} > but now that we support unbucketed acid tables, this check no longer blocks them.
[jira] [Assigned] (HIVE-17719) Add mapreduce.job.hdfs-servers, mapreduce.job.send-token-conf to sql std auth whitelist
[ https://issues.apache.org/jira/browse/HIVE-17719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair reassigned HIVE-17719: > Add mapreduce.job.hdfs-servers, mapreduce.job.send-token-conf to sql std auth > whitelist > --- > > Key: HIVE-17719 > URL: https://issues.apache.org/jira/browse/HIVE-17719 > Project: Hive > Issue Type: Bug >Reporter: Thejas M Nair >Assignee: Thejas M Nair > > mapreduce.job.hdfs-servers, mapreduce.job.send-token-conf can be needed to > access a remote cluster in HA config for hive replication v2. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1
[ https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194071#comment-16194071 ] Hive QA commented on HIVE-15016: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890614/HIVE-15016.3.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 220 failed/errored test(s), 10495 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=47) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic1] (batchId=8) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_basic2] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_intervals] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_timeseries] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_topn] (batchId=3) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_ctas] (batchId=166) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_dynamic] (batchId=168) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_static] (batchId=165) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_join_with_different_encryption_keys] (batchId=169) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=166) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_select_read_only_encrypted_tbl] (batchId=167) 
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_select_read_only_unencrypted_tbl] (batchId=167) org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver (batchId=94) org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver (batchId=95) org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver (batchId=96) org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver (batchId=97) org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver (batchId=98) org.apache.hadoop.hive.cli.TestHBaseCliDriver.org.apache.hadoop.hive.cli.TestHBaseCliDriver (batchId=99) org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[cascade_dbdrop] (batchId=238) org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[cascade_dbdrop_hadoop20] (batchId=238) org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[generatehfiles_require_family_path] (batchId=238) org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[hbase_ddl] (batchId=238) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver (batchId=142) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver (batchId=143) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver (batchId=144) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver (batchId=145) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver (batchId=146) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=147) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=148) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=150) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver (batchId=156)
[jira] [Commented] (HIVE-17701) Added restriction to historic queries on web UI
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16194053#comment-16194053 ] Tao Li commented on HIVE-17701: --- Potentially we could move the hasAdministratorAccess call to the beginning of the "Active session" section and remove the calls from the following sections; the page would then fail early in the session section and return the error page at that point. However, the advantage of the uploaded patch is that if we later change the logic of hasAdministratorAccess to simply return false when the permission check fails, without returning the error page, we can still skip rendering the 3 sections. That logic is clearer.
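The fail-early alternative described in this comment can be sketched as follows. All names here are hypothetical stand-ins, not the patch's code: one up-front admin check guards every restricted section, so a non-admin gets the error page before any section is rendered.

```python
# Hypothetical sketch of the fail-early variant: a single admin check
# gates all restricted history sections of the web UI.

def has_administrator_access(user, admins):
    # Stand-in for HiveServer2's hasAdministratorAccess; the real method
    # also renders the error page itself when the check fails.
    return user in admins

def render_history(user, admins, sections):
    if not has_administrator_access(user, admins):
        # Fail early: none of the restricted sections are rendered.
        return ["error: admin access required"]
    return ["rendered: " + s for s in sections]

sections = ["Active sessions", "Open queries", "Last closed queries"]
print(render_history("alice", {"alice"}, sections))  # admin sees all sections
print(render_history("bob", {"alice"}, sections))    # non-admin fails early
```

The patch's per-section variant instead calls the check once per section, which stays correct even if the check stops rendering the error page itself.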
[jira] [Commented] (HIVE-17553) CBO wrongly type cast decimal literal to int
[ https://issues.apache.org/jira/browse/HIVE-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194051#comment-16194051 ] Hive QA commented on HIVE-17553: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890612/HIVE-17553.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 11200 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_select] (batchId=60) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_precision2] (batchId=50) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fp_literal_arithmetic] (batchId=67) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[literal_decimal] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_math_funcs] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_math_funcs] (batchId=19) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_math_funcs] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_math_funcs] (batchId=150) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[wrong_column_type] (batchId=91) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_math_funcs] (batchId=111) 
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query21] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query75] (batchId=242) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query21] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query23] (batchId=240) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testTableOps (batchId=201) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7147/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7147/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7147/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 21 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12890612 - PreCommit-HIVE-Build > CBO wrongly type cast decimal literal to int > > > Key: HIVE-17553 > URL: https://issues.apache.org/jira/browse/HIVE-17553 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17553.1.patch > > > {code:sql}explain select 100.000BD from f{code} > {noformat} > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: f > Select Operator > expressions: 100 (type: int) > outputColumnNames: _col0 > ListSink > {noformat} > Notice that the expression 100.000BD is of type int instead of decimal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
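The type regression in HIVE-17553 can be illustrated with plain decimal arithmetic (a sketch, not Hive code): the BD suffix marks a HiveQL decimal literal, and planning it as int discards the scale the literal is supposed to carry.

```python
from decimal import Decimal

# 100.000BD in HiveQL denotes a decimal literal; narrowing it to int,
# as the buggy CBO plan does, loses its scale of 3.
lit = Decimal("100.000")

as_int = int(lit)                # what the plan's "100 (type: int)" produces
print(as_int)                    # 100, scale discarded
print(lit)                       # 100.000, scale preserved by the decimal type
print(-lit.as_tuple().exponent)  # 3, the scale carried by the literal
```

The values compare equal, but downstream arithmetic and result formatting differ once the scale is gone, which is why the plan's `100 (type: int)` is wrong.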
[jira] [Comment Edited] (HIVE-17371) Move tokenstores to metastore module
[ https://issues.apache.org/jira/browse/HIVE-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194048#comment-16194048 ] Vihang Karajgaonkar edited comment on HIVE-17371 at 10/6/17 2:15 AM: - Attaching the second version of the patch, which moves {{DelegationTokenManager}} and associated classes to the standalone metastore module as per [~thejas]'s suggestion. I had to copy a few util methods that were used in HS2, HiveConnection, and the Metastore into separate Util classes for HS2 and HMS. I also abandoned the idea of making HMS work with {{AbstractDelegationTokenIdentifier}} since the code change was getting bigger and more complex. Since as of now I don't see any application other than Hive itself making use of DelegationTokenStores, I think this is fine. With this patch all the DelegationToken-related logic moves to the standalone metastore instead of keeping a duplicate copy of each class in both HS2 and HMS. Also removed the HadoopAuthThrift class from shims since it was no longer needed; HS2 can reuse the one already present in the metastore module. This is needed because otherwise the shims module would depend on the metastore module. Updated the review board as well. was (Author: vihangk1): Attaching the second version of the patch, which moves {{DelegationTokenManager}} and associated classes to the standalone metastore module as per [~thejas]'s suggestion. I had to copy a few util methods that were used in HS2, HiveConnection, and the Metastore into separate Util classes for HS2 and HMS. I also abandoned the idea of making HMS work with {{AbstractDelegationTokenIdentifier}} since the code change was getting bigger and more complex. Since as of now I don't see any application other than Hive itself making use of DelegationTokenStores, I think this is fine. 
With this patch all the DelegationToken related logic will be moved to standalone metastore instead of keeping a duplicate copy of each class in both HS2 and HMS. > Move tokenstores to metastore module > > > Key: HIVE-17371 > URL: https://issues.apache.org/jira/browse/HIVE-17371 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar > Attachments: HIVE-17371.01.patch, HIVE-17371.02.patch > > > The {{getTokenStore}} method will not work for the {{DBTokenStore}} and > {{ZKTokenStore}} since they implement > {{org.apache.hadoop.hive.thrift.DelegationTokenStore}} instead of > {{org.apache.hadoop.hive.metastore.security.DelegationTokenStore}} > {code} > private DelegationTokenStore getTokenStore(Configuration conf) throws > IOException { > String tokenStoreClassName = > MetastoreConf.getVar(conf, > MetastoreConf.ConfVars.DELEGATION_TOKEN_STORE_CLS, ""); > // The second half of this if is to catch cases where users are passing > in a HiveConf for > // configuration. It will have set the default value of > // "hive.cluster.delegation.token.store .class" to > // "org.apache.hadoop.hive.thrift.MemoryTokenStore" as part of its > construction. But this is > // the hive-shims version of the memory store. We want to convert this > to our default value. > if (StringUtils.isBlank(tokenStoreClassName) || > > "org.apache.hadoop.hive.thrift.MemoryTokenStore".equals(tokenStoreClassName)) > { > return new MemoryTokenStore(); > } > try { > Class storeClass = > > Class.forName(tokenStoreClassName).asSubclass(DelegationTokenStore.class); > return ReflectionUtils.newInstance(storeClass, conf); > } catch (ClassNotFoundException e) { > throw new IOException("Error initializing delegation token store: " + > tokenStoreClassName, e); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17371) Move tokenstores to metastore module
[ https://issues.apache.org/jira/browse/HIVE-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-17371: --- Attachment: HIVE-17371.02.patch Attaching second version of the patch which moves {{DelegationTokenManager}} and associated classes to standalone metastore module as per [~thejas]'s suggestion. I had to copy a few utils methods which were being used in both HS2, HiveConnection and Metastore into separate Util classes for HS2 and HMS. I also abandoned the idea of making HMS work with {{AbstractDelegationTokenIdentifier}} since the code change was getting bigger and more complex. Since as of now I don't see any other application making using of DelegationTokenStores other than Hive itself, I think this is fine. With this patch all the DelegationToken related logic will be moved to standalone metastore instead of keeping a duplicate copy of each class in both HS2 and HMS. > Move tokenstores to metastore module > > > Key: HIVE-17371 > URL: https://issues.apache.org/jira/browse/HIVE-17371 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar > Attachments: HIVE-17371.01.patch, HIVE-17371.02.patch > > > The {{getTokenStore}} method will not work for the {{DBTokenStore}} and > {{ZKTokenStore}} since they implement > {{org.apache.hadoop.hive.thrift.DelegationTokenStore}} instead of > {{org.apache.hadoop.hive.metastore.security.DelegationTokenStore}} > {code} > private DelegationTokenStore getTokenStore(Configuration conf) throws > IOException { > String tokenStoreClassName = > MetastoreConf.getVar(conf, > MetastoreConf.ConfVars.DELEGATION_TOKEN_STORE_CLS, ""); > // The second half of this if is to catch cases where users are passing > in a HiveConf for > // configuration. 
It will have set the default value of > // "hive.cluster.delegation.token.store .class" to > // "org.apache.hadoop.hive.thrift.MemoryTokenStore" as part of its > construction. But this is > // the hive-shims version of the memory store. We want to convert this > to our default value. > if (StringUtils.isBlank(tokenStoreClassName) || > > "org.apache.hadoop.hive.thrift.MemoryTokenStore".equals(tokenStoreClassName)) > { > return new MemoryTokenStore(); > } > try { > Class storeClass = > > Class.forName(tokenStoreClassName).asSubclass(DelegationTokenStore.class); > return ReflectionUtils.newInstance(storeClass, conf); > } catch (ClassNotFoundException e) { > throw new IOException("Error initializing delegation token store: " + > tokenStoreClassName, e); > } > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17701) Added restriction to historic queries on web UI
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194047#comment-16194047 ] Tao Li commented on HIVE-17701: --- So with the patch, if auth is enabled and the user is not an admin, we return the error page. If auth is disabled, any user can view all the queries. > Added restriction to historic queries on web UI > --- > > Key: HIVE-17701 > URL: https://issues.apache.org/jira/browse/HIVE-17701 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Thejas M Nair >Assignee: Tao Li > Attachments: HIVE-17701.1.patch, HIVE-17701.2.patch > > > The HiveServer2 Web UI (HIVE-12550) shows recently completed queries. > However, a user can see the queries run by other users as well, and that is a > security/privacy concern. > Only admin users should be allowed to see queries from other users (similar > to behavior of display for configs, stack trace etc). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
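The access rule described in that comment is small enough to sketch. The class and method names below are hypothetical, not the ones in the attached patch, which wires the check into the HiveServer2 web UI itself:

```java
// Hypothetical sketch of the HIVE-17701 access rule: with auth disabled the
// UI applies no restriction; with auth enabled, only admins may see other
// users' historic queries.
public class QueryHistoryAccess {

    /** True when the caller may see other users' historic queries. */
    public static boolean canViewAllQueries(boolean authEnabled, boolean isAdmin) {
        return !authEnabled || isAdmin;
    }
}
```

A non-admin request with auth enabled is the only case that gets the error page; everything else sees the full query history.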
[jira] [Updated] (HIVE-17701) Added restriction to historic queries on web UI
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17701: -- Attachment: HIVE-17701.2.patch > Added restriction to historic queries on web UI > --- > > Key: HIVE-17701 > URL: https://issues.apache.org/jira/browse/HIVE-17701 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Thejas M Nair >Assignee: Tao Li > Attachments: HIVE-17701.1.patch, HIVE-17701.2.patch > > > The HiveServer2 Web UI (HIVE-12550) shows recently completed queries. > However, a user can see the queries run by other users as well, and that is a > security/privacy concern. > Only admin users should be allowed to see queries from other users (similar > to behavior of display for configs, stack trace etc). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17718) HS2 Logs print unnecessary stack trace when HoS query is cancelled
[ https://issues.apache.org/jira/browse/HIVE-17718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-17718: Attachment: HIVE-17718.1.patch > HS2 Logs print unnecessary stack trace when HoS query is cancelled > -- > > Key: HIVE-17718 > URL: https://issues.apache.org/jira/browse/HIVE-17718 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-17718.1.patch > > > Example: > {code} > 2017-10-05 17:47:11,881 ERROR > org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: > [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with > exception 'java.lang.InterruptedException(sleep interrupted)' > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124) > at > org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60) > at > org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236) > at > org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89) > at > org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at > org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2017-10-05 17:47:11,881 WARN org.apache.hadoop.hive.ql.Driver: > [HiveServer2-Handler-Pool: Thread-105]: Shutting down task : Stage-2:MAPRED > 2017-10-05 17:47:11,882 ERROR > org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: > [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with > exception 'java.lang.InterruptedException(sleep interrupted)' > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124) > at > org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60) > at > org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236) > at > org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89) > at > 
org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at > org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) >
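The {{InterruptedException}} in the log above is an expected signal that the query was cancelled, so the monitor does not need to log it at ERROR level with a full stack trace. One way to handle it — a sketch under assumed names, not the actual HIVE-17718 patch — is to catch the interrupt in the polling loop, restore the thread's interrupt flag, and log a single informational line:

```java
import java.util.logging.Logger;

// Sketch: treat an interrupt during job polling as an expected cancellation
// and log one line instead of an ERROR-level stack trace. Class and method
// names are illustrative, not from RemoteSparkJobMonitor.
public class InterruptAwareMonitor {
    private static final Logger LOG = Logger.getLogger("SparkJobMonitor");

    /** Returns true to keep polling, false once the monitor is interrupted. */
    public static boolean pollOnce() {
        try {
            Thread.sleep(1000);          // stand-in for the monitor's poll interval
            return true;                 // keep monitoring
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag for callers
            LOG.info("Job monitor interrupted; query was likely cancelled");
            return false;                // stop monitoring, no stack trace logged
        }
    }
}
```

Restoring the interrupt flag matters: the surrounding {{SparkTask}} machinery can still observe that the thread was interrupted even though the exception itself was swallowed.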
[jira] [Assigned] (HIVE-17718) HS2 Logs print unnecessary stack trace when HoS query is cancelled
[ https://issues.apache.org/jira/browse/HIVE-17718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar reassigned HIVE-17718: --- > HS2 Logs print unnecessary stack trace when HoS query is cancelled > -- > > Key: HIVE-17718 > URL: https://issues.apache.org/jira/browse/HIVE-17718 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > > Example: > {code} > 2017-10-05 17:47:11,881 ERROR > org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: > [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with > exception 'java.lang.InterruptedException(sleep interrupted)' > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124) > at > org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60) > at > org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236) > at > org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89) > at > org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at > org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2017-10-05 17:47:11,881 WARN org.apache.hadoop.hive.ql.Driver: > [HiveServer2-Handler-Pool: Thread-105]: Shutting down task : Stage-2:MAPRED > 2017-10-05 17:47:11,882 ERROR > org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: > [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with > exception 'java.lang.InterruptedException(sleep interrupted)' > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124) > at > org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60) > at > org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236) > at > org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89) > at > 
org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at > org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at >
[jira] [Updated] (HIVE-17701) Added restriction to historic queries on web UI
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17701: -- Status: Patch Available (was: Open) > Added restriction to historic queries on web UI > --- > > Key: HIVE-17701 > URL: https://issues.apache.org/jira/browse/HIVE-17701 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Thejas M Nair >Assignee: Tao Li > Attachments: HIVE-17701.1.patch > > > The HiveServer2 Web UI (HIVE-12550) shows recently completed queries. > However, a user can see the queries run by other users as well, and that is a > security/privacy concern. > Only admin users should be allowed to see queries from other users (similar > to behavior of display for configs, stack trace etc). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17701) Added restriction to historic queries on web UI
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17701: -- Attachment: HIVE-17701.1.patch > Added restriction to historic queries on web UI > --- > > Key: HIVE-17701 > URL: https://issues.apache.org/jira/browse/HIVE-17701 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Thejas M Nair >Assignee: Tao Li > Attachments: HIVE-17701.1.patch > > > The HiveServer2 Web UI (HIVE-12550) shows recently completed queries. > However, a user can see the queries run by other users as well, and that is a > security/privacy concern. > Only admin users should be allowed to see queries from other users (similar > to behavior of display for configs, stack trace etc). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17701) Added restriction to historic queries on web UI
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Li updated HIVE-17701: -- Summary: Added restriction to historic queries on web UI (was: Show historic queries only for admin users) > Added restriction to historic queries on web UI > --- > > Key: HIVE-17701 > URL: https://issues.apache.org/jira/browse/HIVE-17701 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Thejas M Nair >Assignee: Tao Li > > The HiveServer2 Web UI (HIVE-12550) shows recently completed queries. > However, a user can see the queries run by other users as well, and that is a > security/privacy concern. > Only admin users should be allowed to see queries from other users (similar > to behavior of display for configs, stack trace etc). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17672) Upgrade Calcite version to 1.14
[ https://issues.apache.org/jira/browse/HIVE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16194009#comment-16194009 ] Hive QA commented on HIVE-17672: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890606/HIVE-17672.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 30 failed/errored test(s), 11200 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_groupby] (batchId=47) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_annotate_stats_groupby] (batchId=82) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_auto_join1] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[char_udf1] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[limit_pushdown2] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[view_cbo] (batchId=66) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_id2] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets1] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets_limit] (batchId=154) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=101) 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[limit_pushdown2] (batchId=109) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query18] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query22] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query36] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query67] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query70] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query86] (batchId=242) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query18] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query22] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query36] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query67] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query70] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query86] (batchId=240) org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener (batchId=222) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7146/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7146/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7146/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 30 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12890606 - PreCommit-HIVE-Build > Upgrade Calcite version to 1.14 > --- > > Key: HIVE-17672 > URL: https://issues.apache.org/jira/browse/HIVE-17672 > Project: Hive > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-17672.01.patch, HIVE-17672.02.patch > > > Calcite 1.14.0 has been recently released. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17672) Upgrade Calcite version to 1.14
[ https://issues.apache.org/jira/browse/HIVE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193955#comment-16193955 ] Ashutosh Chauhan commented on HIVE-17672: - +1 (seems like a few more golden files need updating) > Upgrade Calcite version to 1.14 > --- > > Key: HIVE-17672 > URL: https://issues.apache.org/jira/browse/HIVE-17672 > Project: Hive > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-17672.01.patch, HIVE-17672.02.patch > > > Calcite 1.14.0 has been recently released. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17672) Upgrade Calcite version to 1.14
[ https://issues.apache.org/jira/browse/HIVE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193938#comment-16193938 ] Hive QA commented on HIVE-17672: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890606/HIVE-17672.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 11200 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_groupby] (batchId=47) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_annotate_stats_groupby] (batchId=82) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_auto_join1] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[limit_pushdown2] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[view_cbo] (batchId=66) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_id2] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets1] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets_limit] (batchId=154) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[limit_pushdown2] (batchId=109) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query18] (batchId=242) 
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query22] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query36] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query67] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query70] (batchId=242) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query86] (batchId=242) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query18] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query22] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query36] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query67] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query70] (batchId=240) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query86] (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7145/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7145/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7145/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 26 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12890606 - PreCommit-HIVE-Build > Upgrade Calcite version to 1.14 > --- > > Key: HIVE-17672 > URL: https://issues.apache.org/jira/browse/HIVE-17672 > Project: Hive > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-17672.01.patch, HIVE-17672.02.patch > > > Calcite 1.14.0 has been recently released. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1
[ https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193919#comment-16193919 ] Aihua Xu commented on HIVE-15016: - patch-3: an older commons-beanutils.jar is pulled in as a transitive dependency, which causes HDFS to use the older API and fail. This patch excludes it. > Run tests with Hadoop 3.0.0-beta1 > - > > Key: HIVE-15016 > URL: https://issues.apache.org/jira/browse/HIVE-15016 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Sergio Peña >Assignee: Aihua Xu > Attachments: Hadoop3Upstream.patch, HIVE-15016.2.patch, > HIVE-15016.3.patch, HIVE-15016.patch > > > Hadoop 3.0.0-alpha1 was released back on Sep/16 to allow other components run > tests against this new version before GA. > We should start running tests with Hive to validate compatibility against > Hadoop 3.0. > NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 > GA is released. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
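A Maven exclusion of that sort typically looks like the following pom fragment. The hadoop-hdfs coordinates here are illustrative — the patch itself identifies which modules actually drag in the old jar:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <!-- keep the older transitive commons-beanutils off the test classpath -->
    <exclusion>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

With the exclusion in place, the version of commons-beanutils on the classpath is whichever one the project declares directly, rather than whatever the transitive graph happens to resolve first.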
[jira] [Updated] (HIVE-15016) Run tests with Hadoop 3.0.0-beta1
[ https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-15016: Attachment: HIVE-15016.3.patch > Run tests with Hadoop 3.0.0-beta1 > - > > Key: HIVE-15016 > URL: https://issues.apache.org/jira/browse/HIVE-15016 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Sergio Peña >Assignee: Aihua Xu > Attachments: Hadoop3Upstream.patch, HIVE-15016.2.patch, > HIVE-15016.3.patch, HIVE-15016.patch > > > Hadoop 3.0.0-alpha1 was released back on Sep/16 to allow other components run > tests against this new version before GA. > We should start running tests with Hive to validate compatibility against > Hadoop 3.0. > NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 > GA is released. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17553) CBO wrongly type cast decimal literal to int
[ https://issues.apache.org/jira/browse/HIVE-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17553: --- Status: Patch Available (was: Open) > CBO wrongly type cast decimal literal to int > > > Key: HIVE-17553 > URL: https://issues.apache.org/jira/browse/HIVE-17553 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17553.1.patch > > > {code:sql}explain select 100.000BD from f{code} > {noformat} > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: f > Select Operator > expressions: 100 (type: int) > outputColumnNames: _col0 > ListSink > {noformat} > Notice that the expression 100.000BD is of type int instead of decimal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17553) CBO wrongly type cast decimal literal to int
[ https://issues.apache.org/jira/browse/HIVE-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17553: --- Attachment: HIVE-17553.1.patch > CBO wrongly type cast decimal literal to int > > > Key: HIVE-17553 > URL: https://issues.apache.org/jira/browse/HIVE-17553 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17553.1.patch > > > {code:sql}explain select 100.000BD from f{code} > {noformat} > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: f > Select Operator > expressions: 100 (type: int) > outputColumnNames: _col0 > ListSink > {noformat} > Notice that the expression 100.000BD is of type int instead of decimal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HIVE-17553) CBO wrongly type cast decimal literal to int
[ https://issues.apache.org/jira/browse/HIVE-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193898#comment-16193898 ] Vineet Garg edited comment on HIVE-17553 at 10/5/17 11:24 PM: -- This happens due to RexBuilder::makeExactLiteral (in calcite) creating Integer/BigInteger if scale happens to be zero. Hive probably needs to create the type on its own and pass it to makeExactLiteral to preserve the type. was (Author: vgarg): This happens due to RexBuilder::makeExactLiteral creating Integer/BigInteger if scale happens to be zero. Hive probably need to create the type own its own and pass it to makeExactLiteral to preserve the type. > CBO wrongly type cast decimal literal to int > > > Key: HIVE-17553 > URL: https://issues.apache.org/jira/browse/HIVE-17553 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > > {code:sql}explain select 100.000BD from f{code} > {noformat} > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: f > Select Operator > expressions: 100 (type: int) > outputColumnNames: _col0 > ListSink > {noformat} > Notice that the expression 100.000BD is of type int instead of decimal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17553) CBO wrongly type cast decimal literal to int
[ https://issues.apache.org/jira/browse/HIVE-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-17553: -- Assignee: Vineet Garg > CBO wrongly type cast decimal literal to int > > > Key: HIVE-17553 > URL: https://issues.apache.org/jira/browse/HIVE-17553 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg > > {code:sql}explain select 100.000BD from f{code} > {noformat} > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: f > Select Operator > expressions: 100 (type: int) > outputColumnNames: _col0 > ListSink > {noformat} > Notice that the expression 100.000BD is of type int instead of decimal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17553) CBO wrongly type cast decimal literal to int
[ https://issues.apache.org/jira/browse/HIVE-17553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193898#comment-16193898 ] Vineet Garg commented on HIVE-17553: This happens due to RexBuilder::makeExactLiteral creating Integer/BigInteger if scale happens to be zero. Hive probably needs to create the type on its own and pass it to makeExactLiteral to preserve the type. > CBO wrongly type cast decimal literal to int > > > Key: HIVE-17553 > URL: https://issues.apache.org/jira/browse/HIVE-17553 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg > > {code:sql}explain select 100.000BD from f{code} > {noformat} > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: f > Select Operator > expressions: 100 (type: int) > outputColumnNames: _col0 > ListSink > {noformat} > Notice that the expression 100.000BD is of type int instead of decimal. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
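The root cause described in the comment above (a literal whose value normalizes to scale zero being typed as an integer) can be illustrated with plain {{java.math.BigDecimal}}. The {{inferredType}} helper below is a hypothetical stand-in for value-based type inference, not Calcite's actual {{RexBuilder}} code; it only shows why inferring the type from the value loses the declared decimal type of {{100.000BD}}.

```java
import java.math.BigDecimal;

public class DecimalScaleDemo {
    // Hypothetical value-based inference: if the normalized value has no
    // fractional digits, the literal looks like an int, even though the
    // declared type (decimal(6,3) for 100.000BD) says otherwise.
    static String inferredType(BigDecimal value) {
        BigDecimal normalized = value.stripTrailingZeros();
        return normalized.scale() <= 0
                ? "int"
                : "decimal(" + value.precision() + "," + value.scale() + ")";
    }

    public static void main(String[] args) {
        BigDecimal literal = new BigDecimal("100.000"); // written 100.000BD in HiveQL
        System.out.println(inferredType(literal));       // prints "int": the decimal type is lost
        System.out.println(inferredType(new BigDecimal("100.5"))); // prints "decimal(4,1)"
    }
}
```

The fix sketched in the comment is to compute the declared type up front and pass it to {{makeExactLiteral}}, rather than letting the builder infer it from the value.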
[jira] [Commented] (HIVE-17669) Cache to optimize SearchArgument deserialization
[ https://issues.apache.org/jira/browse/HIVE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193872#comment-16193872 ] Mithun Radhakrishnan commented on HIVE-17669: - We're back to passing tests. [~prasanth_j], please let me know if this version is acceptable. > Cache to optimize SearchArgument deserialization > > > Key: HIVE-17669 > URL: https://issues.apache.org/jira/browse/HIVE-17669 > Project: Hive > Issue Type: Improvement > Components: ORC, Query Processor >Affects Versions: 2.2.0, 3.0.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-17669.3.patch, HIVE-17669.4.patch, > HIVE-17699.1.patch, HIVE-17699.2.patch > > > And another, from [~selinazh] and [~cdrome]. (YHIVE-927) > When a mapper needs to process multiple ORC files, it might end up using > essentially the same {{SearchArgument}} over several files. It would be > good not to have to deserialize it from its string form over and over again. Caching the > object against the string form should speed things up. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
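The caching idea in the description above can be sketched in a few lines. {{SargCacheDemo}}, {{deserialize}}, and {{fromString}} are illustrative stand-ins, not Hive or ORC APIs; a real implementation would also bound the cache (e.g. with an eviction policy), which this sketch omits.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SargCacheDemo {
    static final AtomicInteger deserializations = new AtomicInteger();

    // Stand-in for the expensive step: deserializing a SearchArgument
    // from its serialized string form.
    static Object deserialize(String serialized) {
        deserializations.incrementAndGet();
        return new Object();
    }

    // Cache the deserialized object against its string form, so repeated
    // splits carrying the same serialized SearchArgument reuse one instance.
    static final Map<String, Object> cache = new ConcurrentHashMap<>();

    static Object fromString(String serialized) {
        return cache.computeIfAbsent(serialized, SargCacheDemo::deserialize);
    }

    public static void main(String[] args) {
        Object first = fromString("sarg-bytes");
        Object again = fromString("sarg-bytes");
        System.out.println(first == again); // prints "true": same cached instance, one deserialization
    }
}
```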
[jira] [Comment Edited] (HIVE-17576) Improve progress-reporting in TezProcessor
[ https://issues.apache.org/jira/browse/HIVE-17576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193858#comment-16193858 ] Mithun Radhakrishnan edited comment on HIVE-17576 at 10/5/17 10:44 PM: --- Hey, [~owen.omalley]. I'm trying to move the {{ReflectiveProgressHelper}} to the shims layer, as per your suggestion. It looks like I'll have to add a {{ProgressHelper}} interface to {{HadoopShims}}, and a {{ReflectiveProgressHelperImpl}} in {{HadoopShimsSecure}}. It's beginning to look a little convoluted, to go through a Shim, and then reflection, for this one call. Add to this the fact that we're mixing concerns by putting Tez stuff in {{HadoopShims}}. Edit: We should also consider that the reflection will likely be changed to a direct call, once Hive {{branch-2x}} is switched to a more current Tez version. I can post a patch to illustrate, but I'm beginning to wonder if this is overkill. :/ was (Author: mithun): Hey, [~owen.omalley]. I'm trying to move the {{ReflectiveProgressHelper}} to the shims layer, as per your suggestion. It looks like I'll have to add a {{ProgressHelper}} interface to {{HadoopShims}}, and a {{ReflectiveProgressHelperImpl}} in {{HadoopShimsSecure}}. It's beginning to look a little convoluted, to go through a Shim, and then reflection, for this one call. Add to this the fact that we're mixing concerns by putting Tez stuff in {{HadoopShims}}. I can post a patch to illustrate, but I'm beginning to wonder if this is overkill. :/ > Improve progress-reporting in TezProcessor > -- > > Key: HIVE-17576 > URL: https://issues.apache.org/jira/browse/HIVE-17576 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0, 3.0.0, 2.4.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-17576.1.patch, HIVE-17576.2-branch-2.patch, > HIVE-17576.2.patch > > > Another one on behalf of [~selinazh] and [~cdrome]. 
Following the example in > [Apache Tez's > {{MapProcessor}}|https://github.com/apache/tez/blob/247719d7314232f680f028f4e1a19370ffb7b1bb/tez-mapreduce/src/main/java/org/apache/tez/mapreduce/processor/map/MapProcessor.java#L88], > {{TezProcessor}} ought to use {{ProgressHelper}} to report progress for a > Tez task. As per [~kshukla]'s advice, > {quote} > Tez... provides {{getProgress()}} API for {{AbstractLogicalInput(s)}} which > will give the correct progress value for a given Input. The TezProcessor(s) > in Hive should use this to do something similar to what MapProcessor in Tez > does today, which is use/override ProgressHelper to get the input progress > and then set the progress on the processorContext. > ... > The default behavior of the ProgressHelper class sets the processor progress > to be the average of progress values from all inputs. > {quote} > This code is -whacked from- *inspired by* {{MapProcessor}}'s use of > {{ProgressHelper}}. > (For my reference, YHIVE-978.) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17672) Upgrade Calcite version to 1.14
[ https://issues.apache.org/jira/browse/HIVE-17672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-17672: --- Attachment: HIVE-17672.02.patch > Upgrade Calcite version to 1.14 > --- > > Key: HIVE-17672 > URL: https://issues.apache.org/jira/browse/HIVE-17672 > Project: Hive > Issue Type: Task >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-17672.01.patch, HIVE-17672.02.patch > > > Calcite 1.14.0 has been recently released. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17717) Enable rule to push post-aggregations into Druid
[ https://issues.apache.org/jira/browse/HIVE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-17717: --- Attachment: HIVE-17717.patch > Enable rule to push post-aggregations into Druid > > > Key: HIVE-17717 > URL: https://issues.apache.org/jira/browse/HIVE-17717 > Project: Hive > Issue Type: New Feature > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-17717.patch > > > Enable rule created by CALCITE-1803. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17576) Improve progress-reporting in TezProcessor
[ https://issues.apache.org/jira/browse/HIVE-17576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193858#comment-16193858 ] Mithun Radhakrishnan commented on HIVE-17576: - Hey, [~owen.omalley]. I'm trying to move the {{ReflectiveProgressHelper}} to the shims layer, as per your suggestion. It looks like I'll have to add a {{ProgressHelper}} interface to {{HadoopShims}}, and a {{ReflectiveProgressHelperImpl}} in {{HadoopShimsSecure}}. It's beginning to look a little convoluted, to go through a Shim, and then reflection, for this one call. Add to this the fact that we're mixing concerns by putting Tez stuff in {{HadoopShims}}. I can post a patch to illustrate, but I'm beginning to wonder if this is overkill. :/ > Improve progress-reporting in TezProcessor > -- > > Key: HIVE-17576 > URL: https://issues.apache.org/jira/browse/HIVE-17576 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0, 3.0.0, 2.4.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-17576.1.patch, HIVE-17576.2-branch-2.patch, > HIVE-17576.2.patch > > > Another one on behalf of [~selinazh] and [~cdrome]. Following the example in > [Apache Tez's > {{MapProcessor}}|https://github.com/apache/tez/blob/247719d7314232f680f028f4e1a19370ffb7b1bb/tez-mapreduce/src/main/java/org/apache/tez/mapreduce/processor/map/MapProcessor.java#L88], > {{TezProcessor}} ought to use {{ProgressHelper}} to report progress for a > Tez task. As per [~kshukla]'s advice, > {quote} > Tez... provides {{getProgress()}} API for {{AbstractLogicalInput(s)}} which > will give the correct progress value for a given Input. The TezProcessor(s) > in Hive should use this to do something similar to what MapProcessor in Tez > does today, which is use/override ProgressHelper to get the input progress > and then set the progress on the processorContext. > ... 
> The default behavior of the ProgressHelper class sets the processor progress > to be the average of progress values from all inputs. > {quote} > This code is -whacked from- *inspired by* {{MapProcessor}}'s use of > {{ProgressHelper}}. > (For my reference, YHIVE-978.) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
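The default behavior quoted above, processor progress as the average of all input progress values, can be sketched as follows. {{averageProgress}} is an illustrative stand-in, not Tez's actual {{ProgressHelper}} class.

```java
public class ProgressAverageDemo {
    // Mirrors the described default: the processor's progress is the
    // average of the progress reported by each logical input.
    static float averageProgress(float[] inputProgress) {
        if (inputProgress.length == 0) {
            return 0f; // no inputs: nothing to average
        }
        float sum = 0f;
        for (float p : inputProgress) {
            sum += p;
        }
        return sum / inputProgress.length;
    }

    public static void main(String[] args) {
        // Two inputs: one fully read, one half read -> 0.75 overall.
        System.out.println(averageProgress(new float[] {1.0f, 0.5f}));
    }
}
```

In the real integration, the resulting value would be set on the processor context so the framework can report task progress.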
[jira] [Updated] (HIVE-17717) Enable rule to push post-aggregations into Druid
[ https://issues.apache.org/jira/browse/HIVE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-17717: --- Description: Enable rule created by CALCITE-1803. > Enable rule to push post-aggregations into Druid > > > Key: HIVE-17717 > URL: https://issues.apache.org/jira/browse/HIVE-17717 > Project: Hive > Issue Type: New Feature > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > > Enable rule created by CALCITE-1803. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17717) Enable rule to push post-aggregations into Druid
[ https://issues.apache.org/jira/browse/HIVE-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-17717: -- > Enable rule to push post-aggregations into Druid > > > Key: HIVE-17717 > URL: https://issues.apache.org/jira/browse/HIVE-17717 > Project: Hive > Issue Type: New Feature > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17715) Exception when pushing postaggregates into Druid
[ https://issues.apache.org/jira/browse/HIVE-17715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-17715: --- Target Version/s: (was: 3.0.0) > Exception when pushing postaggregates into Druid > > > Key: HIVE-17715 > URL: https://issues.apache.org/jira/browse/HIVE-17715 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > > After Calcite is upgraded to 1.14 and the rule to push post-aggregations to > Druid is enabled, the following query will fail: > {code} > EXPLAIN > SELECT language, robot, sum(added) - sum(delta) AS a > FROM druid_table_1 > WHERE extract (week from `__time`) IN (10,11) > AND robot='Bird Call' > GROUP BY language, robot; > {code} > The error we get is the following: > {code} > Cannot add expression of different type to set: > set type is RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" COLLATE > "ISO-8859-1$en_US$primary" language, VARCHAR(2147483647) CHARACTER SET > "UTF-16LE" COLLATE "ISO-8859-1$en_US$primary" robot, DOUBLE a) NOT NULL > expression type is RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" > COLLATE "ISO-8859-1$en_US$primary" language, DOUBLE postagg#0) NOT NULL > set is > rel#1507:HiveProject.HIVE.[](input=HepRelVertex#1514,language=$0,robot=CAST(_UTF-16LE'Bird > Call'):VARCHAR(2147483647) CHARACTER SET "UTF-16LE" COLLATE > "ISO-8859-1$en_US$primary",a=-($1, $2)) > expression is DruidQuery#1516 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17716) Not pushing postaggregations into Druid due to CAST on constant
[ https://issues.apache.org/jira/browse/HIVE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-17716: --- Issue Type: Improvement (was: Bug) > Not pushing postaggregations into Druid due to CAST on constant > --- > > Key: HIVE-17716 > URL: https://issues.apache.org/jira/browse/HIVE-17716 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > > After Calcite is upgraded to 1.14 and the rule to push post-aggregations to > Druid is enabled, the following query fails to create a postaggregation: > {code} > EXPLAIN > SELECT language, sum(added) + 100 AS a > FROM druid_table_1 > GROUP BY language > ORDER BY a DESC; > {code} > The problem seems to be that the CAST is getting in the way of the rule being > applied. In particular, this is the final Calcite plan: > {code} > HiveSortLimit(sort0=[$1], dir0=[DESC-nulls-last]) > HiveProject(language=[$0], a=[+($1, CAST(100):DOUBLE)]) > DruidQuery(table=[[default.druid_table_1]], > intervals=[[1900-01-01T00:00:00.000/3000-01-01T00:00:00.000]], groups=[{6}], > aggs=[[sum($10)]]) > {code} > There are two different parts to explore to seek a solution: 1) why > {{CAST(100):DOUBLE)}} is not folded to {{100.0d}}, and 2) whether the rule to > push post-aggregations to Druid could handle the CAST in some particular > cases. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
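Point 1 above, folding {{CAST(100):DOUBLE}} to {{100.0d}}, amounts to plan-time constant folding: a cast over a literal can be replaced by a literal of the target type before rules that match only plain literals run. A minimal stand-alone sketch, with hypothetical names rather than Calcite's actual RexBuilder/RexSimplify API:

```java
public class CastFoldDemo {
    // Hypothetical folding step: if the CAST operand is an integer literal,
    // replace the whole CAST expression with a double literal of the same
    // value, so downstream rules see a plain constant.
    static Object foldCastToDouble(Object operand) {
        if (operand instanceof Integer) {
            return ((Integer) operand).doubleValue(); // fold: literal int -> literal double
        }
        return operand; // not a foldable literal; the CAST would stay in the plan
    }

    public static void main(String[] args) {
        System.out.println(foldCastToDouble(100)); // prints "100.0"
    }
}
```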
[jira] [Commented] (HIVE-17669) Cache to optimize SearchArgument deserialization
[ https://issues.apache.org/jira/browse/HIVE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193850#comment-16193850 ] Hive QA commented on HIVE-17669: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890588/HIVE-17669.4.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 11200 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7144/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7144/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7144/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 6 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12890588 - PreCommit-HIVE-Build > Cache to optimize SearchArgument deserialization > > > Key: HIVE-17669 > URL: https://issues.apache.org/jira/browse/HIVE-17669 > Project: Hive > Issue Type: Improvement > Components: ORC, Query Processor >Affects Versions: 2.2.0, 3.0.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-17669.3.patch, HIVE-17669.4.patch, > HIVE-17699.1.patch, HIVE-17699.2.patch > > > And another, from [~selinazh] and [~cdrome]. (YHIVE-927) > When a mapper needs to process multiple ORC files, it might end up using > essentially the same {{SearchArgument}} over several files. It would be > good not to have to deserialize it from its string form over and over again. Caching the > object against the string form should speed things up. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17716) Not pushing postaggregations into Druid due to CAST on constant
[ https://issues.apache.org/jira/browse/HIVE-17716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-17716: -- > Not pushing postaggregations into Druid due to CAST on constant > --- > > Key: HIVE-17716 > URL: https://issues.apache.org/jira/browse/HIVE-17716 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > > After Calcite is upgraded to 1.14 and the rule to push post-aggregations to > Druid is enabled, the following query fails to create a postaggregation: > {code} > EXPLAIN > SELECT language, sum(added) + 100 AS a > FROM druid_table_1 > GROUP BY language > ORDER BY a DESC; > {code} > The problem seems to be that the CAST is getting in the way of the rule being > applied. In particular, this is the final Calcite plan: > {code} > HiveSortLimit(sort0=[$1], dir0=[DESC-nulls-last]) > HiveProject(language=[$0], a=[+($1, CAST(100):DOUBLE)]) > DruidQuery(table=[[default.druid_table_1]], > intervals=[[1900-01-01T00:00:00.000/3000-01-01T00:00:00.000]], groups=[{6}], > aggs=[[sum($10)]]) > {code} > There are two different parts to explore to seek a solution: 1) why > {{CAST(100):DOUBLE)}} is not folded to {{100.0d}}, and 2) whether the rule to > push post-aggregations to Druid could handle the CAST in some particular > cases. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193847#comment-16193847 ] Sergey Shelukhin commented on HIVE-17566: - The rules are mapped to the pool. Actually yeah it's attached to RP now. This should be changed. > Create schema required for workload management. > --- > > Key: HIVE-17566 > URL: https://issues.apache.org/jira/browse/HIVE-17566 > Project: Hive > Issue Type: Sub-task >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash > Attachments: HIVE-17566.01.patch, HIVE-17566.02.patch, > HIVE-17566.03.patch, HIVE-17566.04.patch, HIVE-17566.05.patch, > HIVE-17566.06.patch > > > Schema + model changes required for workload management. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17574) Avoid multiple copies of HDFS-based jars when localizing job-jars
[ https://issues.apache.org/jira/browse/HIVE-17574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193822#comment-16193822 ] Mithun Radhakrishnan commented on HIVE-17574: - bq. I'll have to add documentation for {{hive.resource.use.hdfs.location}} in Hive Documentation I've added this configuration to the table in the [Hive Configurations|https://cwiki.apache.org/confluence/display/Hive/AdminManual+Configuration#AdminManualConfiguration-HiveConfigurationVariables] page. > Avoid multiple copies of HDFS-based jars when localizing job-jars > - > > Key: HIVE-17574 > URL: https://issues.apache.org/jira/browse/HIVE-17574 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0, 3.0.0, 2.4.0 >Reporter: Mithun Radhakrishnan >Assignee: Chris Drome > Attachments: HIVE-17574.1-branch-2.2.patch, > HIVE-17574.1-branch-2.patch, HIVE-17574.1.patch, HIVE-17574.2.patch > > > Raising this on behalf of [~selinazh]. (For my own reference: YHIVE-1035.) > This has to do with the classpaths of Hive actions run from Oozie, and > affects scripts that adds jars/resources from HDFS locations. > As part of Oozie's "sharelib" deploys, foundation jars (such as Hive jars) > tend to be stored in HDFS paths, as are any custom user-libraries used in > workflows. An {{ADD JAR|FILE|ARCHIVE}} statement in a Hive script causes the > following steps to occur: > # Files are downloaded from HDFS to local temp dir. > # UDFs are resolved/validated. > # All jars/files, including those just downloaded from HDFS, are shipped > right back to HDFS-based scratch-directories, for job submission. > For HDFS-based files, this is wasteful and time-consuming. #3 above should > skip shipping HDFS-based resources, and add those directly to the Tez session. > We have a patch that's being used internally at Yahoo. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
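The fix proposed in the description for step 3, skipping the re-upload of resources that are already on HDFS, reduces to a URI scheme check. {{resourcesToShip}} below is a hypothetical name for illustration, not Hive's actual code; the real patch would also register the HDFS-based resources directly with the Tez session.

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

public class ResourceLocalizationDemo {
    // Partition added resources: only non-HDFS resources need to be copied
    // to the job's scratch directory; HDFS-based ones can be referenced by
    // their original URI.
    static List<String> resourcesToShip(List<String> resources) {
        List<String> toShip = new ArrayList<>();
        for (String r : resources) {
            String scheme = URI.create(r).getScheme();
            if (!"hdfs".equals(scheme)) {
                toShip.add(r); // local/non-HDFS resources still get uploaded
            }
        }
        return toShip;
    }

    public static void main(String[] args) {
        List<String> added = List.of("hdfs:///sharelib/hive-exec.jar", "file:///tmp/my-udf.jar");
        System.out.println(resourcesToShip(added)); // prints "[file:///tmp/my-udf.jar]"
    }
}
```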
[jira] [Updated] (HIVE-17679) http-generic-click-jacking for WebHcat server
[ https://issues.apache.org/jira/browse/HIVE-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-17679: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks Yongzhi for reviewing. > http-generic-click-jacking for WebHcat server > - > > Key: HIVE-17679 > URL: https://issues.apache.org/jira/browse/HIVE-17679 > Project: Hive > Issue Type: Bug > Components: Security, WebHCat >Affects Versions: 2.1.1 >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 3.0.0 > > Attachments: HIVE-17679.1.patch, HIVE-17679.2.patch > > > The web UIs do not include the "X-Frame-Options" header to prevent the pages > from being framed from another site. > Reference: > https://www.owasp.org/index.php/Clickjacking > https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet > https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options -- This message was sent by Atlassian JIRA (v6.4.14#64029)
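The clickjacking defense referenced above comes down to attaching one header to every HTTP response, so browsers refuse to render the page inside a frame on another origin. A minimal sketch using a plain {{Map}} as a stand-in for the servlet response object (the patch's actual wiring into WebHCat's Jetty setup is not shown here):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ClickjackingHeaderDemo {
    // Add the X-Frame-Options header to a response's header set.
    // "SAMEORIGIN" permits framing only by pages from the same origin;
    // "DENY" would block all framing.
    static Map<String, String> withFrameProtection(Map<String, String> responseHeaders) {
        Map<String, String> headers = new LinkedHashMap<>(responseHeaders);
        headers.put("X-Frame-Options", "SAMEORIGIN");
        return headers;
    }

    public static void main(String[] args) {
        System.out.println(withFrameProtection(new LinkedHashMap<>()));
        // prints "{X-Frame-Options=SAMEORIGIN}"
    }
}
```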
[jira] [Commented] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193814#comment-16193814 ] Prasanth Jayachandran commented on HIVE-17566: -- - What does path in Pool mean? Is it to store hierarchical paths, something like /p1/p2? - WM_TRIGGER table can be renamed to WM_RULE, since we are creating a rule which contains a trigger expression. - Also, should rules be mapped to a pool or to a resource plan? If rules are mapped to an RP, will all pools defined in the RP get the same rules? What happens if a Rule (say R1) is mapped to an RP and another rule (say R2) is mapped to a Pool? Will the pool get both rules R1 and R2? > Create schema required for workload management. > --- > > Key: HIVE-17566 > URL: https://issues.apache.org/jira/browse/HIVE-17566 > Project: Hive > Issue Type: Sub-task >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash > Attachments: HIVE-17566.01.patch, HIVE-17566.02.patch, > HIVE-17566.03.patch, HIVE-17566.04.patch, HIVE-17566.05.patch, > HIVE-17566.06.patch > > > Schema + model changes required for workload management. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17715) Exception when pushing postaggregates into Druid
[ https://issues.apache.org/jira/browse/HIVE-17715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-17715: -- > Exception when pushing postaggregates into Druid > > > Key: HIVE-17715 > URL: https://issues.apache.org/jira/browse/HIVE-17715 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > > After Calcite is upgraded to 1.14 and the rule to push post-aggregations to > Druid is enabled, the following query will fail: > {code} > EXPLAIN > SELECT language, robot, sum(added) - sum(delta) AS a > FROM druid_table_1 > WHERE extract (week from `__time`) IN (10,11) > AND robot='Bird Call' > GROUP BY language, robot; > {code} > The error we get is the following: > {code} > Cannot add expression of different type to set: > set type is RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" COLLATE > "ISO-8859-1$en_US$primary" language, VARCHAR(2147483647) CHARACTER SET > "UTF-16LE" COLLATE "ISO-8859-1$en_US$primary" robot, DOUBLE a) NOT NULL > expression type is RecordType(VARCHAR(2147483647) CHARACTER SET "UTF-16LE" > COLLATE "ISO-8859-1$en_US$primary" language, DOUBLE postagg#0) NOT NULL > set is > rel#1507:HiveProject.HIVE.[](input=HepRelVertex#1514,language=$0,robot=CAST(_UTF-16LE'Bird > Call'):VARCHAR(2147483647) CHARACTER SET "UTF-16LE" COLLATE > "ISO-8859-1$en_US$primary",a=-($1, $2)) > expression is DruidQuery#1516 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17574) Avoid multiple copies of HDFS-based jars when localizing job-jars
[ https://issues.apache.org/jira/browse/HIVE-17574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mithun Radhakrishnan updated HIVE-17574: Resolution: Fixed Status: Resolved (was: Patch Available) > Avoid multiple copies of HDFS-based jars when localizing job-jars > - > > Key: HIVE-17574 > URL: https://issues.apache.org/jira/browse/HIVE-17574 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0, 3.0.0, 2.4.0 >Reporter: Mithun Radhakrishnan >Assignee: Chris Drome > Attachments: HIVE-17574.1-branch-2.2.patch, > HIVE-17574.1-branch-2.patch, HIVE-17574.1.patch, HIVE-17574.2.patch > > > Raising this on behalf of [~selinazh]. (For my own reference: YHIVE-1035.) > This has to do with the classpaths of Hive actions run from Oozie, and > affects scripts that adds jars/resources from HDFS locations. > As part of Oozie's "sharelib" deploys, foundation jars (such as Hive jars) > tend to be stored in HDFS paths, as are any custom user-libraries used in > workflows. An {{ADD JAR|FILE|ARCHIVE}} statement in a Hive script causes the > following steps to occur: > # Files are downloaded from HDFS to local temp dir. > # UDFs are resolved/validated. > # All jars/files, including those just downloaded from HDFS, are shipped > right back to HDFS-based scratch-directories, for job submission. > For HDFS-based files, this is wasteful and time-consuming. #3 above should > skip shipping HDFS-based resources, and add those directly to the Tez session. > We have a patch that's being used internally at Yahoo. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (HIVE-17700) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushanth Sowmyan resolved HIVE-17700. - Resolution: Fixed > Update committer list > - > > Key: HIVE-17700 > URL: https://issues.apache.org/jira/browse/HIVE-17700 > Project: Hive > Issue Type: Task >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Trivial > Attachments: HIVE-17700.patch > > > Please update committer list for Sushanth to remove company name (and move to > emeritus list) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17700) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193791#comment-16193791 ] Sushanth Sowmyan commented on HIVE-17700: - Thank you, committed. :) > Update committer list > - > > Key: HIVE-17700 > URL: https://issues.apache.org/jira/browse/HIVE-17700 > Project: Hive > Issue Type: Task >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Trivial > Attachments: HIVE-17700.patch > > > Please update committer list for Sushanth to remove company name (and move to > emeritus list) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17574) Avoid multiple copies of HDFS-based jars when localizing job-jars
[ https://issues.apache.org/jira/browse/HIVE-17574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193774#comment-16193774 ] Mithun Radhakrishnan commented on HIVE-17574: - Hello, [~sershe]. Thank you for the review, and confirming about the test failures. I've checked this into {{master}}, {{branch-2}}, and {{branch-2.2}}. Thanks for working on this, [~cdrome]. I'll have to add documentation for {{hive.resource.use.hdfs.location}} in Hive Documentation > Avoid multiple copies of HDFS-based jars when localizing job-jars > - > > Key: HIVE-17574 > URL: https://issues.apache.org/jira/browse/HIVE-17574 > Project: Hive > Issue Type: Bug >Affects Versions: 2.2.0, 3.0.0, 2.4.0 >Reporter: Mithun Radhakrishnan >Assignee: Chris Drome > Attachments: HIVE-17574.1-branch-2.2.patch, > HIVE-17574.1-branch-2.patch, HIVE-17574.1.patch, HIVE-17574.2.patch > > > Raising this on behalf of [~selinazh]. (For my own reference: YHIVE-1035.) > This has to do with the classpaths of Hive actions run from Oozie, and > affects scripts that adds jars/resources from HDFS locations. > As part of Oozie's "sharelib" deploys, foundation jars (such as Hive jars) > tend to be stored in HDFS paths, as are any custom user-libraries used in > workflows. An {{ADD JAR|FILE|ARCHIVE}} statement in a Hive script causes the > following steps to occur: > # Files are downloaded from HDFS to local temp dir. > # UDFs are resolved/validated. > # All jars/files, including those just downloaded from HDFS, are shipped > right back to HDFS-based scratch-directories, for job submission. > For HDFS-based files, this is wasteful and time-consuming. #3 above should > skip shipping HDFS-based resources, and add those directly to the Tez session. > We have a patch that's being used internally at Yahoo. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193766#comment-16193766 ] Sergey Shelukhin commented on HIVE-17566: - +1 with a small caveat, that I can fix on commit; PARENT_POOL_ID should allow nulls. [~prasanth_j] do you have any comments? see package.jdo, the DB and thrift changes. > Create schema required for workload management. > --- > > Key: HIVE-17566 > URL: https://issues.apache.org/jira/browse/HIVE-17566 > Project: Hive > Issue Type: Sub-task >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash > Attachments: HIVE-17566.01.patch, HIVE-17566.02.patch, > HIVE-17566.03.patch, HIVE-17566.04.patch, HIVE-17566.05.patch, > HIVE-17566.06.patch > > > Schema + model changes required for workload management. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-13282) GroupBy and select operator encounter ArrayIndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HIVE-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13282: Attachment: HIVE-13282.02.patch Attempting to rebase... > GroupBy and select operator encounter ArrayIndexOutOfBoundsException > > > Key: HIVE-13282 > URL: https://issues.apache.org/jira/browse/HIVE-13282 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 1.2.1, 2.0.0, 2.1.0 >Reporter: Vikram Dixit K >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13282.01.patch, HIVE-13282.02.patch, > smb_fail_issue.patch, smb_groupby.q, smb_groupby.q.out > > > The group by and select operators run into the ArrayIndexOutOfBoundsException > when they incorrectly initialize themselves with tag 0 but the incoming tag > id is different. > {code} > select count(*) from > (select rt1.id from > (select t1.key as id, t1.value as od from tab t1 group by key, value) rt1) vt1 > join > (select rt2.id from > (select t2.key as id, t2.value as od from tab_part t2 group by key, value) > rt2) vt2 > where vt1.id=vt2.id; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
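[Editor's illustrative sketch] The HIVE-13282 description says the operators "incorrectly initialize themselves with tag 0 but the incoming tag id is different." A schematic stand-in (not Hive's actual operator code) shows why that produces an ArrayIndexOutOfBoundsException:

```java
/**
 * Schematic illustration (not Hive's actual operator code) of the failure
 * mode in HIVE-13282: an operator sizes its per-tag state assuming only
 * tag 0 will ever arrive, so a row carrying a different tag index runs
 * off the end of the array.
 */
public class TagInitDemo {

    private final Object[] perTagState;

    /** Buggy initialization: allocates state for tag 0 only. */
    TagInitDemo() {
        this.perTagState = new Object[1];
    }

    /** Throws ArrayIndexOutOfBoundsException when tag >= 1. */
    void process(Object row, int tag) {
        perTagState[tag] = row; // AIOOBE for any tag the operator didn't expect
    }
}
```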
[jira] [Assigned] (HIVE-13282) GroupBy and select operator encounter ArrayIndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HIVE-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-13282: --- Assignee: Matt McCline (was: Sergey Shelukhin) > GroupBy and select operator encounter ArrayIndexOutOfBoundsException > > > Key: HIVE-13282 > URL: https://issues.apache.org/jira/browse/HIVE-13282 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 1.2.1, 2.0.0, 2.1.0 >Reporter: Vikram Dixit K >Assignee: Matt McCline >Priority: Blocker > Attachments: HIVE-13282.01.patch, HIVE-13282.02.patch, > smb_fail_issue.patch, smb_groupby.q, smb_groupby.q.out > > > The group by and select operators run into the ArrayIndexOutOfBoundsException > when they incorrectly initialize themselves with tag 0 but the incoming tag > id is different. > {code} > select count(*) from > (select rt1.id from > (select t1.key as id, t1.value as od from tab t1 group by key, value) rt1) vt1 > join > (select rt2.id from > (select t2.key as id, t2.value as od from tab_part t2 group by key, value) > rt2) vt2 > where vt1.id=vt2.id; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-13282) GroupBy and select operator encounter ArrayIndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HIVE-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-13282: --- Assignee: Sergey Shelukhin (was: Matt McCline) > GroupBy and select operator encounter ArrayIndexOutOfBoundsException > > > Key: HIVE-13282 > URL: https://issues.apache.org/jira/browse/HIVE-13282 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 1.2.1, 2.0.0, 2.1.0 >Reporter: Vikram Dixit K >Assignee: Sergey Shelukhin >Priority: Blocker > Attachments: HIVE-13282.01.patch, smb_fail_issue.patch, > smb_groupby.q, smb_groupby.q.out > > > The group by and select operators run into the ArrayIndexOutOfBoundsException > when they incorrectly initialize themselves with tag 0 but the incoming tag > id is different. > {code} > select count(*) from > (select rt1.id from > (select t1.key as id, t1.value as od from tab t1 group by key, value) rt1) vt1 > join > (select rt2.id from > (select t2.key as id, t2.value as od from tab_part t2 group by key, value) > rt2) vt2 > where vt1.id=vt2.id; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (HIVE-17711) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg resolved HIVE-17711. Resolution: Fixed > Update committer list > - > > Key: HIVE-17711 > URL: https://issues.apache.org/jira/browse/HIVE-17711 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17711.2.patch, HIVE-17711.patch > > > Update hive committer list to include: > Name: Vineet Garg > Apache ID: vgarg > Organization: Hortonworks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193737#comment-16193737 ] Hive QA commented on HIVE-17566: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890571/HIVE-17566.06.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 11200 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[zero_rows_hdfs] (batchId=243) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query23] (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7143/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7143/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7143/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12890571 - PreCommit-HIVE-Build > Create schema required for workload management. 
> --- > > Key: HIVE-17566 > URL: https://issues.apache.org/jira/browse/HIVE-17566 > Project: Hive > Issue Type: Sub-task >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash > Attachments: HIVE-17566.01.patch, HIVE-17566.02.patch, > HIVE-17566.03.patch, HIVE-17566.04.patch, HIVE-17566.05.patch, > HIVE-17566.06.patch > > > Schema + model changes required for workload management. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17711) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193671#comment-16193671 ] Vineet Garg commented on HIVE-17711: With earlier change I accidentally added my name to PMC, latest patch should fix it. > Update committer list > - > > Key: HIVE-17711 > URL: https://issues.apache.org/jira/browse/HIVE-17711 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17711.2.patch, HIVE-17711.patch > > > Update hive committer list to include: > Name: Vineet Garg > Apache ID: vgarg > Organization: Hortonworks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17711) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17711: --- Attachment: HIVE-17711.2.patch > Update committer list > - > > Key: HIVE-17711 > URL: https://issues.apache.org/jira/browse/HIVE-17711 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17711.2.patch, HIVE-17711.patch > > > Update hive committer list to include: > Name: Vineet Garg > Apache ID: vgarg > Organization: Hortonworks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17669) Cache to optimize SearchArgument deserialization
[ https://issues.apache.org/jira/browse/HIVE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mithun Radhakrishnan updated HIVE-17669: Attachment: HIVE-17669.4.patch Ok, it looks like the Guava {{CacheBuilder}} doesn't allow one to simultaneously use {{maximumWeight}} with {{maximumSize}}. Those are mutually exclusive, as reflected in the test-failures. I've removed the {{maximumSize}} setting, and kept the default {{maximumWeight}} at 10MB. A value of {{0}} disables the cache entirely. > Cache to optimize SearchArgument deserialization > > > Key: HIVE-17669 > URL: https://issues.apache.org/jira/browse/HIVE-17669 > Project: Hive > Issue Type: Improvement > Components: ORC, Query Processor >Affects Versions: 2.2.0, 3.0.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-17669.3.patch, HIVE-17669.4.patch, > HIVE-17699.1.patch, HIVE-17699.2.patch > > > And another, from [~selinazh] and [~cdrome]. (YHIVE-927) > When a mapper needs to process multiple ORC files, it might land up having > use essentially the same {{SearchArgument}} over several files. It would be > good not to have to deserialize from string, over and over again. Caching the > object against the string-form should speed things up. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
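[Editor's illustrative sketch] The comment above notes that Guava's {{CacheBuilder}} treats {{maximumWeight}} and {{maximumSize}} as mutually exclusive bounds, and that a weight of 0 disables the cache. A stdlib-only sketch of weight-bounded LRU eviction (this is neither Guava nor the HIVE-17669 patch; the string-length weigher stands in for serialized SearchArgument size) looks like:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Stdlib-only sketch of a weight-bounded cache, illustrating the
 * maximumWeight semantics discussed above (not Guava's CacheBuilder
 * and not the actual HIVE-17669 patch). Entries are evicted in LRU
 * order once the total weight exceeds the bound; a bound of 0
 * disables caching entirely.
 */
public class WeightBoundedCache<K> {

    private final long maxWeight;
    private long totalWeight;
    private final LinkedHashMap<K, String> map =
        new LinkedHashMap<>(16, 0.75f, true); // access-order => LRU iteration

    public WeightBoundedCache(long maxWeight) {
        this.maxWeight = maxWeight;
    }

    /** Weighs an entry by string length, standing in for SARG size. */
    private long weigh(String value) {
        return value.length();
    }

    public void put(K key, String value) {
        if (maxWeight <= 0) {
            return; // weight 0 disables the cache entirely
        }
        String old = map.put(key, value);
        if (old != null) {
            totalWeight -= weigh(old);
        }
        totalWeight += weigh(value);
        // Evict least-recently-used entries until back under the bound.
        Iterator<Map.Entry<K, String>> it = map.entrySet().iterator();
        while (totalWeight > maxWeight && it.hasNext()) {
            Map.Entry<K, String> eldest = it.next();
            totalWeight -= weigh(eldest.getValue());
            it.remove();
        }
    }

    public String get(K key) {
        return map.get(key);
    }
}
```

Guava expresses the same policy declaratively via {{maximumWeight(...)}} plus a {{Weigher}}; the point of the patch discussion is simply that you pick one bound — weight or entry count — not both.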
[jira] [Updated] (HIVE-17669) Cache to optimize SearchArgument deserialization
[ https://issues.apache.org/jira/browse/HIVE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mithun Radhakrishnan updated HIVE-17669: Status: Patch Available (was: Open) > Cache to optimize SearchArgument deserialization > > > Key: HIVE-17669 > URL: https://issues.apache.org/jira/browse/HIVE-17669 > Project: Hive > Issue Type: Improvement > Components: ORC, Query Processor >Affects Versions: 2.2.0, 3.0.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-17669.3.patch, HIVE-17669.4.patch, > HIVE-17699.1.patch, HIVE-17699.2.patch > > > And another, from [~selinazh] and [~cdrome]. (YHIVE-927) > When a mapper needs to process multiple ORC files, it might land up having > use essentially the same {{SearchArgument}} over several files. It would be > good not to have to deserialize from string, over and over again. Caching the > object against the string-form should speed things up. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17714) move custom SerDe schema considerations into metastore from QL
[ https://issues.apache.org/jira/browse/HIVE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193642#comment-16193642 ] Sergey Shelukhin commented on HIVE-17714: - cc [~alangates] > move custom SerDe schema considerations into metastore from QL > -- > > Key: HIVE-17714 > URL: https://issues.apache.org/jira/browse/HIVE-17714 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Alan Gates > > Columns in metastore for tables that use external schema don't have the type > information (since HIVE-11985) and may be entirely inconsistent (since > forever, due to issues like HIVE-17713; or for SerDes that allow an URL for > the schema, due to the change in the underlying file). > Currently, if you trace the usage of ConfVars.SERDESUSINGMETASTOREFORSCHEMA, > and to MetaStoreUtils.getFieldsFromDeserializer, you'd see that the code in > QL handles this in Hive. So, for the most part metastore just returns > whatever is stored for columns in the database. > One exception appears to be get_fields_with_environment_context, which is > interesting... so getTable will return incorrect columns (potentially), but > get_fields/get_schema will return correct ones from SerDe as far as I can > tell. > As part of separating the metastore, we should make sure all the APIs return > the correct schema for the columns; it's not a good idea to have everyone > reimplement getFieldsFromDeserializer. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17714) move custom SerDe schema considerations into metastore from QL
[ https://issues.apache.org/jira/browse/HIVE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-17714: Description: Columns in metastore for tables that use external schema don't have the type information (since HIVE-11985) and may be entirely inconsistent (since forever, due to issues like HIVE-17713; or for SerDes that allow an URL for the schema, due to a change in the underlying file). Currently, if you trace the usage of ConfVars.SERDESUSINGMETASTOREFORSCHEMA, and to MetaStoreUtils.getFieldsFromDeserializer, you'd see that the code in QL handles this in Hive. So, for the most part metastore just returns whatever is stored for columns in the database. One exception appears to be get_fields_with_environment_context, which is interesting... so getTable will return incorrect columns (potentially), but get_fields/get_schema will return correct ones from SerDe as far as I can tell. As part of separating the metastore, we should make sure all the APIs return the correct schema for the columns; it's not a good idea to have everyone reimplement getFieldsFromDeserializer. was: Columns in metastore for tables that use external schema don't have the type information (since HIVE-11985) and may be entirely inconsistent (since forever, due to issues like HIVE-17713; or for SerDes that allow an URL for the schema, due to the change in the underlying file). Currently, if you trace the usage of ConfVars.SERDESUSINGMETASTOREFORSCHEMA, and to MetaStoreUtils.getFieldsFromDeserializer, you'd see that the code in QL handles this in Hive. So, for the most part metastore just returns whatever is stored for columns in the database. One exception appears to be get_fields_with_environment_context, which is interesting... so getTable will return incorrect columns (potentially), but get_fields/get_schema will return correct ones from SerDe as far as I can tell. 
As part of separating the metastore, we should make sure all the APIs return the correct schema for the columns; it's not a good idea to have everyone reimplement getFieldsFromDeserializer. > move custom SerDe schema considerations into metastore from QL > -- > > Key: HIVE-17714 > URL: https://issues.apache.org/jira/browse/HIVE-17714 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Alan Gates > > Columns in metastore for tables that use external schema don't have the type > information (since HIVE-11985) and may be entirely inconsistent (since > forever, due to issues like HIVE-17713; or for SerDes that allow an URL for > the schema, due to a change in the underlying file). > Currently, if you trace the usage of ConfVars.SERDESUSINGMETASTOREFORSCHEMA, > and to MetaStoreUtils.getFieldsFromDeserializer, you'd see that the code in > QL handles this in Hive. So, for the most part metastore just returns > whatever is stored for columns in the database. > One exception appears to be get_fields_with_environment_context, which is > interesting... so getTable will return incorrect columns (potentially), but > get_fields/get_schema will return correct ones from SerDe as far as I can > tell. > As part of separating the metastore, we should make sure all the APIs return > the correct schema for the columns; it's not a good idea to have everyone > reimplement getFieldsFromDeserializer. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17669) Cache to optimize SearchArgument deserialization
[ https://issues.apache.org/jira/browse/HIVE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mithun Radhakrishnan updated HIVE-17669: Status: Open (was: Patch Available) > Cache to optimize SearchArgument deserialization > > > Key: HIVE-17669 > URL: https://issues.apache.org/jira/browse/HIVE-17669 > Project: Hive > Issue Type: Improvement > Components: ORC, Query Processor >Affects Versions: 2.2.0, 3.0.0 >Reporter: Mithun Radhakrishnan >Assignee: Mithun Radhakrishnan > Attachments: HIVE-17669.3.patch, HIVE-17699.1.patch, > HIVE-17699.2.patch > > > And another, from [~selinazh] and [~cdrome]. (YHIVE-927) > When a mapper needs to process multiple ORC files, it might land up having > use essentially the same {{SearchArgument}} over several files. It would be > good not to have to deserialize from string, over and over again. Caching the > object against the string-form should speed things up. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17714) move custom SerDe schema considerations into metastore from QL
[ https://issues.apache.org/jira/browse/HIVE-17714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-17714: --- Assignee: Alan Gates > move custom SerDe schema considerations into metastore from QL > -- > > Key: HIVE-17714 > URL: https://issues.apache.org/jira/browse/HIVE-17714 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Alan Gates > > Columns in metastore for tables that use external schema don't have the type > information (since HIVE-11985) and may be entirely inconsistent (since > forever, due to issues like HIVE-17713; or for SerDes that allow an URL for > the schema, due to the change in the underlying file). > Currently, if you trace the usage of ConfVars.SERDESUSINGMETASTOREFORSCHEMA, > and to MetaStoreUtils.getFieldsFromDeserializer, you'd see that the code in > QL handles this in Hive. So, for the most part metastore just returns > whatever is stored for columns in the database. > One exception appears to be get_fields_with_environment_context, which is > interesting... so getTable will return incorrect columns (potentially), but > get_fields/get_schema will return correct ones from SerDe as far as I can > tell. > As part of separating the metastore, we should make sure all the APIs return > the correct schema for the columns; it's not a good idea to have everyone > reimplement getFieldsFromDeserializer. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17713) disallow adding/modifying columns in metastore for Avro/CSV/etc. SerDes
[ https://issues.apache.org/jira/browse/HIVE-17713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193621#comment-16193621 ] Sergey Shelukhin commented on HIVE-17713: - cc [~icocio] cc [~thejas] thoughts? > disallow adding/modifying columns in metastore for Avro/CSV/etc. SerDes > --- > > Key: HIVE-17713 > URL: https://issues.apache.org/jira/browse/HIVE-17713 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin > > This is kind of a followup from HIVE-11985 > When a SerDe uses an external schema (either embedded, or via a file link), > it's not using the information stored in metastore for the columns. So, if > the users modify table schema via add column and other such commands it won't > have effect on the serde, since it's using the external schema to figure out > what the columns are; leading to confusion and bugs. We should not allow such > a modification for SerDes not in hive.serdes.using.metastore.for.schema -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17713) disallow adding/modifying columns in metastore for Avro/CSV/etc. SerDes
[ https://issues.apache.org/jira/browse/HIVE-17713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-17713: Description: This is kind of a followup from HIVE-11985 When a SerDe uses an external schema (either embedded, or via a file link), it's not using the information stored in metastore for the columns. So, if the users modify table schema via add column and other such commands it won't have effect on the serde, since it's using the external schema to figure out what the columns are; leading to confusion and bugs. We should not allow such a modification for SerDes not in hive.serdes.using.metastore.for.schema was: This is kind of a followup from HIVE-11985 When a SerDe uses an external schema (either embedded, or via a file link), it's not using the information stored in metastore for the columns. So, if the users modify table schema via add column and other such commands it won't have effect on serde, since it's using the external schema; leading to confusion and bugs. We should not allow such a modification for SerDes not in hive.serdes.using.metastore.for.schema > disallow adding/modifying columns in metastore for Avro/CSV/etc. SerDes > --- > > Key: HIVE-17713 > URL: https://issues.apache.org/jira/browse/HIVE-17713 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin > > This is kind of a followup from HIVE-11985 > When a SerDe uses an external schema (either embedded, or via a file link), > it's not using the information stored in metastore for the columns. So, if > the users modify table schema via add column and other such commands it won't > have effect on the serde, since it's using the external schema to figure out > what the columns are; leading to confusion and bugs. We should not allow such > a modification for SerDes not in hive.serdes.using.metastore.for.schema -- This message was sent by Atlassian JIRA (v6.4.14#64029)
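[Editor's illustrative sketch] The enforcement HIVE-17713 proposes — reject column add/modify DDL for SerDes whose schema lives outside the metastore — could look like the following. This is a hypothetical sketch, not actual Hive code; the class names in the test are only example SerDe identifiers:

```java
import java.util.Set;

/**
 * Hypothetical sketch of the check proposed in HIVE-17713 (not actual
 * Hive code): reject column-altering DDL when the table's SerDe is not
 * listed in hive.serdes.using.metastore.for.schema, since such SerDes
 * derive columns from an external schema and would silently ignore the
 * metastore edit.
 */
public class SerdeSchemaGuard {

    private final Set<String> serdesUsingMetastoreForSchema;

    public SerdeSchemaGuard(Set<String> serdesUsingMetastoreForSchema) {
        this.serdesUsingMetastoreForSchema = serdesUsingMetastoreForSchema;
    }

    /** Throws if a column add/modify targets an external-schema SerDe. */
    public void checkAlterColumnsAllowed(String serdeClass) {
        if (!serdesUsingMetastoreForSchema.contains(serdeClass)) {
            throw new UnsupportedOperationException(
                "Column changes are not allowed for SerDe " + serdeClass
                + ": it derives its schema externally, so the metastore"
                + " edit would have no effect");
        }
    }
}
```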
[jira] [Updated] (HIVE-17713) disallow adding/modifying columns in metastore for Avro/CSV/etc. SerDes
[ https://issues.apache.org/jira/browse/HIVE-17713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-17713: Summary: disallow adding/modifying columns in metastore for Avro/CSV/etc. SerDes (was: disallow adding/modifying columns for Avro/CSV/etc. SerDes) > disallow adding/modifying columns in metastore for Avro/CSV/etc. SerDes > --- > > Key: HIVE-17713 > URL: https://issues.apache.org/jira/browse/HIVE-17713 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin > > This is kind of a followup from HIVE-11985 > When a SerDe uses an external schema (either embedded, or via a file link), > it's not using the information stored in metastore for the columns. So, if > the users modify table schema via add column and other such commands it won't > have effect on serde, since it's using the external schema; leading to > confusion and bugs. We should not allow such a modification for SerDes not in > hive.serdes.using.metastore.for.schema -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17701) Show historic queries only for admin users
[ https://issues.apache.org/jira/browse/HIVE-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193613#comment-16193613 ] Tao Li commented on HIVE-17701: --- By looking at the code, we use the hasAdministratorAccess method to check if the use is an admin to access the config/stack pages. But for the reported bug, I assume we are trying to filter out the queries that are not related to this user. That means a non-admin user "foo" should not see queries from other users, while an admin user "bar" should see all queries. Is this understanding correct? Please confirm. If it sounds correct, then the behavior is different from the logic by using hasAdministratorAccess. > Show historic queries only for admin users > -- > > Key: HIVE-17701 > URL: https://issues.apache.org/jira/browse/HIVE-17701 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Thejas M Nair >Assignee: Tao Li > > The HiveServer2 Web UI (HIVE-12550) shows recently completed queries. > However, a user can see the queries run by other users as well, and that is a > security/privacy concern. > Only admin users should be allowed to see queries from other users (similar > to behavior of display for configs, stack trace etc). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17710) LockManager and External tables
[ https://issues.apache.org/jira/browse/HIVE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193543#comment-16193543 ] Alan Gates commented on HIVE-17710: --- Since part of the notion of external tables is that data can be (often is?) written from outside Hive and when a drop table is done the data isn't removed, I'm not even sure what locking means for an external table. At most it would mean locking the table structure, but not the data. It seems reasonable to say we don't lock them. > LockManager and External tables > --- > > Key: HIVE-17710 > URL: https://issues.apache.org/jira/browse/HIVE-17710 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > should the LM take locks on External tables? Out of the box Acid LM is being > conservative which can cause throughput issues. > A better strategy may be to exclude External tables but enable explicit "lock > table/partition " command (only on external tables?). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
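[Editor's illustrative sketch] The policy Alan describes — take no automatic data lock on external tables because Hive does not own their data, while keeping the conservative default for managed tables — reduces to a small decision rule. A hypothetical sketch (not the actual Acid lock manager):

```java
/**
 * Sketch of the lock policy discussed above (hypothetical, not the
 * actual Acid lock manager): automatic lock acquisition skips external
 * tables, since their data can change outside Hive and a data lock is
 * meaningless; managed tables keep the conservative shared-read default.
 */
public class LockPolicy {

    enum TableType { MANAGED, EXTERNAL }
    enum LockType { NONE, SHARED_READ }

    /** Decides the automatic lock taken for a read of the given table. */
    static LockType lockForRead(TableType type) {
        return type == TableType.EXTERNAL ? LockType.NONE
                                          : LockType.SHARED_READ;
    }
}
```

An explicit "lock table/partition" command, as the issue suggests, would then be the only way to lock an external table.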
[jira] [Assigned] (HIVE-17711) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-17711: -- > Update committer list > - > > Key: HIVE-17711 > URL: https://issues.apache.org/jira/browse/HIVE-17711 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > > Update hive committer list to include: > Name: Vineet Garg > Apache ID: vgarg > Organization: Hortonworks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17711) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193513#comment-16193513 ] Jesus Camacho Rodriguez commented on HIVE-17711: +1 > Update committer list > - > > Key: HIVE-17711 > URL: https://issues.apache.org/jira/browse/HIVE-17711 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17711.patch > > > Update hive committer list to include: > Name: Vineet Garg > Apache ID: vgarg > Organization: Hortonworks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17711) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193507#comment-16193507 ] Vineet Garg commented on HIVE-17711: [~jcamachorodriguez] [~ashutoshc] can you take a look? :) > Update committer list > - > > Key: HIVE-17711 > URL: https://issues.apache.org/jira/browse/HIVE-17711 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17711.patch > > > Update hive committer list to include: > Name: Vineet Garg > Apache ID: vgarg > Organization: Hortonworks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17711) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-17711: --- Attachment: HIVE-17711.patch > Update committer list > - > > Key: HIVE-17711 > URL: https://issues.apache.org/jira/browse/HIVE-17711 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-17711.patch > > > Update hive committer list to include: > Name: Vineet Garg > Apache ID: vgarg > Organization: Hortonworks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17692) Block CONCATENATE on Acid tables
[ https://issues.apache.org/jira/browse/HIVE-17692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-17692: - Assignee: Steve Yeom (was: Eugene Koifman) > Block CONCATENATE on Acid tables > > > Key: HIVE-17692 > URL: https://issues.apache.org/jira/browse/HIVE-17692 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman >Assignee: Steve Yeom >Priority: Critical > > See _DDLSemanticAnalzyer.analyzeAlterTablePartMergeFiles(ASTNode ast, > String tableName, HashMappartSpec)_ > This was fine before due to > {noformat} > // throw a HiveException if the table/partition is bucketized > if (bucketCols != null && bucketCols.size() > 0) { > throw new > SemanticException(ErrorMsg.CONCATENATE_UNSUPPORTED_TABLE_BUCKETED.getMsg()); > } > {noformat} > but now that we support unbucketed acid tables -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17692) Block CONCATENATE on Acid tables
[ https://issues.apache.org/jira/browse/HIVE-17692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-17692: -- Target Version/s: 3.0.0 > Block CONCATENATE on Acid tables > > > Key: HIVE-17692 > URL: https://issues.apache.org/jira/browse/HIVE-17692 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman >Assignee: Steve Yeom >Priority: Critical > > See _DDLSemanticAnalzyer.analyzeAlterTablePartMergeFiles(ASTNode ast, > String tableName, HashMappartSpec)_ > This was fine before due to > {noformat} > // throw a HiveException if the table/partition is bucketized > if (bucketCols != null && bucketCols.size() > 0) { > throw new > SemanticException(ErrorMsg.CONCATENATE_UNSUPPORTED_TABLE_BUCKETED.getMsg()); > } > {noformat} > but now that we support unbucketed acid tables -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17710) LockManager and External tables
[ https://issues.apache.org/jira/browse/HIVE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-17710: - > LockManager and External tables > --- > > Key: HIVE-17710 > URL: https://issues.apache.org/jira/browse/HIVE-17710 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > should the LM take locks on External tables? Out of the box Acid LM is being > conservative which can cause throughput issues. > A better strategy may be to exclude External tables but enable explicit "lock > table/partition " command (only on external tables?). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17691) Misc Nits
[ https://issues.apache.org/jira/browse/HIVE-17691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-17691: -- Description:
# DDLSemanticAnalyzer.alterTableOutput is unused
# DDLTask.generateAddMmTasks(Table) - stmtId should probably come from the TransactionManager
# DDLTask.createTable(Hive db, CreateTableDesc crtTbl) has _Long mmWriteId = crtTbl.getInitialMmWriteId();_ - the logic is unclear; this ID is only set in one place
# FileSinkOperator has multiple places that look like _conf.getWriteType() == AcidUtils.Operation.NOT_ACID || conf.isMmTable()_ - what is the writeType for MM tables? It seems that Wei opted for "work.getLoadTableWork().getWriteType() != AcidUtils.Operation.NOT_ACID && !tbd.isMmTable()" to mean MM, e.g. the MoveTask.handleStaticParts() call to Hive.loadPartition()
# HiveConf.HIVE_TXN_OPERATIONAL_PROPERTIES - the doc/explanation there is obsolete
# The Compactor Initiator likely doesn't work for MM tables. It's triggered by entries in TXN_COMPONENTS/COMPLETED_TXN_COMPONENTS. MM tables don't write to either because DbTxnManager.acquireLocks() does _compBuilder.setIsAcid(AcidUtils.isFullAcidTable(t));_ i.e. it treats MM as non-acid tables
# In general, integration with full Acid seems confused wrt MM and seems to treat MM as a special table type rather than a subtype of Acid table (mostly, but not always)
# LoadSemanticAnalyzer.analyzeInternal(ASTNode) sets statementId to 0 rather than getting it from the TM
# ImportCommitTask - doesn't currently do anything. It used to commit the mmID. Need to verify we properly commit the txn in the Driver
# As far as I can tell, all the mm_*.q tests run on TestCliDriver, which means MR. This doesn't exercise some code specifically for dealing with writes from Union All queries (CTAS, Insert into). On MR this requires "hive.optimize.union.remove=true" (false by default)
was:
# DDLSemanticAnalyzer.alterTableOutput is unused
# DDLTask.generateAddMmTasks(Table) - stmtId should probably come from the TransactionManager
# DDLTask.createTable(Hive db, CreateTableDesc crtTbl) has _Long mmWriteId = crtTbl.getInitialMmWriteId();_ - the logic is unclear; this ID is only set in one place
# FileSinkOperator has multiple places that look like _conf.getWriteType() == AcidUtils.Operation.NOT_ACID || conf.isMmTable()_ - what is the writeType for MM tables? It seems that Wei opted for "work.getLoadTableWork().getWriteType() != AcidUtils.Operation.NOT_ACID && !tbd.isMmTable()" to mean MM, e.g. the MoveTask.handleStaticParts() call to Hive.loadPartition()
# HiveConf.HIVE_TXN_OPERATIONAL_PROPERTIES - the doc/explanation there is obsolete
# The Compactor Initiator likely doesn't work for MM tables. It's triggered by entries in TXN_COMPONENTS/COMPLETED_TXN_COMPONENTS. MM tables don't write to either because DbTxnManager.acquireLocks() does _compBuilder.setIsAcid(AcidUtils.isFullAcidTable(t));_ i.e. it treats MM as non-acid tables
# In general, integration with full Acid seems confused wrt MM and seems to treat MM as a special table type rather than a subtype of Acid table (mostly, but not always)
# LoadSemanticAnalyzer.analyzeInternal(ASTNode) sets statementId to 0 rather than getting it from the TM
# ImportCommitTask - doesn't currently do anything. It used to commit the mmID. Need to verify we properly commit the txn in the Driver
> Misc Nits > - > > Key: HIVE-17691 > URL: https://issues.apache.org/jira/browse/HIVE-17691 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman > > # DDLSemanticAnalyzer.alterTableOutput is unused > # DDLTask.generateAddMmTasks(Table) - stmtId should probably come from > TransactionManager > # DDLTask.createTable(Hive db, CreateTableDesc crtTbl) has _Long mmWriteId = > crtTbl.getInitialMmWriteId();_ logic is unclear..
this ID is only set in one > place.. > # FileSinkOperator has multiple places that look like _conf.getWriteType() == > AcidUtils.Operation.NOT_ACID || conf.isMmTable()_ - what is the writeType for > MM tables? Seems that Wei opted for "work.getLoadTableWork().getWriteType() > != AcidUtils.Operation.NOT_ACID && !tbd.isMmTable()" to mean MM, e.g. > MoveTask.handleStaticParts() call to Hive.loadPartition() > # HiveConf.HIVE_TXN_OPERATIONAL_PROPERTIES - the doc/explanation there is > obsolete > # Compactor Initiator likely doesn't work for MM tables. It's triggered by > into in TXN_COMPONENTS/COMPLETED_TXN_COMPONENTS. MM tables don't write to > either because DbTxnManager.acquireLocks() does > _compBuilder.setIsAcid(AcidUtils.isFullAcidTable(t));_ i.e. it treats MM as > non-acid tables > # In general integration with full Acid seems confused wrt to MM and seems to > treat MM as special table type rather than subtype of Acid table. (mostly, > but not always). > # LoadSemanticAnalyzer.analyzeInternal(ASTNode) sets
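The FileSinkOperator nit above (the repeated _writeType == NOT_ACID || isMmTable()_ checks, and their negated cousins) could be consolidated behind named predicates. A minimal sketch with simplified stand-in types; Operation and SinkDesc here are local mocks, not Hive's real AcidUtils.Operation or FileSinkDesc:

```java
// Simplified stand-ins for Hive's AcidUtils.Operation and FileSinkDesc;
// the real classes live in org.apache.hadoop.hive.ql.* and differ in detail.
class MmWriteCheck {
    enum Operation { NOT_ACID, INSERT, UPDATE, DELETE }

    static class SinkDesc {
        final Operation writeType;
        final boolean mmTable;
        SinkDesc(Operation writeType, boolean mmTable) {
            this.writeType = writeType;
            this.mmTable = mmTable;
        }
    }

    // One named predicate instead of scattered boolean expressions:
    // an MM (insert-only) write is flagged by the table type itself.
    static boolean isMmWrite(SinkDesc conf) {
        return conf.mmTable;
    }

    // A full-ACID write is a transactional write that is not MM; this is
    // the "writeType != NOT_ACID && !isMmTable()" pattern from the ticket.
    static boolean isFullAcidWrite(SinkDesc conf) {
        return conf.writeType != Operation.NOT_ACID && !conf.mmTable;
    }
}
```

Centralizing the check would also make the "MM as a subtype of Acid" question (item on integration confusion) explicit in one place instead of in each call site.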
[jira] [Updated] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-17566: Attachment: HIVE-17566.06.patch The same patch to trigger QA again > Create schema required for workload management. > --- > > Key: HIVE-17566 > URL: https://issues.apache.org/jira/browse/HIVE-17566 > Project: Hive > Issue Type: Sub-task >Reporter: Harish Jaiprakash >Assignee: Sergey Shelukhin > Attachments: HIVE-17566.01.patch, HIVE-17566.02.patch, > HIVE-17566.03.patch, HIVE-17566.04.patch, HIVE-17566.05.patch, > HIVE-17566.06.patch > > > Schema + model changes required for workload management. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-17566: --- Assignee: Harish Jaiprakash (was: Sergey Shelukhin) > Create schema required for workload management. > --- > > Key: HIVE-17566 > URL: https://issues.apache.org/jira/browse/HIVE-17566 > Project: Hive > Issue Type: Sub-task >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash > Attachments: HIVE-17566.01.patch, HIVE-17566.02.patch, > HIVE-17566.03.patch, HIVE-17566.04.patch, HIVE-17566.05.patch, > HIVE-17566.06.patch > > > Schema + model changes required for workload management. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17566) Create schema required for workload management.
[ https://issues.apache.org/jira/browse/HIVE-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-17566: --- Assignee: Sergey Shelukhin (was: Harish Jaiprakash) > Create schema required for workload management. > --- > > Key: HIVE-17566 > URL: https://issues.apache.org/jira/browse/HIVE-17566 > Project: Hive > Issue Type: Sub-task >Reporter: Harish Jaiprakash >Assignee: Sergey Shelukhin > Attachments: HIVE-17566.01.patch, HIVE-17566.02.patch, > HIVE-17566.03.patch, HIVE-17566.04.patch, HIVE-17566.05.patch, > HIVE-17566.06.patch > > > Schema + model changes required for workload management. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17691) Misc Nits
[ https://issues.apache.org/jira/browse/HIVE-17691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-17691: -- Description:
# DDLSemanticAnalyzer.alterTableOutput is unused
# DDLTask.generateAddMmTasks(Table) - stmtId should probably come from the TransactionManager
# DDLTask.createTable(Hive db, CreateTableDesc crtTbl) has _Long mmWriteId = crtTbl.getInitialMmWriteId();_ - the logic is unclear; this ID is only set in one place
# FileSinkOperator has multiple places that look like _conf.getWriteType() == AcidUtils.Operation.NOT_ACID || conf.isMmTable()_ - what is the writeType for MM tables? It seems that Wei opted for "work.getLoadTableWork().getWriteType() != AcidUtils.Operation.NOT_ACID && !tbd.isMmTable()" to mean MM, e.g. the MoveTask.handleStaticParts() call to Hive.loadPartition()
# HiveConf.HIVE_TXN_OPERATIONAL_PROPERTIES - the doc/explanation there is obsolete
# The Compactor Initiator likely doesn't work for MM tables. It's triggered by entries in TXN_COMPONENTS/COMPLETED_TXN_COMPONENTS. MM tables don't write to either because DbTxnManager.acquireLocks() does _compBuilder.setIsAcid(AcidUtils.isFullAcidTable(t));_ i.e. it treats MM as non-acid tables
# In general, integration with full Acid seems confused wrt MM and seems to treat MM as a special table type rather than a subtype of Acid table (mostly, but not always)
# LoadSemanticAnalyzer.analyzeInternal(ASTNode) sets statementId to 0 rather than getting it from the TM
# ImportCommitTask - doesn't currently do anything. It used to commit the mmID. Need to verify we properly commit the txn in the Driver
was:
# DDLSemanticAnalyzer.alterTableOutput is unused
# DDLTask.generateAddMmTasks(Table) - stmtId should probably come from the TransactionManager
# DDLTask.createTable(Hive db, CreateTableDesc crtTbl) has _Long mmWriteId = crtTbl.getInitialMmWriteId();_ - the logic is unclear; this ID is only set in one place
# FileSinkOperator has multiple places that look like _conf.getWriteType() == AcidUtils.Operation.NOT_ACID || conf.isMmTable()_ - what is the writeType for MM tables? It seems that Wei opted for "work.getLoadTableWork().getWriteType() != AcidUtils.Operation.NOT_ACID && !tbd.isMmTable()" to mean MM, e.g. the MoveTask.handleStaticParts() call to Hive.loadPartition()
# HiveConf.HIVE_TXN_OPERATIONAL_PROPERTIES - the doc/explanation there is obsolete
# The Compactor Initiator likely doesn't work for MM tables. It's triggered by entries in TXN_COMPONENTS/COMPLETED_TXN_COMPONENTS. MM tables don't write to either because DbTxnManager.acquireLocks() does _compBuilder.setIsAcid(AcidUtils.isFullAcidTable(t));_ i.e. it treats MM as non-acid tables
# In general, integration with full Acid seems confused wrt MM and seems to treat MM as a special table type rather than a subtype of Acid table (mostly, but not always)
# LoadSemanticAnalyzer.analyzeInternal(ASTNode) sets statementId to 0 rather than getting it from the TM
#
> Misc Nits > - > > Key: HIVE-17691 > URL: https://issues.apache.org/jira/browse/HIVE-17691 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman > > # DDLSemanticAnalyzer.alterTableOutput is unused > # DDLTask.generateAddMmTasks(Table) - stmtId should probably come from > TransactionManager > # DDLTask.createTable(Hive db, CreateTableDesc crtTbl) has _Long mmWriteId = > crtTbl.getInitialMmWriteId();_ logic is unclear.. this ID is only set in one > place.. > # FileSinkOperator has multiple places that look like _conf.getWriteType() == > AcidUtils.Operation.NOT_ACID || conf.isMmTable()_ - what is the writeType for > MM tables? Seems that Wei opted for "work.getLoadTableWork().getWriteType() > != AcidUtils.Operation.NOT_ACID && !tbd.isMmTable()" to mean MM, e.g.
> MoveTask.handleStaticParts() call to Hive.loadPartition() > # HiveConf.HIVE_TXN_OPERATIONAL_PROPERTIES - the doc/explanation there is > obsolete > # Compactor Initiator likely doesn't work for MM tables. It's triggered by > into in TXN_COMPONENTS/COMPLETED_TXN_COMPONENTS. MM tables don't write to > either because DbTxnManager.acquireLocks() does > _compBuilder.setIsAcid(AcidUtils.isFullAcidTable(t));_ i.e. it treats MM as > non-acid tables > # In general integration with full Acid seems confused wrt to MM and seems to > treat MM as special table type rather than subtype of Acid table. (mostly, > but not always). > # LoadSemanticAnalyzer.analyzeInternal(ASTNode) sets statementId to 0 rather > than from TM > # ImportCommitTask - doesn't currently do anything. It used to commit mmID. > Need to verify we properly commit the txn in the Driver -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17706) Add a possibility to run the BeeLine tests on the default database
[ https://issues.apache.org/jira/browse/HIVE-17706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193370#comment-16193370 ] Hive QA commented on HIVE-17706:
Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890549/HIVE-17706.2.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 11200 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=101)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=240)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7142/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7142/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7142/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12890549 - PreCommit-HIVE-Build > Add a possibility to run the BeeLine tests on the default database > -- > > Key: HIVE-17706 > URL: https://issues.apache.org/jira/browse/HIVE-17706 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Affects Versions: 3.0.0 >Reporter: Peter Vary >Assignee: Peter Vary > Attachments: HIVE-17706.2.patch, HIVE-17706.patch > > > Currently it is possible to run the BeeLine tests sequentially but it still > relies on cleaning up after the tests by cleaning up the database. Some of > the tests could be run only against the default database. We need a cleanup > mechanism between the tests -- This message was sent by Atlassian JIRA (v6.4.14#64029)
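One possible shape for the cleanup mechanism the ticket asks for, sketched as a hypothetical helper that is not part of the actual patch: after each test that ran against the default database, build DROP statements for whatever tables it left behind. The caller would execute these over the test's JDBC connection; only the statement-building part is shown here.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for per-test cleanup of the shared default database.
// Builds the HiveQL cleanup statements; running them is left to the caller.
class DefaultDbCleanup {
    static List<String> dropStatements(List<String> leftoverTables) {
        List<String> stmts = new ArrayList<>();
        for (String t : leftoverTables) {
            // IF EXISTS keeps cleanup idempotent when a test already dropped
            // its own tables; PURGE skips the trash for faster test runs.
            stmts.add("DROP TABLE IF EXISTS `default`.`" + t + "` PURGE");
        }
        return stmts;
    }
}
```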
[jira] [Assigned] (HIVE-17444) TestMiniLlapCliDriver.testCliDriver[llap_smb]
[ https://issues.apache.org/jira/browse/HIVE-17444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reassigned HIVE-17444: - Assignee: Deepak Jaiswal > TestMiniLlapCliDriver.testCliDriver[llap_smb] > - > > Key: HIVE-17444 > URL: https://issues.apache.org/jira/browse/HIVE-17444 > Project: Hive > Issue Type: Sub-task >Reporter: Vihang Karajgaonkar >Assignee: Deepak Jaiswal > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17705) HIVE-17562 is returning incorrect results
[ https://issues.apache.org/jira/browse/HIVE-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-17705: - Resolution: Fixed Fix Version/s: 2.3.1 2.2.1 2.4.0 Status: Resolved (was: Patch Available) Committed to branch-2, branch-2.2, branch-2.3. Thanks for the review! > HIVE-17562 is returning incorrect results > - > > Key: HIVE-17705 > URL: https://issues.apache.org/jira/browse/HIVE-17705 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.4.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Fix For: 2.4.0, 2.2.1, 2.3.1 > > Attachments: HIVE-17705-branch-2.patch > > > HIVE-17562 is generating incorrect results intermittently. Also derived > tables with default file format set to ORC is resulting in AIOB exception. > 1) Intermittent issue happens based on the order of selection of bucket file > for ETL split strategy. Flipping the covered boolean flag looks error prone. > The test case always passed when bucket_0 is analyzed first. When > bucket_1 is analyzed first duplicate rows were generated. > 2) Some internal test revealed a test failure with ArrayIndexOutOfBounds > exception which happens for non-bucketed table. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HIVE-17562) ACID 1.0 + ETL strategy should treat empty compacted files as uncovered deltas
[ https://issues.apache.org/jira/browse/HIVE-17562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183167#comment-16183167 ] Prasanth Jayachandran edited comment on HIVE-17562 at 10/5/17 5:53 PM: --- Committed to branch-2, branch-2.2, branch-2.3. Thanks for the review! was (Author: prasanth_j): Committed to branch-2. Thanks for the review! > ACID 1.0 + ETL strategy should treat empty compacted files as uncovered deltas > -- > > Key: HIVE-17562 > URL: https://issues.apache.org/jira/browse/HIVE-17562 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.4.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Fix For: 2.4.0, 2.2.1, 2.3.1 > > Attachments: HIVE-17562.1.branch-2.patch, HIVE-17562-branch-2.patch, > HIVE-17562-branch-2.patch > > > In branch-2, with ACID 1.0, following sequence will result in incorrect > results > 1) Set split strategy to ETL > 2) Insert some rows > 3) Delete all rows > 4) Alter table compact MAJOR > 5) Insert some rows > 6) Select * query will not return any rows that is written at last (step 5) > The reason for that, compaction essentially voids the first insert in step 2. > Now when ETL split strategy is chosen, there will not be any stripes in the > base files. So no split gets generated and any subsequent deltas gets ignored. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17562) ACID 1.0 + ETL strategy should treat empty compacted files as uncovered deltas
[ https://issues.apache.org/jira/browse/HIVE-17562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-17562: - Fix Version/s: 2.3.1 2.2.1 > ACID 1.0 + ETL strategy should treat empty compacted files as uncovered deltas > -- > > Key: HIVE-17562 > URL: https://issues.apache.org/jira/browse/HIVE-17562 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.4.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Fix For: 2.4.0, 2.2.1, 2.3.1 > > Attachments: HIVE-17562.1.branch-2.patch, HIVE-17562-branch-2.patch, > HIVE-17562-branch-2.patch > > > In branch-2, with ACID 1.0, following sequence will result in incorrect > results > 1) Set split strategy to ETL > 2) Insert some rows > 3) Delete all rows > 4) Alter table compact MAJOR > 5) Insert some rows > 6) Select * query will not return any rows that is written at last (step 5) > The reason for that, compaction essentially voids the first insert in step 2. > Now when ETL split strategy is chosen, there will not be any stripes in the > base files. So no split gets generated and any subsequent deltas gets ignored. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
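The failure sequence described in HIVE-17562 can be illustrated with a toy model. This is not Hive's actual ORC split-generation code, just a sketch of the reported behavior: under the ETL strategy, splits are generated per base-file stripe, so a compacted base with zero stripes (everything deleted) yields no splits and later insert deltas are silently skipped unless the empty base is treated as leaving its deltas uncovered.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the HIVE-17562 description -- not Hive's real split code.
class EtlSplitModel {
    static List<String> splits(int baseStripes, List<String> deltas,
                               boolean treatEmptyBaseAsUncovered) {
        List<String> out = new ArrayList<>();
        // ETL strategy: one split per base stripe; deltas ride along.
        for (int i = 0; i < baseStripes; i++) {
            out.add("split(stripe " + i + " + " + deltas + ")");
        }
        if (baseStripes == 0 && treatEmptyBaseAsUncovered) {
            // The fix: with no stripes to attach to, schedule the deltas
            // on their own ("uncovered"), otherwise their rows vanish.
            for (String d : deltas) {
                out.add("split(" + d + ")");
            }
        }
        return out;
    }
}
```

With the flag off (the pre-fix behavior), an empty base plus a fresh delta produces no splits at all, matching step 6 of the repro where the last insert's rows never come back.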
[jira] [Updated] (HIVE-17705) HIVE-17562 is returning incorrect results
[ https://issues.apache.org/jira/browse/HIVE-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-17705: - Affects Version/s: (was: 3.0.0) > HIVE-17562 is returning incorrect results > - > > Key: HIVE-17705 > URL: https://issues.apache.org/jira/browse/HIVE-17705 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.4.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Attachments: HIVE-17705-branch-2.patch > > > HIVE-17562 is generating incorrect results intermittently. Also derived > tables with default file format set to ORC is resulting in AIOB exception. > 1) Intermittent issue happens based on the order of selection of bucket file > for ETL split strategy. Flipping the covered boolean flag looks error prone. > The test case always passed when bucket_0 is analyzed first. When > bucket_1 is analyzed first duplicate rows were generated. > 2) Some internal test revealed a test failure with ArrayIndexOutOfBounds > exception which happens for non-bucketed table. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17705) HIVE-17562 is returning incorrect results
[ https://issues.apache.org/jira/browse/HIVE-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-17705: - Target Version/s: 2.4.0 (was: 3.0.0, 2.4.0) > HIVE-17562 is returning incorrect results > - > > Key: HIVE-17705 > URL: https://issues.apache.org/jira/browse/HIVE-17705 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.4.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Attachments: HIVE-17705-branch-2.patch > > > HIVE-17562 is generating incorrect results intermittently. Also derived > tables with default file format set to ORC is resulting in AIOB exception. > 1) Intermittent issue happens based on the order of selection of bucket file > for ETL split strategy. Flipping the covered boolean flag looks error prone. > The test case always passed when bucket_0 is analyzed first. When > bucket_1 is analyzed first duplicate rows were generated. > 2) Some internal test revealed a test failure with ArrayIndexOutOfBounds > exception which happens for non-bucketed table. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17705) HIVE-17562 is returning incorrect results
[ https://issues.apache.org/jira/browse/HIVE-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193241#comment-16193241 ] Eugene Koifman commented on HIVE-17705: --- LGTM +1 > HIVE-17562 is returning incorrect results > - > > Key: HIVE-17705 > URL: https://issues.apache.org/jira/browse/HIVE-17705 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0, 2.4.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Blocker > Attachments: HIVE-17705-branch-2.patch > > > HIVE-17562 is generating incorrect results intermittently. Also derived > tables with default file format set to ORC is resulting in AIOB exception. > 1) Intermittent issue happens based on the order of selection of bucket file > for ETL split strategy. Flipping the covered boolean flag looks error prone. > The test case always passed when bucket_0 is analyzed first. When > bucket_1 is analyzed first duplicate rows were generated. > 2) Some internal test revealed a test failure with ArrayIndexOutOfBounds > exception which happens for non-bucketed table. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17699) Skip calling authValidator.checkPrivileges when there is nothing to get authorized
[ https://issues.apache.org/jira/browse/HIVE-17699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193196#comment-16193196 ] Aihua Xu commented on HIVE-17699: - [~lina...@cloudera.com] Having looked at those classes, we should still call authValidator.checkPrivileges for metadata queries. > Skip calling authValidator.checkPrivileges when there is nothing to get > authorized > -- > > Key: HIVE-17699 > URL: https://issues.apache.org/jira/browse/HIVE-17699 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.1.1 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-17699.1.patch > > > For a command like "drop database if exists db1;" where the database db1 > doesn't exist, there will be nothing to get authorized. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
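The guard suggested by the issue could look like the sketch below (illustrative names and list-of-string entities, not Hive's exact ReadEntity/WriteEntity signatures). Note the comment above cautions that metadata queries may still need the checkPrivileges call even when both entity lists are empty, so a real patch would have to carve out that case.

```java
import java.util.List;

// Illustrative sketch of "skip authorization when there is nothing to
// authorize": a DROP DATABASE IF EXISTS on a missing database resolves
// to no input or output entities, so checkPrivileges can be skipped.
class AuthSkip {
    static boolean needsAuthorization(List<String> inputs, List<String> outputs) {
        return !(inputs.isEmpty() && outputs.isEmpty());
    }
}
```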
[jira] [Commented] (HIVE-17078) Add more logs to MapredLocalTask
[ https://issues.apache.org/jira/browse/HIVE-17078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193202#comment-16193202 ] Hive QA commented on HIVE-17078:
Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890534/HIVE-17078.7.patch
{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 11200 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=232)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=232)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=101)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] (batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query23] (batchId=240)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7141/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7141/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7141/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12890534 - PreCommit-HIVE-Build > Add more logs to MapredLocalTask > > > Key: HIVE-17078 > URL: https://issues.apache.org/jira/browse/HIVE-17078 > Project: Hive > Issue Type: Improvement >Reporter: Yibing Shi >Assignee: Barna Zsombor Klara >Priority: Minor > Attachments: HIVE-17078.1.patch, HIVE-17078.2.patch, > HIVE-17078.3.patch, HIVE-17078.4.PATCH, HIVE-17078.5.PATCH, > HIVE-17078.6.patch, HIVE-17078.7.patch > > > By default, {{MapredLocalTask}} is executed in a child process of Hive, in > case the local task uses too much resources that may affect Hive. Currently, > the stdout and stderr information of the child process is printed in Hive's > stdout/stderr log, which doesn't have a timestamp information, and is > separated from Hive service logs. This makes it hard to troubleshoot problems > in MapredLocalTasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17700) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16193184#comment-16193184 ] Ashutosh Chauhan commented on HIVE-17700: - +1 > Update committer list > - > > Key: HIVE-17700 > URL: https://issues.apache.org/jira/browse/HIVE-17700 > Project: Hive > Issue Type: Task >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Trivial > Attachments: HIVE-17700.patch > > > Please update committer list for Sushanth to remove company name (and move to > emeritus list) -- This message was sent by Atlassian JIRA (v6.4.14#64029)