[jira] [Updated] (HIVE-18477) White spaces characters in view creation via JDBC cause problems with show create.
[ https://issues.apache.org/jira/browse/HIVE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Venu Yanamandra updated HIVE-18477:
---
Description:
When a view is created via JDBC, stray whitespace characters are included in the stored view definition. The same view created from beeline does not show this behavior. To reproduce the JDBC issue, consider the table setup below.
{code:java}
create table t0 (id int, name string, address string, dob timestamp);
create table t1 (id int, activity string, timedone timestamp);
create table t2 (id int, history string);
{code}
And the Java code to create the view:
{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ViewCreationThroughJDBC {
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        String hostName = null;
        try {
            InetAddress ipAddress = InetAddress.getLocalHost();
            hostName = ipAddress.getHostName();
            System.out.println("Current IP: <" + ipAddress + ">");
            System.out.println("Hostname is: <" + hostName + ">");
        } catch (UnknownHostException e) {
            e.printStackTrace();
        }
        String sql = null;
        try {
            sql = new String(Files.readAllBytes(Paths.get("view_create.txt")), StandardCharsets.UTF_8);
        } catch (Exception e) {
            e.printStackTrace();
        }
        Connection conn = DriverManager.getConnection("jdbc:hive2://" + hostName + ":1/default;", "username", "password");
        Statement stmt = conn.createStatement();
        System.out.println("Running: " + sql);
        //ResultSet res = stmt.executeQuery(sql);
        stmt.execute(sql);
        try {
            //res.close();
            stmt.close();
            conn.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{code}
And the contents of view_create.txt referenced in the above code:
{code:java}
create view v0 as
select
-- get the id from t0 table
a.id
-- Do not get
--, any other
--, details
-- lets gather name address and dob
, a.name
, a.address
, a.dob
, b.activity
, b.timedone
-- The column name should be timezone and not timedone
, c.history
from t0 a, t1 b, t2 c
where a.id = b.id
and c.id = a.id
{code}
When this code is run and 'show create table v0' is executed from beeline, the view text comes back as empty lines:
{code:java}
+----------------------------+
| createtab_stmt             |
+----------------------------+
| CREATE VIEW `v0` AS select |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
|                            |
+----------------------------+
22 rows selected (0.188 seconds)
{code}

> White spaces characters in view creation via JDBC cause problems with show
> create.
> --
>
> Key: HIVE-18477
> URL: https://issues.apache.org/jira/browse/HIVE-18477
>
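A minimal, self-contained illustration (hypothetical demo code, not Hive's actual parser) of why the embedded `--` comments make the whitespace handling matter here: if the newlines in view_create.txt are collapsed to spaces anywhere along the way, the first `--` comments out the remainder of the statement, which would leave a view body much like the mostly empty one shown above.

```java
// Hypothetical demo (not Hive code): collapsing a multi-line SQL statement
// onto one line lets a "--" comment swallow everything after it.
public class CommentSwallowDemo {

    // Strip "--" comments the way a SQL lexer does: from the marker to the
    // end of each line.
    public static String stripComments(String sql) {
        StringBuilder out = new StringBuilder();
        for (String line : sql.split("\n")) {
            int i = line.indexOf("--");
            out.append(i >= 0 ? line.substring(0, i) : line).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        String multiLine = "select\n-- id column\na.id\n, a.name\nfrom t0 a";
        // Newlines intact: only the comment line is dropped.
        System.out.println(stripComments(multiLine));
        // Newlines collapsed to spaces: the first "--" comments out the rest,
        // leaving just "select".
        System.out.println(stripComments(multiLine.replace('\n', ' ')));
    }
}
```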
[jira] [Commented] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer
[ https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16330208#comment-16330208 ] Hive QA commented on HIVE-18460:

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| 0 | findbugs | 0m 0s | Findbugs executables are not available. |
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| 0 | mvndep | 1m 38s | Maven dependency ordering for branch |
| +1 | mvninstall | 5m 44s | master passed |
| +1 | compile | 1m 36s | master passed |
| +1 | checkstyle | 0m 45s | master passed |
| +1 | javadoc | 1m 13s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 22s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 55s | the patch passed |
| +1 | compile | 1m 40s | the patch passed |
| +1 | javac | 1m 40s | the patch passed |
| +1 | checkstyle | 0m 46s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | javadoc | 1m 13s | the patch passed |
|| Other Tests ||
| +1 | asflicense | 0m 12s | The patch does not generate ASF License warnings. |
| | | 17m 28s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 80e6f7b |
| Default Java | 1.8.0_111 |
| modules | C: ql itests/hive-unit U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8670/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 0.13
> Reporter: Eugene Koifman
> Assignee: Eugene Koifman
> Priority: Critical
> Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch, HIVE-18460.03.patch, HIVE-18460.04.patch
>
>
> CompactorMap.getWrite()/getDeleteEventWriter() both do AcidOutputFormat.Options.tableProperties() but OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts = OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores the tableProperties value. It should do
> {noformat}
> final OrcFile.WriterOptions opts = OrcFile.writerOptions(options.getTableProperties(), options.getConfiguration());
> {noformat}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
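The one-line fix above hinges on giving table properties precedence over the job configuration. A small sketch of that precedence pattern using plain java.util.Properties defaults-chaining (illustrative only; OrcFile.WriterOptions does its own merging, and the orc.* keys below are just example settings):

```java
import java.util.Properties;

// Illustration of the precedence the patch is after: table-level properties
// win, and the job configuration is only the fallback.
public class TablePropsPrecedence {

    // Look up a key in tableProps first, falling back to conf.
    public static String effective(Properties tableProps, Properties conf, String key) {
        Properties chained = new Properties(conf); // conf supplies defaults
        chained.putAll(tableProps);                // table properties override
        return chained.getProperty(key);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("orc.compress", "SNAPPY");
        conf.setProperty("orc.stripe.size", "67108864");

        Properties tableProps = new Properties();
        tableProps.setProperty("orc.compress", "ZLIB");

        System.out.println(effective(tableProps, conf, "orc.compress"));    // ZLIB (table wins)
        System.out.println(effective(tableProps, conf, "orc.stripe.size")); // 67108864 (falls back to conf)
    }
}
```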
[jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16330190#comment-16330190 ] liyunzhang commented on HIVE-18301:

[~lirui]:
{quote}My understanding is if the HadoopRDD is cached, the records are not produced by record reader and IOContext is not populated. Therefore the information in IOContext will be unavailable, e.g. the input path. This may cause problem because some operators need to take certain actions when input file changes – {{Operator::cleanUpInputFileChanged}}. So basically my point is we have to figure out the scenarios where IOContext is necessary. Then decide whether we should disable caching in such cases.{quote}
Yes, if the HadoopRDD is cached, it will not call
{code:java}
CombineHiveRecordReader#init
  -> HiveContextAwareRecordReader.initIOContext
  -> IOContext.setInputPath
{code}
It will instead use the cached result to call MapOperator#process(Writable value), so an NPE is thrown because at that point IOContext.getInputPath returns null. For now I have modified the code of MapOperator#process(Writable value) as in [this commit|https://github.com/kellyzly/hive/commit/e81b7df572e2c543095f55dd160b428c355da2fb]. My questions:
1. When {{context.getIoCxt().getInputPath() == null}}, I think the record came from the cache, not from CombineHiveRecordReader. We need not call MapOperator#cleanUpInputFileChanged, because it is only designed for the cases where one mapper scans multiple files (like CombineFileInputFormat) or multiple partitions; the input path changes in those situations and {{cleanUpInputFileChanged}} must be called to reinitialize [some variables|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java#L532], but no reinitialization is needed for a cached record. Is my understanding right? If so, is there any other way to tell whether a record is cached, other than {{context.getIoCxt().getInputPath() == null}}?
2. How do we initialize IOContext#getInputPath in the cached case? We need this variable to reinitialize MapOperator::currentCtxs in MapOperator#initializeContexts:
{code:java}
public void initializeContexts() {
  Path fpath = getExecContext().getCurrentInputPath();
  String nominalPath = getNominalPath(fpath);
  Map<Operator<? extends OperatorDesc>, MapOpCtx> contexts = opCtxMap.get(nominalPath);
  currentCtxs = contexts.values().toArray(new MapOpCtx[contexts.size()]);
}
{code}
In this code we store a MapOpCtx for every MapOperator in opCtxMap, whose key set contains the nominal (partition) paths, so for a partitioned table there will be multiple elements in opCtxMap. Currently I test on a table without partitions and can directly use opCtxMap.values().iterator().next() to initialize [context|https://github.com/kellyzly/hive/blob/HIVE-17486.4/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java#L713], and it runs successfully in yarn mode. But I guess this is not right for a partitioned table.

> Investigate to enable MapInput cache in Hive on Spark
> -
>
> Key: HIVE-18301
> URL: https://issues.apache.org/jira/browse/HIVE-18301
> Project: Hive
> Issue Type: Bug
> Reporter: liyunzhang
> Assignee: liyunzhang
> Priority: Major
>
> An IOContext problem was previously found in MapTran when the Spark RDD cache was enabled (HIVE-8920), so we disabled RDD cache in MapTran at [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202].
> The problem is that IOContext does not seem to be initialized correctly in the Spark yarn client/cluster mode, causing an exception like
> {code}
> Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): java.lang.RuntimeException: Error processing row: java.lang.NullPointerException
> at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165)
> at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
> at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
> at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
> at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
> at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
> at
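For what it's worth, the guard discussed in the comment above can be sketched with a toy stand-in for MapOperator (hypothetical class and field names; Hive's real operator tracks far more state): records that arrive with a null input path are treated as cache replays and skip the input-file-changed cleanup.

```java
// Self-contained sketch (not Hive code) of the proposed null-path guard:
// only reinitialize per-file state when a non-null input path changes.
public class CachedRecordGuard {
    private String lastInputPath;
    public int cleanupCalls; // counts reinitializations, for illustration

    // inputPath == null stands for "record replayed from the RDD cache,
    // not produced by a record reader".
    public void process(String record, String inputPath) {
        if (inputPath != null && !inputPath.equals(lastInputPath)) {
            cleanUpInputFileChanged(inputPath);
        }
        // ... forward the record to child operators ...
    }

    private void cleanUpInputFileChanged(String newPath) {
        lastInputPath = newPath;
        cleanupCalls++;
    }

    public static void main(String[] args) {
        CachedRecordGuard op = new CachedRecordGuard();
        op.process("row1", "/warehouse/t0/part-0"); // new file: reinitialize
        op.process("row2", "/warehouse/t0/part-0"); // same file: no-op
        op.process("row3", null);                   // cached record: skip cleanup
        System.out.println(op.cleanupCalls);        // 1
    }
}
```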
[jira] [Commented] (HIVE-18393) Error returned when some other type is read as string from parquet tables
[ https://issues.apache.org/jira/browse/HIVE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16330182#comment-16330182 ] Hive QA commented on HIVE-18393:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12906504/HIVE-18393.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11626 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[typechangetest] (batchId=10)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.authorization.TestHS2AuthzContext.testAuthzContextContentsCmdProcessorCmd (batchId=237)
org.apache.hive.jdbc.authorization.TestHS2AuthzContext.testAuthzContextContentsDriverCmd (batchId=237)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8669/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8669/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8669/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}
This message is automatically generated.

ATTACHMENT ID: 12906504 - PreCommit-HIVE-Build

> Error returned when some other type is read as string from parquet tables
> -
>
> Key: HIVE-18393
> URL: https://issues.apache.org/jira/browse/HIVE-18393
> Project: Hive
> Issue Type: Bug
> Reporter: Janaki Lahorani
> Assignee: Janaki Lahorani
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18393.1.patch, HIVE-18393.2.patch, HIVE-18393.3.patch, HIVE-18393.4.patch, HIVE-18393.4.patch, HIVE-18393.4.patch
>
> TimeStamp, Decimal, Double, Float, BigInt, Int, SmallInt, Tinyint and Boolean when read as String, Varchar or Char should return the correct data. Now this results in an error for parquet tables.
> Test Case:
> {code}
> drop table if exists testAltCol;
> create table testAltCol
> (cId TINYINT,
> cTimeStamp TIMESTAMP,
> cDecimal DECIMAL(38,18),
> cDouble DOUBLE,
> cFloat FLOAT,
> cBigInt BIGINT,
> cInt INT,
> cSmallInt SMALLINT,
> cTinyint TINYINT,
> cBoolean BOOLEAN);
> insert into testAltCol values
> (1,
> '2017-11-07 09:02:49.9',
> 12345678901234567890.123456789012345678,
> 1.79e308,
> 3.4e38,
> 1234567890123456789,
> 1234567890,
> 12345,
> 123,
> TRUE);
> insert into testAltCol values
> (2,
> '1400-01-01 01:01:01.1',
> 1.1,
> 2.2,
> 3.3,
> 1,
> 2,
> 3,
> 4,
> FALSE);
> insert into testAltCol values
> (3,
> '1400-01-01 01:01:01.1',
> 10.1,
> 20.2,
> 30.3,
> 1234567890123456789,
> 1234567890,
> 12345,
> 123,
> TRUE);
> select cId, cTimeStamp from testAltCol order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltCol order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltCol order by cId;
> select cId, cBoolean from testAltCol order by cId;
> drop table if exists testAltColP;
> create table testAltColP stored as parquet as select * from testAltCol;
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId TINYINT,
> cTimeStamp STRING,
> cDecimal STRING,
> cDouble
[jira] [Commented] (HIVE-18476) copy hdfs ACL's as part of replication
[ https://issues.apache.org/jira/browse/HIVE-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16330150#comment-16330150 ] anishek commented on HIVE-18476:

cc [~thejas] For SQL Standard Auth, should we also copy over the roles + privileges to the target warehouse?

> copy hdfs ACL's as part of replication
> --
>
> Key: HIVE-18476
> URL: https://issues.apache.org/jira/browse/HIVE-18476
> Project: Hive
> Issue Type: Bug
> Components: HiveServer2
> Reporter: anishek
> Assignee: anishek
> Priority: Major
> Fix For: 3.0.0
>
> with improvements to HDFS ACL's in hadoop 3.0, hive should, as part of replication, also copy over the ACL's when copying files to the target warehouse. This also means setting the correct owner name and group name, so setOwner + setAcl has to be done on the files copied.
> reference: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18476) copy hdfs ACL's as part of replication
[ https://issues.apache.org/jira/browse/HIVE-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek reassigned HIVE-18476: -- > copy hdfs ACL's as part of replication > -- > > Key: HIVE-18476 > URL: https://issues.apache.org/jira/browse/HIVE-18476 > Project: Hive > Issue Type: Bug > Components: HiveServer2 > Environment: with improvements to HDFS ACL's in hadoop 3.0, hive > should, as part of replication also copy over the ACL's when copying files to > target warehouse. this would also mean setting the correct owner name and > group name > so setOwner + setAcl has to be done on the files copied. > reference: > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html >Reporter: anishek >Assignee: anishek >Priority: Major > Fix For: 3.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18476) copy hdfs ACL's as part of replication
[ https://issues.apache.org/jira/browse/HIVE-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek updated HIVE-18476: --- Description: with improvements to HDFS ACL's in hadoop 3.0, hive should, as part of replication also copy over the ACL's when copying files to target warehouse. this would also mean setting the correct owner name and group name so setOwner + setAcl has to be done on the files copied. reference: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html > copy hdfs ACL's as part of replication > -- > > Key: HIVE-18476 > URL: https://issues.apache.org/jira/browse/HIVE-18476 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: anishek >Assignee: anishek >Priority: Major > Fix For: 3.0.0 > > > with improvements to HDFS ACL's in hadoop 3.0, hive should, as part of > replication also copy over the ACL's when copying files to target warehouse. > this would also mean setting the correct owner name and group name > so setOwner + setAcl has to be done on the files copied. > reference: > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18476) copy hdfs ACL's as part of replication
[ https://issues.apache.org/jira/browse/HIVE-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek updated HIVE-18476: --- Environment: (was: with improvements to HDFS ACL's in hadoop 3.0, hive should, as part of replication also copy over the ACL's when copying files to target warehouse. this would also mean setting the correct owner name and group name so setOwner + setAcl has to be done on the files copied. reference: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html) > copy hdfs ACL's as part of replication > -- > > Key: HIVE-18476 > URL: https://issues.apache.org/jira/browse/HIVE-18476 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: anishek >Assignee: anishek >Priority: Major > Fix For: 3.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18393) Error returned when some other type is read as string from parquet tables
[ https://issues.apache.org/jira/browse/HIVE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16330114#comment-16330114 ] Hive QA commented on HIVE-18393:

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| 0 | findbugs | 0m 1s | Findbugs executables are not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| 0 | mvndep | 1m 35s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 5s | master passed |
| +1 | compile | 1m 16s | master passed |
| +1 | checkstyle | 0m 45s | master passed |
| +1 | javadoc | 1m 5s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 35s | the patch passed |
| +1 | compile | 1m 19s | the patch passed |
| +1 | javac | 1m 19s | the patch passed |
| +1 | checkstyle | 0m 12s | serde: The patch generated 0 new + 9 unchanged - 4 fixed = 9 total (was 13) |
| +1 | checkstyle | 0m 34s | The patch ql passed checkstyle |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | javadoc | 1m 5s | the patch passed |
|| Other Tests ||
| +1 | asflicense | 0m 12s | The patch does not generate ASF License warnings. |
| | | 16m 26s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / a45becb |
| Default Java | 1.8.0_111 |
| modules | C: serde ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8669/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
> Error returned when some other type is read as string from parquet tables
> -
>
> Key: HIVE-18393
> URL: https://issues.apache.org/jira/browse/HIVE-18393
> Project: Hive
> Issue Type: Bug
> Reporter: Janaki Lahorani
> Assignee: Janaki Lahorani
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18393.1.patch, HIVE-18393.2.patch, HIVE-18393.3.patch, HIVE-18393.4.patch, HIVE-18393.4.patch, HIVE-18393.4.patch
>
> TimeStamp, Decimal, Double, Float, BigInt, Int, SmallInt, Tinyint and Boolean when read as String, Varchar or Char should return the correct data. Now this results in an error for parquet tables.
> Test Case:
> {code}
> drop table if exists testAltCol;
> create table testAltCol
> (cId TINYINT,
> cTimeStamp TIMESTAMP,
> cDecimal DECIMAL(38,18),
> cDouble DOUBLE,
> cFloat FLOAT,
> cBigInt BIGINT,
> cInt INT,
> cSmallInt SMALLINT,
> cTinyint TINYINT,
> cBoolean BOOLEAN);
> insert into testAltCol values
> (1,
> '2017-11-07 09:02:49.9',
> 12345678901234567890.123456789012345678,
> 1.79e308,
> 3.4e38,
> 1234567890123456789,
> 1234567890,
> 12345,
> 123,
> TRUE);
> insert into testAltCol values
> (2,
> '1400-01-01 01:01:01.1',
>
[jira] [Updated] (HIVE-18475) Vectorization of CASE with NULL makes unexpected NULL values
[ https://issues.apache.org/jira/browse/HIVE-18475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-18475: -- Status: Patch Available (was: Open) > Vectorization of CASE with NULL makes unexpected NULL values > > > Key: HIVE-18475 > URL: https://issues.apache.org/jira/browse/HIVE-18475 > Project: Hive > Issue Type: Bug >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Critical > Attachments: HIVE-18475.patch > > > Vectorization of CASE with NULL (HIVE-16731) makes unexpected NULL values -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-18386: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Jesus! > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.05.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18475) Vectorization of CASE with NULL makes unexpected NULL values
[ https://issues.apache.org/jira/browse/HIVE-18475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-18475: -- Attachment: HIVE-18475.patch > Vectorization of CASE with NULL makes unexpected NULL values > > > Key: HIVE-18475 > URL: https://issues.apache.org/jira/browse/HIVE-18475 > Project: Hive > Issue Type: Bug >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Critical > Attachments: HIVE-18475.patch > > > Vectorization of CASE with NULL (HIVE-16731) makes unexpected NULL values -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18475) Vectorization of CASE with NULL makes unexpected NULL values
[ https://issues.apache.org/jira/browse/HIVE-18475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi reassigned HIVE-18475: - > Vectorization of CASE with NULL makes unexpected NULL values > > > Key: HIVE-18475 > URL: https://issues.apache.org/jira/browse/HIVE-18475 > Project: Hive > Issue Type: Bug >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Critical > > Vectorization of CASE with NULL (HIVE-16731) makes unexpected NULL values -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16330088#comment-16330088 ] Hive QA commented on HIVE-18472:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12906503/HIVE-18472.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 11625 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[char_udf1] (batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=178)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanQpChanges (batchId=285)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8668/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8668/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8668/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}
This message is automatically generated.

ATTACHMENT ID: 12906503 - PreCommit-HIVE-Build

> Beeline gives log4j warnings
> -
>
> Key: HIVE-18472
> URL: https://issues.apache.org/jira/browse/HIVE-18472
> Project: Hive
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Janaki Lahorani
> Assignee: Janaki Lahorani
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18472.1.patch, HIVE-18472.1.patch
>
> Starting Beeline gives the following warnings multiple times:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
> Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16330016#comment-16330016 ] Hive QA commented on HIVE-18472: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / a45becb | | modules | C: . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8668/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Beeline gives log4j warnings > > > Key: HIVE-18472 > URL: https://issues.apache.org/jira/browse/HIVE-18472 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18472.1.patch, HIVE-18472.1.patch > > > Starting Beeline gives the following warnings multiple times: > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. Set system property > 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show > Log4j2 internal initialization logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16330006#comment-16330006 ] Hive QA commented on HIVE-18386: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12906515/HIVE-18386.05.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 11627 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=173) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.metastore.TestAcidTableSetup.testTransactionalValidation (batchId=221) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN 
(batchId=232) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8667/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8667/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8667/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 17 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12906515 - PreCommit-HIVE-Build > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.05.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329989#comment-16329989 ] Hive QA commented on HIVE-18386: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 46s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | 
{color:red} 0m 18s{color} | {color:red} common: The patch generated 1 new + 941 unchanged - 0 fixed = 942 total (was 941) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 8 new + 442 unchanged - 2 fixed = 450 total (was 444) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 45s{color} | {color:red} root: The patch generated 9 new + 1649 unchanged - 2 fixed = 1658 total (was 1651) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / a45becb | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8667/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8667/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8667/yetus/diff-checkstyle-root.txt | | modules | C: common ql service cli . itests/util U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8667/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.05.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick
[jira] [Updated] (HIVE-18097) WM query move phase 2 - handling the destination pool being full
[ https://issues.apache.org/jira/browse/HIVE-18097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18097: Description: We may add an option to move triggers: "on full kill/ignore/delay kill". The former will kill the query if the dest is full. The ignore one would not perform the move... The thinking is that admin would add another trigger to kill queries after some time (e.g. - move after 10s, kill after 30s), and thus give slow/large queries a grace period in case dest is full so they cannot gracefully release the capacity. It's possible to add a third option - "delay kill", where we would ignore the move, but kill the query if someone requests capacity and the pool is full. Note that none of these require oversubscription; oversubscription is a pain to track. was: Final design TBD. We may add an option to move triggers: "on full kill/ignore/delay kill". The former will kill the query if the dest is full. The ignore one would not perform the move... The thinking is that admin would add another trigger to kill queries after some time (e.g. - move after 10s, kill after 30s), and thus give slow/large queries a grace period in case dest is full so they cannot gracefully release the capacity. It's possible to add a third option - "delay kill", where we would ignore the move, but kill the query if someone requests capacity and the pool is full. Note that none of these require oversubscription; oversubscription is a pain to track. > WM query move phase 2 - handling the destination pool being full > > > Key: HIVE-18097 > URL: https://issues.apache.org/jira/browse/HIVE-18097 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Priority: Major > > We may add an option to move triggers: "on full kill/ignore/delay kill". > The former will kill the query if the dest is full. > The ignore one would not perform the move... The thinking is that admin would > add another trigger to kill queries after some time (e.g. 
- move after 10s, > kill after 30s), and thus give slow/large queries a grace period in case dest > is full so they cannot gracefully release the capacity. > It's possible to add a third option - "delay kill", where we would ignore the > move, but kill the query if someone requests capacity and the pool is full. > Note that none of these require oversubscription; oversubscription is a pain > to track. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
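The three "on full" behaviors described in HIVE-18097 can be sketched as a small policy switch. This is purely illustrative; the enum and method names below are invented for this sketch and do not come from the patch:

```java
// Hypothetical sketch of the "on full kill/ignore/delay kill" move-trigger
// policies discussed in HIVE-18097. Names are invented for illustration.
enum OnFullPolicy {
    KILL,       // kill the query immediately if the destination pool is full
    IGNORE,     // skip the move; a later time-based trigger may still kill it
    DELAY_KILL  // skip the move, but kill if someone later requests capacity
}

public class MovePolicyDemo {
    // Decide what happens to a query when a move trigger fires.
    static String handleMove(OnFullPolicy policy, boolean destFull) {
        if (!destFull) {
            return "moved"; // destination has room: the move always proceeds
        }
        switch (policy) {
            case KILL:       return "killed";
            case IGNORE:     return "stayed";
            case DELAY_KILL: return "stayed, killable on capacity request";
            default:         return "stayed";
        }
    }

    public static void main(String[] args) {
        // e.g. "move after 10s" trigger fires while the destination is full
        System.out.println(handleMove(OnFullPolicy.IGNORE, true));
    }
}
```

Under this reading, the grace-period setup in the description would pair an IGNORE (or DELAY_KILL) move trigger with a separate kill trigger at a longer timeout.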
[jira] [Updated] (HIVE-18438) WM RP: it's impossible to unset things
[ https://issues.apache.org/jira/browse/HIVE-18438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18438: Status: Patch Available (was: Open) > WM RP: it's impossible to unset things > -- > > Key: HIVE-18438 > URL: https://issues.apache.org/jira/browse/HIVE-18438 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18438.patch > > > It should be possible to unset default pool, query parallelism for a RP; also > scheduling policy for a pool, although that does have a magic value 'default' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18438) WM RP: it's impossible to unset things
[ https://issues.apache.org/jira/browse/HIVE-18438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18438: Attachment: HIVE-18438.patch > WM RP: it's impossible to unset things > -- > > Key: HIVE-18438 > URL: https://issues.apache.org/jira/browse/HIVE-18438 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18438.patch > > > It should be possible to unset default pool, query parallelism for a RP; also > scheduling policy for a pool, although that does have a magic value 'default' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17257) Hive should merge empty files
[ https://issues.apache.org/jira/browse/HIVE-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HIVE-17257: Attachment: (was: HIVE-17257.3.patch)
> Hive should merge empty files
> -
>
> Key: HIVE-17257
> URL: https://issues.apache.org/jira/browse/HIVE-17257
> Project: Hive
> Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HIVE-17257.0.patch, HIVE-17257.1.patch, HIVE-17257.2.patch, HIVE-17257.3.patch
>
> Currently, if the file-merge option is turned on and the destination directory contains a large number of empty files, Hive will not trigger a merge task:
> {code}
> private long getMergeSize(FileSystem inpFs, Path dirPath, long avgSize) {
>   AverageSize averageSize = getAverageSize(inpFs, dirPath);
>   if (averageSize.getTotalSize() <= 0) {
>     return -1;
>   }
>   if (averageSize.getNumFiles() <= 1) {
>     return -1;
>   }
>   if (averageSize.getTotalSize()/averageSize.getNumFiles() < avgSize) {
>     return averageSize.getTotalSize();
>   }
>   return -1;
> }
> {code}
> This logic doesn't seem right, as it seems better to combine these empty files into one.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17257) Hive should merge empty files
[ https://issues.apache.org/jira/browse/HIVE-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HIVE-17257: Attachment: HIVE-17257.3.patch
> Hive should merge empty files
> -
>
> Key: HIVE-17257
> URL: https://issues.apache.org/jira/browse/HIVE-17257
> Project: Hive
> Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HIVE-17257.0.patch, HIVE-17257.1.patch, HIVE-17257.2.patch, HIVE-17257.3.patch
>
> Currently, if the file-merge option is turned on and the destination directory contains a large number of empty files, Hive will not trigger a merge task:
> {code}
> private long getMergeSize(FileSystem inpFs, Path dirPath, long avgSize) {
>   AverageSize averageSize = getAverageSize(inpFs, dirPath);
>   if (averageSize.getTotalSize() <= 0) {
>     return -1;
>   }
>   if (averageSize.getNumFiles() <= 1) {
>     return -1;
>   }
>   if (averageSize.getTotalSize()/averageSize.getNumFiles() < avgSize) {
>     return averageSize.getTotalSize();
>   }
>   return -1;
> }
> {code}
> This logic doesn't seem right, as it seems better to combine these empty files into one.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
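As a rough illustration of the change the HIVE-17257 description argues for (not the actual patch), the average-size check could treat a directory of multiple zero-byte files as mergeable instead of bailing out on a non-positive total size. The `AverageSize` class here is a minimal stand-in for Hive's internal one:

```java
// Minimal stand-in for Hive's internal AverageSize holder.
class AverageSize {
    private final long totalSize;
    private final int numFiles;
    AverageSize(long totalSize, int numFiles) {
        this.totalSize = totalSize;
        this.numFiles = numFiles;
    }
    long getTotalSize() { return totalSize; }
    int getNumFiles() { return numFiles; }
}

public class MergeSizeSketch {
    // Returns the total size to merge, or -1 when no merge should happen.
    static long getMergeSize(AverageSize averageSize, long avgSize) {
        if (averageSize.getTotalSize() < 0 || averageSize.getNumFiles() <= 1) {
            return -1; // stats unavailable, or nothing worth merging
        }
        // Unlike the quoted code (which returns -1 whenever totalSize <= 0),
        // a directory of many empty files still triggers a merge here,
        // since its average file size (0) is below the threshold.
        if (averageSize.getTotalSize() / averageSize.getNumFiles() < avgSize) {
            return averageSize.getTotalSize();
        }
        return -1;
    }

    public static void main(String[] args) {
        // 100 empty files: the quoted logic returns -1 (no merge);
        // this sketch returns the total size (0), i.e. merge them.
        System.out.println(getMergeSize(new AverageSize(0, 100), 16));
        // A single file is never merged.
        System.out.println(getMergeSize(new AverageSize(0, 1), 16));
    }
}
```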
[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329957#comment-16329957 ] Alexander Kolbasov commented on HIVE-18449: --- IMO this patch is most valuable as a testing config that allows deterministic testing. Using it as a workaround for other bugs seems less useful. > Add configurable policy for choosing the HMS URI from hive.metastore.uris > - > > Key: HIVE-18449 > URL: https://issues.apache.org/jira/browse/HIVE-18449 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Sahil Takiar >Assignee: Janaki Lahorani >Priority: Major > > HIVE-10815 added logic to randomly choose a HMS URI from > {{hive.metastore.uris}}. It would be nice if there was a configurable policy > that determined how a URI is chosen from this list - e.g. one option can be > to randomly pick a URI, another option can be to choose the first URI in the > list (which was the behavior prior to HIVE-10815). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329951#comment-16329951 ] Thejas M Nair commented on HIVE-18449: -- I see the value of the metastore selection policy as a way to temporarily work around some bugs, like the one you mentioned. As you pointed out, this patch is a different use case from HIVE-18347. HIVE-18347 is about how to retrieve the list of metastore URIs, and this one is about how to pick one from the list. I think it makes sense to address both separately, with different knobs. > Add configurable policy for choosing the HMS URI from hive.metastore.uris > - > > Key: HIVE-18449 > URL: https://issues.apache.org/jira/browse/HIVE-18449 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Sahil Takiar >Assignee: Janaki Lahorani >Priority: Major > > HIVE-10815 added logic to randomly choose a HMS URI from > {{hive.metastore.uris}}. It would be nice if there was a configurable policy > that determined how a URI is chosen from this list - e.g. one option can be > to randomly pick a URI, another option can be to choose the first URI in the > list (which was the behavior prior to HIVE-10815). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
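As a hedged sketch of the kind of knob HIVE-18449 discusses (the interface and class names below are invented for illustration and are not Hive's actual API), URI selection could sit behind a small pluggable policy, with one implementation per behavior:

```java
import java.util.List;
import java.util.Random;

// Hypothetical sketch of a configurable HMS URI selection policy.
interface UriSelectionPolicy {
    String select(List<String> uris);
}

// Randomly pick a URI from the list (the behavior HIVE-10815 introduced).
class RandomUriPolicy implements UriSelectionPolicy {
    private final Random rng = new Random();
    public String select(List<String> uris) {
        return uris.get(rng.nextInt(uris.size()));
    }
}

// Always pick the first URI (the behavior prior to HIVE-10815).
class FirstUriPolicy implements UriSelectionPolicy {
    public String select(List<String> uris) {
        return uris.get(0);
    }
}

public class UriPolicyDemo {
    public static void main(String[] args) {
        List<String> uris = List.of("thrift://hms1:9083", "thrift://hms2:9083");
        System.out.println(new FirstUriPolicy().select(uris));
    }
}
```

The config value would then name the policy to instantiate, which also makes tests deterministic by pinning the first-URI policy.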
[jira] [Commented] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329949#comment-16329949 ] Hive QA commented on HIVE-18386: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12906515/HIVE-18386.05.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 30 failed/errored test(s), 11627 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_mat_4] (batchId=6) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=173) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_partspec2] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_alter_table_exchange_partition_fail] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[ctas_noemptyfolder] (batchId=94) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_authorization_create_no_grant] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_authorization_drop_other] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_no_transactional_rewrite_2] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_bucketed_column] (batchId=94) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254) org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testMapNullKey[0] (batchId=191) org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testMapWithComplexData[5] (batchId=191) org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testSyntheticComplexSchema[5] (batchId=191) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8666/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8666/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8666/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: 
TestsFailedException: 30 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12906515 - PreCommit-HIVE-Build > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.05.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice
[jira] [Commented] (HIVE-10693) LLAP: DAG got stuck after reducer fetch failed
[ https://issues.apache.org/jira/browse/HIVE-10693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329947#comment-16329947 ] xiaoli commented on HIVE-10693: --- Hello Sergey Shelukhin: I am hitting the same issue. One of our machines has a "linux hung task" problem; our Tez version is 0.7.0: !image-2018-01-18-11-26-56-388.png! The Tez task on that machine got stuck after a reducer fetch failed, which looks like this issue. Can it be solved by some Tez configuration, or is there another related issue? > LLAP: DAG got stuck after reducer fetch failed > -- > > Key: HIVE-10693 > URL: https://issues.apache.org/jira/browse/HIVE-10693 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Siddharth Seth >Priority: Major > > Internal app ID application_1429683757595_0912, LLAP > application_1429683757595_0911. If someone without access wants to > investigate I'll get the logs. > I've run into this only once. Feel free to close as not repro, I'll reopen if > I see it again :) I want to make sure some debug info is preserved just in case. > Running Q1 - Map 1 w/1000 tasks (in this particular case), followed by > Reducer 2 and Reducer 3, 1 task each, IIRC 3 is uber. 
> Fetch failed with I'd assume some random disturbance in the force: > {noformat} > 2015-05-12 13:37:31,056 [fetcher [Map_1] #17()] WARN > org.apache.tez.runtime.library.common.shuffle.orderedgrouped.FetcherOrderedGrouped: > Failed to verify reply after connecting to > cn047-10.l42scl.hortonworks.com:15551 with 1 inputs pending > java.net.SocketTimeoutException: Read timed out >at java.net.SocketInputStream.$$YJP$$socketRead0(Native Method) >at java.net.SocketInputStream.socketRead0(SocketInputStream.java) >at java.net.SocketInputStream.read(SocketInputStream.java:150) >at java.net.SocketInputStream.read(SocketInputStream.java:121) >at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) >at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) >at java.io.BufferedInputStream.read(BufferedInputStream.java:345) >at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:703) >at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647) >at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:787) >at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647) >at > sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1534) >at > sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1439) >at > org.apache.tez.runtime.library.common.shuffle.HttpConnection.getInputStream(HttpConnection.java:256) >at > org.apache.tez.runtime.library.common.shuffle.orderedgrouped.FetcherOrderedGrouped.setupConnection(FetcherOrderedGrouped.java:339) >at > org.apache.tez.runtime.library.common.shuffle.orderedgrouped.FetcherOrderedGrouped.copyFromHost(FetcherOrderedGrouped.java:257) >at > org.apache.tez.runtime.library.common.shuffle.orderedgrouped.FetcherOrderedGrouped.fetchNext(FetcherOrderedGrouped.java:167) >at > org.apache.tez.runtime.library.common.shuffle.orderedgrouped.FetcherOrderedGrouped.run(FetcherOrderedGrouped.java:182) > {noformat} > AM registered this as Map 1 task failure > {noformat} > 
2015-05-12 13:37:31,156 INFO [Dispatcher thread: Central] > impl.TaskAttemptImpl: attempt_1429683757595_0912_1_00_000998_0 blamed for > read error from attempt_1429683757595_0912_1_01_00_0 at inputIndex 998 > ... > 2015-05-12 13:37:31,174 INFO [Dispatcher thread: Central] impl.TaskImpl: > Scheduling new attempt for task: task_1429683757595_0912_1_00_000998, > currentFailedAttempts: 1, maxFailedAttempts: 4 > {noformat} > Eventually Map 1 completed > {noformat} > 2015-05-12 13:38:25,247 INFO [Dispatcher thread: Central] > history.HistoryEventHandler: > [HISTORY][DAG:dag_1429683757595_0912_1][Event:VERTEX_FINISHED]: > vertexName=Map 1, vertexId=vertex_1429683757595_0912_1_00, > initRequestedTime=1431462752913, initedTime=1431462754818, > startRequestedTime=1431462754819, startedTime=1431462754819, > finishTime=1431463105101, timeTaken=350282, status=SUCCEEDED, diagnostics=, > counters=Counters: 29, org.apache.tez.common.counters.DAGCounter, > DATA_LOCAL_TASKS=59, RACK_LOCAL_TASKS=941, File System Counters, > FILE_BYTES_READ=2160704, FILE_BYTES_WRITTEN=20377550, FILE_READ_OPS=0, > FILE_LARGE_READ_OPS=0, FILE_WRITE_OPS=0, HDFS_BYTES_READ=9798097828287, > HDFS_BYTES_WRITTEN=0, HDFS_READ_OPS=406131, HDFS_LARGE_READ_OPS=0, > HDFS_WRITE_OPS=0, org.apache.tez.common.counters.TaskCounter, > SPILLED_RECORDS=4000, GC_TIME_MILLIS=73309, CPU_MILLISECONDS=0, > PHYSICAL_MEMORY_BYTES=-1000,
[jira] [Work stopped] (HIVE-17434) Using "add jar " from viewFs always occurred hdfs mismatch error
[ https://issues.apache.org/jira/browse/HIVE-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-17434 stopped by Bang Xiao.
> Using "add jar " from viewFs always occurred hdfs mismatch error
>
> Key: HIVE-17434
> URL: https://issues.apache.org/jira/browse/HIVE-17434
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.2.1, 1.2.2, 1.2.3
> Reporter: shenxianqiang
> Assignee: Bang Xiao
> Priority: Minor
> Fix For: 1.2.1, 1.2.3
>
> Attachments: HIVE-17434-branch-1.2.patch
>
> add jar viewfs://nsX//lib/common.jar
> always causes a mismatch error
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17434) Using "add jar " from viewFs always occurred hdfs mismatch error
[ https://issues.apache.org/jira/browse/HIVE-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bang Xiao updated HIVE-17434:
Target Version/s: 1.2.2, 1.2.1, 1.2.3 (was: 1.2.1, 1.2.2, 1.2.3)
Status: Patch Available (was: Open)
> Using "add jar " from viewFs always occurred hdfs mismatch error
>
> Key: HIVE-17434
> URL: https://issues.apache.org/jira/browse/HIVE-17434
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.2.2, 1.2.1, 1.2.3
> Reporter: shenxianqiang
> Assignee: Bang Xiao
> Priority: Minor
> Fix For: 1.2.3, 1.2.1
>
> Attachments: HIVE-17434-branch-1.2.patch
>
> add jar viewfs://nsX//lib/common.jar
> always causes a mismatch error
[jira] [Commented] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329938#comment-16329938 ] Hive QA commented on HIVE-18386: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | 
{color:red} 0m 20s{color} | {color:red} common: The patch generated 1 new + 941 unchanged - 0 fixed = 942 total (was 941) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 8 new + 442 unchanged - 2 fixed = 450 total (was 444) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 43s{color} | {color:red} root: The patch generated 9 new + 1649 unchanged - 2 fixed = 1658 total (was 1651) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / a45becb | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8666/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8666/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8666/yetus/diff-checkstyle-root.txt | | modules | C: common ql service cli . itests/util U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8666/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.05.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick
[jira] [Commented] (HIVE-18442) HoS: No FileSystem for scheme: nullscan
[ https://issues.apache.org/jira/browse/HIVE-18442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329909#comment-16329909 ] Rui Li commented on HIVE-18442:
---
Hi [~xuefuz], I believe it's related to how hive-exec.jar is added to the driver's classpath. FileSystem uses a ServiceLoader to [load FS implementations|https://github.com/apache/hadoop/blob/release-2.8.3-RC0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L2757]. This method is only called once, so for NullScanFileSystem to be loaded, we have to make sure hive-exec.jar is on the classpath when the method is called. Alternatively, we can set the implementation class in the JobConf, which is what the patch does. It seems hive-exec is added differently between yarn-client and yarn-cluster mode. I can do some more investigation into that.
> HoS: No FileSystem for scheme: nullscan
>
> Key: HIVE-18442
> URL: https://issues.apache.org/jira/browse/HIVE-18442
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Reporter: Rui Li
> Assignee: Rui Li
> Priority: Major
> Attachments: HIVE-18442.1.patch
>
> Hit the issue when I ran the following query in yarn-cluster mode:
> {code}
> select * from (select key from src where false) a left outer join (select key from srcpart limit 0) b on a.key=b.key;
> {code}
> Stack trace:
> {noformat}
> Job failed with java.io.IOException: No FileSystem for scheme: nullscan
>   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
>   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
>   at org.apache.hadoop.hive.ql.exec.Utilities.isEmptyPath(Utilities.java:2605)
>   at org.apache.hadoop.hive.ql.exec.Utilities.isEmptyPath(Utilities.java:2601)
>   at org.apache.hadoop.hive.ql.exec.Utilities$GetInputPathsCallable.call(Utilities.java:3409)
>   at org.apache.hadoop.hive.ql.exec.Utilities.getInputPaths(Utilities.java:3347)
>   at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.cloneJobConf(SparkPlanGenerator.java:299)
>   at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generate(SparkPlanGenerator.java:222)
>   at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generate(SparkPlanGenerator.java:109)
>   at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient$JobStatusJob.call(RemoteHiveSparkClient.java:354)
>   at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:358)
>   at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:323)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
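The JobConf route described in the comment above can be sketched as follows. This is a sketch, not the actual HIVE-18442 patch: it relies on Hadoop's `fs.<scheme>.impl` configuration convention, which `FileSystem.getFileSystemClass` consults before falling back to the ServiceLoader-populated registry, so the `nullscan` scheme can resolve even if hive-exec.jar was not on the classpath when the one-time ServiceLoader scan ran.

```java
// Sketch only; assumes Hadoop and Hive on the classpath.
JobConf conf = new JobConf();
// fs.<scheme>.impl is read before the ServiceLoader registry is consulted.
conf.set("fs.nullscan.impl",
         "org.apache.hadoop.hive.ql.io.NullScanFileSystem");
```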
[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329900#comment-16329900 ] Vihang Karajgaonkar commented on HIVE-18449:
Thanks for your inputs [~thejas]. I agree that adding new configs is a problem in general, but in this particular case it is hard to say with 100% certainty that there are no issues with an active-active HMS configuration, especially in a heavily used environment. The fix for HIVE-16886 went into Hive 3.0, so Hive 2.x is still vulnerable to the duplicate event id issue. Also, I am not 100% sure, but I remember there was some discussion on whether the fix for HIVE-16886 worked well for all the supported databases (I may be wrong on this). We had also seen in our testing that DataNucleus caching was causing some problems in active-active mode, but we could not reproduce it later. Having a config option alleviates the problem and provides an easy workaround until the issue is fixed. We can keep the existing behavior of random URIs by using a default pluggable implementation along the lines of [~szehon]'s patch. In general I think it's a good idea to have a pluggable URI resolver, which can be extended in the future for use cases like smart load balancing based on some metric or automatic HMS service discovery.
> Add configurable policy for choosing the HMS URI from hive.metastore.uris
>
> Key: HIVE-18449
> URL: https://issues.apache.org/jira/browse/HIVE-18449
> Project: Hive
> Issue Type: Improvement
> Components: Metastore
> Reporter: Sahil Takiar
> Assignee: Janaki Lahorani
> Priority: Major
>
> HIVE-10815 added logic to randomly choose a HMS URI from {{hive.metastore.uris}}. It would be nice if there was a configurable policy that determined how a URI is chosen from this list - e.g. one option can be to randomly pick a URI, another option can be to choose the first URI in the list (which was the behavior prior to HIVE-10815).
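To make the pluggable URI resolver idea above concrete, here is a minimal plain-Java sketch. The interface and class names are hypothetical illustrations, not Hive API: one policy mirrors the post-HIVE-10815 random pick, the other the pre-HIVE-10815 first-in-list behavior.

```java
import java.util.List;
import java.util.Random;

/** Hypothetical policy interface; not part of the Hive codebase. */
interface MetastoreUriPolicy {
    String choose(List<String> uris);
}

/** Mirrors the post-HIVE-10815 behavior: pick a URI at random. */
class RandomUriPolicy implements MetastoreUriPolicy {
    private final Random rnd = new Random();
    public String choose(List<String> uris) {
        return uris.get(rnd.nextInt(uris.size()));
    }
}

/** Mirrors the pre-HIVE-10815 behavior: always take the first URI. */
class FirstUriPolicy implements MetastoreUriPolicy {
    public String choose(List<String> uris) {
        return uris.get(0);
    }
}

class UriPolicyDemo {
    public static void main(String[] args) {
        List<String> uris = List.of("thrift://hms1:9083", "thrift://hms2:9083");
        // First-in-list keeps every client on one HMS (active-standby style).
        System.out.println(new FirstUriPolicy().choose(uris));
        // Random spreads clients across instances (active-active style).
        System.out.println(new RandomUriPolicy().choose(uris));
    }
}
```

A config key could then name the policy class, with the random policy as the default to preserve current behavior.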
[jira] [Commented] (HIVE-18471) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329894#comment-16329894 ] Janaki Lahorani commented on HIVE-18471: Thanks [~thejas]. :) > Beeline gives log4j warnings > > > Key: HIVE-18471 > URL: https://issues.apache.org/jira/browse/HIVE-18471 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Priority: Major > > Starting Beeline gives the following warnings: > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. Set system property > 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show > Log4j2 internal initialization logging. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18471) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18471: --- Description: Starting Beeline gives the following warnings: SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging. was: Starting Beeline gives the following warnings: > Beeline gives log4j warnings > > > Key: HIVE-18471 > URL: https://issues.apache.org/jira/browse/HIVE-18471 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Priority: Major > > Starting Beeline gives the following warnings: > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. 
Set system property > 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show > Log4j2 internal initialization logging. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18447) JDBC: Provide a way for JDBC users to pass cookie info via connection string
[ https://issues.apache.org/jira/browse/HIVE-18447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329890#comment-16329890 ] Hive QA commented on HIVE-18447:
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12906496/HIVE-18447.1.patch
{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 11626 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_partspec2] (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_authorization_create_no_grant] (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_authorization_drop_other] (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_no_transactional_rewrite_2] (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1] (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby] (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_bucketed_column] (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8665/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8665/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8665/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12906496 - PreCommit-HIVE-Build
> JDBC: Provide a way for JDBC users to pass cookie info via connection string
>
> Key: HIVE-18447
> URL: https://issues.apache.org/jira/browse/HIVE-18447
> Project: Hive
> Issue Type: Bug
> Components: JDBC
> Reporter: Vaibhav Gumashta
> Assignee: Vaibhav Gumashta
> Priority: Major
> Attachments: HIVE-18447.1.patch
>
> Some authentication mechanisms, like Single Sign On, need the ability to pass a cookie to some intermediate authentication service like Knox via the JDBC driver. We need to add the mechanism in Hive's JDBC driver (when used in HTTP transport mode).
[jira] [Updated] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-18323: --- Attachment: HIVE-18323.08.patch > Vectorization: add the support of timestamp in > VectorizedPrimitiveColumnReader for parquet > -- > > Key: HIVE-18323 > URL: https://issues.apache.org/jira/browse/HIVE-18323 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, > HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, > HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.08.patch, > HIVE-18323.1.patch > > > {noformat} > CREATE TABLE `t1`( > `ts` timestamp, > `s1` string) > STORED AS PARQUET; > set hive.vectorized.execution.enabled=true; > SELECT * from t1 SORT BY s1; > {noformat} > This query will throw exception since timestamp is not supported here yet. > {noformat} > Caused by: java.io.IOException: java.io.IOException: Unsupported type: > optional int96 ts > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329874#comment-16329874 ] Vihang Karajgaonkar commented on HIVE-18323:
The TestMiniLlapLocalCliDriver[vectorized_parquet_types] test hangs for me without the patch as well, so it looks unrelated; I will create a separate JIRA to get that test fixed. TestSparkCliDriver.testCliDriver[vectorized_ptf] fails without the patch as well. The other tests are known failing tests. +1. I will commit the patch tomorrow unless anyone has any objections. Thanks for the review [~aihuaxu] and [~Ferd], and thanks for the INT64 pointers [~spena].
> Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
> Issue Type: Sub-task
> Components: Vectorization
> Affects Versions: 3.0.0
> Reporter: Aihua Xu
> Assignee: Vihang Karajgaonkar
> Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.1.patch
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: optional int96 ts
>   at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>   at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>   at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
>   at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}
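For context on the unsupported int96 type above: Parquet's INT96 timestamps pack 8 little-endian bytes of nanos-of-day followed by 4 bytes of Julian day, the layout Hive's NanoTimeUtils works with. A decoding sketch, ignoring the timezone adjustments the real reader performs (constant and class names here are illustrative, not the Hive implementation):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.sql.Timestamp;

class Int96TimestampSketch {
    // Julian day number of the Unix epoch, 1970-01-01.
    static final long JULIAN_DAY_OF_EPOCH = 2440588L;
    static final long MILLIS_PER_DAY = 86_400_000L;

    static Timestamp fromInt96(byte[] int96) {
        ByteBuffer buf = ByteBuffer.wrap(int96).order(ByteOrder.LITTLE_ENDIAN);
        long nanosOfDay = buf.getLong();  // first 8 bytes
        int julianDay = buf.getInt();     // last 4 bytes
        long millis = (julianDay - JULIAN_DAY_OF_EPOCH) * MILLIS_PER_DAY
                + nanosOfDay / 1_000_000L;
        Timestamp ts = new Timestamp(millis);
        // Replace the fractional second with full nanosecond precision.
        ts.setNanos((int) (nanosOfDay % 1_000_000_000L));
        return ts;
    }
}
```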
[jira] [Commented] (HIVE-18471) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329873#comment-16329873 ] Thejas M Nair commented on HIVE-18471: -- [~janulatha] Looks like you forgot to paste the warnings! :) > Beeline gives log4j warnings > > > Key: HIVE-18471 > URL: https://issues.apache.org/jira/browse/HIVE-18471 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Priority: Major > > Starting Beeline gives the following warnings: > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329847#comment-16329847 ] Thejas M Nair commented on HIVE-18449:
--
Thanks for clarifying. However, that looks like an unreliable workaround (it won't work if the first metastore goes down and comes back up - you would then have both active). FYI, HIVE-16886 addresses the issue you mentioned for DbNotificationListener.
> Add configurable policy for choosing the HMS URI from hive.metastore.uris
>
> Key: HIVE-18449
> URL: https://issues.apache.org/jira/browse/HIVE-18449
> Project: Hive
> Issue Type: Improvement
> Components: Metastore
> Reporter: Sahil Takiar
> Assignee: Janaki Lahorani
> Priority: Major
>
> HIVE-10815 added logic to randomly choose a HMS URI from {{hive.metastore.uris}}. It would be nice if there was a configurable policy that determined how a URI is chosen from this list - e.g. one option can be to randomly pick a URI, another option can be to choose the first URI in the list (which was the behavior prior to HIVE-10815).
[jira] [Commented] (HIVE-18447) JDBC: Provide a way for JDBC users to pass cookie info via connection string
[ https://issues.apache.org/jira/browse/HIVE-18447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329842#comment-16329842 ] Hive QA commented on HIVE-18447: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | 
{color:red} 0m 10s{color} | {color:red} jdbc: The patch generated 3 new + 93 unchanged - 2 fixed = 96 total (was 95) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / a45becb | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8665/yetus/diff-checkstyle-jdbc.txt | | modules | C: jdbc itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8665/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > JDBC: Provide a way for JDBC users to pass cookie info via connection string > > > Key: HIVE-18447 > URL: https://issues.apache.org/jira/browse/HIVE-18447 > Project: Hive > Issue Type: Bug > Components: JDBC >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-18447.1.patch > > > Some authentication mechanisms like Single Sign On, need the ability to pass > a cookie to some intermediate authentication service like Knox via the JDBC > driver. 
We need to add the mechanism in Hive's JDBC driver (when used in HTTP > transport mode). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18457) improve show plan output (triggers, mappings)
[ https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329805#comment-16329805 ] Hive QA commented on HIVE-18457: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12906479/HIVE-18457.01.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11624 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254) org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[wrong_distinct2] (batchId=245) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232) {noformat} Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/8664/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8664/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8664/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 16 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12906479 - PreCommit-HIVE-Build > improve show plan output (triggers, mappings) > - > > Key: HIVE-18457 > URL: https://issues.apache.org/jira/browse/HIVE-18457 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18457.01.patch, HIVE-18457.patch > > > Did the following sequence to add triggers to UNMANAGED. I can see the > triggers added to metastore by IS_IN_UNAMANGED flag is not set in metastore. > Also show resource plans does not show triggers in unmanaged pool. 
> {code}
> 0: jdbc:hive2://localhost:1> show resource plans;
> +----------+----------+--------------------+
> | rp_name  |  status  | query_parallelism  |
> +----------+----------+--------------------+
> | global   | ACTIVE   | NULL               |
> | llap     | ENABLED  | NULL               |
> +----------+----------+--------------------+
> 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN llap ACTIVATE;
> 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global DISABLE;
> 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.highly_parallel WHEN TOTAL_TASKS > 40 DO KILL;
> 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.highly_parallel ADD TO UNMANAGED;
> 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > 30 DO KILL;
> 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.big_hdfs_read ADD TO UNMANAGED;
> 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.slow_query WHEN EXECUTION_TIME > 10 DO KILL;
> 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.slow_query ADD TO UNMANAGED;
> 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.some_spills WHEN SPILLED_RECORDS > 10 DO KILL;
> 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.some_spills ADD TO UNMANAGED;
> 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global ENABLE;
> 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global ACTIVATE;
> 0: jdbc:hive2://localhost:1> show resource plan global;
> +-------+
> | line  |
>
[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329797#comment-16329797 ] Sahil Takiar commented on HIVE-18449: - Yes, active-active is better for load balancing. However, there have been issues with running multiple HMS instances against a single backend DB, mainly due to issues with event notifications that systems like Sentry rely on. For example, duplicate event ids or event ids that are out of order. > Add configurable policy for choosing the HMS URI from hive.metastore.uris > - > > Key: HIVE-18449 > URL: https://issues.apache.org/jira/browse/HIVE-18449 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Sahil Takiar >Assignee: Janaki Lahorani >Priority: Major > > HIVE-10815 added logic to randomly choose a HMS URI from > {{hive.metastore.uris}}. It would be nice if there was a configurable policy > that determined how a URI is chosen from this list - e.g. one option can be > to randomly pick a URI, another option can be to choose the first URI in the > list (which was the behavior prior to HIVE-10815). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18473: --- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) q test files not related. Pushed to master, thanks for reviewing [~ashutoshc]! > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18473.patch > > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329789#comment-16329789 ] Thejas M Nair commented on HIVE-18449: -- [~stakiar] If you have 2 instances of metastore up and running, it seems like active-active is going to give you better use of the already ear-marked resources? Why would you choose active-passive over active-active? > Add configurable policy for choosing the HMS URI from hive.metastore.uris > - > > Key: HIVE-18449 > URL: https://issues.apache.org/jira/browse/HIVE-18449 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Sahil Takiar >Assignee: Janaki Lahorani >Priority: Major > > HIVE-10815 added logic to randomly choose a HMS URI from > {{hive.metastore.uris}}. It would be nice if there was a configurable policy > that determined how a URI is chosen from this list - e.g. one option can be > to randomly pick a URI, another option can be to choose the first URI in the > list (which was the behavior prior to HIVE-10815). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329779#comment-16329779 ] Ashutosh Chauhan commented on HIVE-18473: - +1 > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18473.patch > > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329773#comment-16329773 ] Ashutosh Chauhan commented on HIVE-18386: - +1 pending tests. > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.05.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18350) load data should rename files consistent with insert statements
[ https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18350: -- Attachment: HIVE-18350.6.patch > load data should rename files consistent with insert statements > --- > > Key: HIVE-18350 > URL: https://issues.apache.org/jira/browse/HIVE-18350 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18350.1.patch, HIVE-18350.2.patch, > HIVE-18350.3.patch, HIVE-18350.4.patch, HIVE-18350.5.patch, HIVE-18350.6.patch > > > Insert statements create files of format ending with _0, 0001_0 etc. > However, the load data uses the input file name. That results in inconsistent > naming convention which makes SMB joins difficult in some scenarios and may > cause trouble for other types of queries in future. > We need consistent naming convention. > For non-bucketed table, hive renames all the files regardless of how they > were named by the user. > For bucketed table, hive relies on user to name the files matching the > bucket in non-strict mode. Hive assumes that the data belongs to same bucket > in a file. In strict mode, loading bucketed table is disabled. > This will likely affect most of the tests which load data which is pretty > significant due to which it is further divided into two subtasks for smoother > merge. > For existing tables in customer database, it is recommended to reload > bucketed tables otherwise if customer tries to run SMB join and there is a > bucket for which there is no split, then there is a possibility of getting > incorrect results. However, this is not a regression as it would happen even > without the patch. > With this patch however, and reloading data, the results should be correct. > For non-bucketed tables and external tables, there is no difference in > behavior and reloading data is not needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
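[Editor's note] The naming convention the HIVE-18350 description refers to can be sketched as follows. This is an illustrative helper written for this digest, not Hive's actual rename code: the class and method names are invented, and the assumption (hedged, based on Hive's usual insert output names such as 000000_0) is that bucket N maps to a zero-padded "<taskId>_<attemptId>" style file name.

```java
import java.util.ArrayList;
import java.util.List;

public class BucketFileNames {
    // Assumption: insert paths name output files "<taskId>_<attemptId>",
    // with the task id zero-padded to six digits (e.g. 000000_0).
    static String bucketFileName(int bucketId) {
        return String.format("%06d_0", bucketId);
    }

    // Rename targets for a load into a table with numBuckets buckets:
    // one consistently named file per bucket, regardless of user file names.
    static List<String> renameForLoad(int numBuckets) {
        List<String> names = new ArrayList<>();
        for (int b = 0; b < numBuckets; b++) {
            names.add(bucketFileName(b));
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(renameForLoad(3)); // [000000_0, 000001_0, 000002_0]
    }
}
```

With such a mapping, an SMB join can locate the file for bucket N by name alone, which is the consistency the issue asks for.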
[jira] [Updated] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18386: --- Attachment: HIVE-18386.05.patch > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.05.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18386: --- Attachment: (was: HIVE-18386.04.patch) > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.05.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18457) improve show plan output (triggers, mappings)
[ https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329732#comment-16329732 ] Hive QA commented on HIVE-18457: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s{color} | {color:red} ql: The patch generated 15 new + 123 unchanged - 0 fixed = 138 total (was 123) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 01816fc | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8664/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8664/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > improve show plan output (triggers, mappings) > - > > Key: HIVE-18457 > URL: https://issues.apache.org/jira/browse/HIVE-18457 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18457.01.patch, HIVE-18457.patch > > > Did the following sequence to add triggers to UNMANAGED. I can see the > triggers added to metastore by IS_IN_UNAMANGED flag is not set in metastore. > Also show resource plans does not show triggers in unmanaged pool. 
> {code}
> 0: jdbc:hive2://localhost:1> show resource plans;
> +----------+----------+--------------------+
> | rp_name  |  status  | query_parallelism  |
> +----------+----------+--------------------+
> | global   | ACTIVE   | NULL               |
> | llap     | ENABLED  | NULL               |
> +----------+----------+--------------------+
> 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN llap ACTIVATE;
> 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global DISABLE;
> 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.highly_parallel WHEN TOTAL_TASKS > 40 DO KILL;
> 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.highly_parallel ADD TO UNMANAGED;
> 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > 30 DO KILL;
> 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.big_hdfs_read ADD TO UNMANAGED;
> 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.slow_query WHEN EXECUTION_TIME > 10 DO KILL;
> 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.slow_query ADD
[jira] [Updated] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer
[ https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18460: -- Status: Patch Available (was: Open) > Compactor doesn't pass Table properties to the Orc writer > - > > Key: HIVE-18460 > URL: https://issues.apache.org/jira/browse/HIVE-18460 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 0.13 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch, > HIVE-18460.03.patch, HIVE-18460.04.patch > > > > CompactorMap.getWrite()/getDeleteEventWriter() both do > AcidOutputFormat.Options.tableProperties() but > OrcOutputFormat.getRawRecordWriter() does > {noformat} > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getConfiguration()); > {noformat} > which ignores tableProperties value. > It should do > {noformat} > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getTableProperties(), > options.getConfiguration()); > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer
[ https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18460: -- Attachment: HIVE-18460.04.patch > Compactor doesn't pass Table properties to the Orc writer > - > > Key: HIVE-18460 > URL: https://issues.apache.org/jira/browse/HIVE-18460 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 0.13 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch, > HIVE-18460.03.patch, HIVE-18460.04.patch > > > > CompactorMap.getWrite()/getDeleteEventWriter() both do > AcidOutputFormat.Options.tableProperties() but > OrcOutputFormat.getRawRecordWriter() does > {noformat} > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getConfiguration()); > {noformat} > which ignores tableProperties value. > It should do > {noformat} > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getTableProperties(), > options.getConfiguration()); > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer
[ https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18460: -- Status: Open (was: Patch Available) > Compactor doesn't pass Table properties to the Orc writer > - > > Key: HIVE-18460 > URL: https://issues.apache.org/jira/browse/HIVE-18460 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 0.13 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch, > HIVE-18460.03.patch > > > > CompactorMap.getWrite()/getDeleteEventWriter() both do > AcidOutputFormat.Options.tableProperties() but > OrcOutputFormat.getRawRecordWriter() does > {noformat} > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getConfiguration()); > {noformat} > which ignores tableProperties value. > It should do > {noformat} > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getTableProperties(), > options.getConfiguration()); > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
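[Editor's note] The semantics behind the HIVE-18460 fix is that table-level properties should shadow values coming from the job Configuration, which the one-argument writerOptions call ignores. The sketch below illustrates only that lookup order with plain java.util.Properties; it does not use the real ORC or Hive APIs, the resolve helper is invented for illustration, and orc.compress is just an example key.

```java
import java.util.Properties;

public class WriterOptionsMerge {
    // Table properties are consulted first; the configuration properties
    // act as the fallback, modeled here via Properties defaults chaining.
    static String resolve(Properties tableProps, Properties confProps, String key) {
        Properties merged = new Properties(confProps); // confProps = defaults
        merged.putAll(tableProps);                     // table values shadow them
        return merged.getProperty(key);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("orc.compress", "ZLIB");    // cluster-wide default
        Properties table = new Properties();
        table.setProperty("orc.compress", "SNAPPY"); // per-table override
        System.out.println(resolve(table, conf, "orc.compress"));            // SNAPPY
        System.out.println(resolve(new Properties(), conf, "orc.compress")); // ZLIB
    }
}
```

Dropping the tableProps argument, as the current compactor path effectively does, silently falls back to the configuration value in all cases.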
[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329709#comment-16329709 ] Hive QA commented on HIVE-18323: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12906481/HIVE-18323.08-branch-2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 10661 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explaindenpendencydiffengs] (batchId=38) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=142) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=139) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[table_nonprintable] (batchId=140) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_acid_non_acid] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_parquet_types] (batchId=155) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[merge_negative_5] (batchId=88) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[explaindenpendencydiffengs] (batchId=115) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_ptf] (batchId=125) org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=176) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8663/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8663/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8663/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 11 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12906481 - PreCommit-HIVE-Build > Vectorization: add the support of timestamp in > VectorizedPrimitiveColumnReader for parquet > -- > > Key: HIVE-18323 > URL: https://issues.apache.org/jira/browse/HIVE-18323 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, > HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, > HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.1.patch > > > {noformat} > CREATE TABLE `t1`( > `ts` timestamp, > `s1` string) > STORED AS PARQUET; > set hive.vectorized.execution.enabled=true; > SELECT * from t1 SORT BY s1; > {noformat} > This query will throw exception since timestamp is not supported here yet. > {noformat} > Caused by: java.io.IOException: java.io.IOException: Unsupported type: > optional int96 ts > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
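[Editor's note] For background on the "Unsupported type: optional int96 ts" error above: Parquet's legacy INT96 timestamp packs nanoseconds-of-day and a Julian day number into 12 little-endian bytes. The decoder below is an editor's sketch of that widely used layout, not code from the patch; the class name and constants are chosen for illustration.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Int96Timestamp {
    // Assumed INT96 layout (as written by Hive/Impala-style writers):
    // bytes 0-7  = nanoseconds within the day (little-endian)
    // bytes 8-11 = Julian day number (little-endian)
    private static final int JULIAN_EPOCH_DAY = 2440588; // Julian day of 1970-01-01
    private static final long MILLIS_PER_DAY = 86_400_000L;

    static long toEpochMillis(byte[] int96) {
        ByteBuffer buf = ByteBuffer.wrap(int96).order(ByteOrder.LITTLE_ENDIAN);
        long nanosOfDay = buf.getLong();
        int julianDay = buf.getInt();
        return (julianDay - JULIAN_EPOCH_DAY) * MILLIS_PER_DAY + nanosOfDay / 1_000_000L;
    }

    public static void main(String[] args) {
        // Julian day 2440588 with zero nanoseconds is the Unix epoch.
        ByteBuffer buf = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(0L).putInt(2440588);
        System.out.println(toEpochMillis(buf.array())); // 0
    }
}
```

A vectorized column reader supporting this type has to apply the same decoding per value while filling the timestamp column vector.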
[jira] [Commented] (HIVE-14162) Allow disabling of long running job on Hive On Spark On YARN
[ https://issues.apache.org/jira/browse/HIVE-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329692#comment-16329692 ] Xuefu Zhang commented on HIVE-14162: Thanks, [~belugabehr]. I liked your thoughts and agreed that live drivers might be a concern for long idle sessions. Let's wait to get more inputs to see if it makes sense to add a knob on this. > Allow disabling of long running job on Hive On Spark On YARN > > > Key: HIVE-14162 > URL: https://issues.apache.org/jira/browse/HIVE-14162 > Project: Hive > Issue Type: New Feature > Components: Spark >Reporter: Thomas Scott >Assignee: Aihua Xu >Priority: Minor > Attachments: HIVE-14162.1.patch > > > Hive On Spark launches a long running process on the first query to handle > all queries for that user session. In some use cases this is not desired, for > instance when using Hue with large intervals between query executions. > Could we have a property that would cause long running spark jobs to be > terminated after each query execution and started again for the next one? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329668#comment-16329668 ] Sahil Takiar commented on HIVE-18449: - [~thejas] the use case relates to how a standalone HMS service is deployed in High Availability mode. When multiple standalone HMS instances are deployed, one simple way to configure them in an active-passive HA mode is to have clients connect to the first URI in {{hive.metastore.uris}}. If the first URI in the list fails, then all connections are transferred to the second URI in the list. Alternatively, if a user wants an active-active HA deployment then the HMS client would randomly pick a URI from the URIs list. Allowing for a configurable policy enables users to pick between an active-active vs. an active-passive HA configuration. [~szehon] I briefly took a look at HIVE-18347, it looks very similar to what I had in mind here, although slightly different. It looks like your change allows for a pluggable way to resolve a given URI, but still uses the "random" policy for picking a URI from a set of URIs. I think they could be potentially combined, or if you plan on merging HIVE-18347 soon we can build upon it here. > Add configurable policy for choosing the HMS URI from hive.metastore.uris > - > > Key: HIVE-18449 > URL: https://issues.apache.org/jira/browse/HIVE-18449 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Sahil Takiar >Assignee: Janaki Lahorani >Priority: Major > > HIVE-10815 added logic to randomly choose a HMS URI from > {{hive.metastore.uris}}. It would be nice if there was a configurable policy > that determined how a URI is chosen from this list - e.g. one option can be > to randomly pick a URI, another option can be to choose the first URI in the > list (which was the behavior prior to HIVE-10815). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
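[Editor's note] A minimal sketch of what the configurable selection policy discussed in HIVE-18449 could look like. Everything here is hypothetical, not the actual patch: the Policy enum, method name, and URI strings are invented. SEQUENTIAL models the active-passive style (always prefer the first URI, failing over only on connection error), RANDOM models active-active load spreading.

```java
import java.util.List;
import java.util.Random;

public class MetastoreUriPolicy {
    // Hypothetical policy names; the real config key/values would be
    // decided in the patch.
    enum Policy { RANDOM, SEQUENTIAL }

    static String chooseUri(List<String> uris, Policy policy, Random rng) {
        switch (policy) {
            case SEQUENTIAL:
                // active-passive: always the first URI; callers fail over
                // to later entries only when a connection attempt fails.
                return uris.get(0);
            case RANDOM:
            default:
                // active-active: spread client connections across instances.
                return uris.get(rng.nextInt(uris.size()));
        }
    }

    public static void main(String[] args) {
        List<String> uris = List.of("thrift://hms1:9083", "thrift://hms2:9083");
        System.out.println(chooseUri(uris, Policy.SEQUENTIAL, new Random())); // thrift://hms1:9083
        System.out.println(chooseUri(uris, Policy.RANDOM, new Random()));
    }
}
```

The pluggable resolver from HIVE-18347 could supply the candidate list, with a policy like this picking among the candidates.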
[jira] [Updated] (HIVE-17396) Support DPP with map joins where the source and target belong in the same stage
[ https://issues.apache.org/jira/browse/HIVE-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-17396: --- Attachment: (was: HIVE-17396.7.patch) > Support DPP with map joins where the source and target belong in the same > stage > --- > > Key: HIVE-17396 > URL: https://issues.apache.org/jira/browse/HIVE-17396 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-17396.1.patch, HIVE-17396.2.patch, > HIVE-17396.3.patch, HIVE-17396.4.patch, HIVE-17396.5.patch, > HIVE-17396.6.patch, HIVE-17396.7.patch > > > When the target of a partition pruning sink operator is in not the same as > the target of hash table sink operator, both source and target gets scheduled > within the same spark job, and that can result in File Not Found Exception. > HIVE-17225 has a fix to disable DPP in that scenario. This JIRA is to > support DPP for such cases. > Test Case: > SET hive.spark.dynamic.partition.pruning=true; > SET hive.auto.convert.join=true; > SET hive.strict.checks.cartesian.product=false; > CREATE TABLE part_table1 (col int) PARTITIONED BY (part1_col int); > CREATE TABLE part_table2 (col int) PARTITIONED BY (part2_col int); > CREATE TABLE reg_table (col int); > ALTER TABLE part_table1 ADD PARTITION (part1_col = 1); > ALTER TABLE part_table2 ADD PARTITION (part2_col = 1); > ALTER TABLE part_table2 ADD PARTITION (part2_col = 2); > INSERT INTO TABLE part_table1 PARTITION (part1_col = 1) VALUES (1); > INSERT INTO TABLE part_table2 PARTITION (part2_col = 1) VALUES (1); > INSERT INTO TABLE part_table2 PARTITION (part2_col = 2) VALUES (2); > INSERT INTO table reg_table VALUES (1), (2), (3), (4), (5), (6); > EXPLAIN SELECT * > FROM part_table1 pt1, >part_table2 pt2, >reg_table rt > WHERE rt.col = pt1.part1_col > ANDpt2.part2_col = pt1.part1_col; > Plan: > STAGE DEPENDENCIES: > Stage-2 is a root stage > Stage-1 depends on stages: Stage-2 > 
Stage-0 depends on stages: Stage-1 > STAGE PLANS: > Stage: Stage-2 > Spark > A masked pattern was here > Vertices: > Map 1 > Map Operator Tree: > TableScan > alias: pt1 > Statistics: Num rows: 1 Data size: 1 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: col (type: int), part1_col (type: int) > outputColumnNames: _col0, _col1 > Statistics: Num rows: 1 Data size: 1 Basic stats: > COMPLETE Column stats: NONE > Spark HashTable Sink Operator > keys: > 0 _col1 (type: int) > 1 _col1 (type: int) > 2 _col0 (type: int) > Select Operator > expressions: _col1 (type: int) > outputColumnNames: _col0 > Statistics: Num rows: 1 Data size: 1 Basic stats: > COMPLETE Column stats: NONE > Group By Operator > keys: _col0 (type: int) > mode: hash > outputColumnNames: _col0 > Statistics: Num rows: 1 Data size: 1 Basic stats: > COMPLETE Column stats: NONE > Spark Partition Pruning Sink Operator > Target column: part2_col (int) > partition key expr: part2_col > Statistics: Num rows: 1 Data size: 1 Basic stats: > COMPLETE Column stats: NONE > target work: Map 2 > Local Work: > Map Reduce Local Work > Map 2 > Map Operator Tree: > TableScan > alias: pt2 > Statistics: Num rows: 2 Data size: 2 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: col (type: int), part2_col (type: int) > outputColumnNames: _col0, _col1 > Statistics: Num rows: 2 Data size: 2 Basic stats: > COMPLETE Column stats: NONE > Spark HashTable Sink Operator > keys: > 0 _col1 (type: int) > 1 _col1 (type: int) > 2 _col0 (type: int) > Local Work: > Map Reduce Local Work > Stage: Stage-1 > Spark > A masked pattern was here > Vertices: > Map 3 > Map Operator Tree: > TableScan > alias: rt >
[jira] [Updated] (HIVE-18393) Error returned when some other type is read as string from parquet tables
[ https://issues.apache.org/jira/browse/HIVE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18393: --- Attachment: HIVE-18393.4.patch > Error returned when some other type is read as string from parquet tables > - > > Key: HIVE-18393 > URL: https://issues.apache.org/jira/browse/HIVE-18393 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18393.1.patch, HIVE-18393.2.patch, > HIVE-18393.3.patch, HIVE-18393.4.patch, HIVE-18393.4.patch, HIVE-18393.4.patch > > > TimeStamp, Decimal, Double, Float, BigInt, Int, SmallInt, Tinyint and Boolean > when read as String, Varchar or Char should return the correct data. Now > this results in an error for parquet tables. > Test Case: > {code} > drop table if exists testAltCol; > create table testAltCol > (cId TINYINT, > cTimeStamp TIMESTAMP, > cDecimal DECIMAL(38,18), > cDouble DOUBLE, > cFloat FLOAT, > cBigInt BIGINT, > cInt INT, > cSmallInt SMALLINT, > cTinyint TINYINT, > cBoolean BOOLEAN); > insert into testAltCol values > (1, > '2017-11-07 09:02:49.9', > 12345678901234567890.123456789012345678, > 1.79e308, > 3.4e38, > 1234567890123456789, > 1234567890, > 12345, > 123, > TRUE); > insert into testAltCol values > (2, > '1400-01-01 01:01:01.1', > 1.1, > 2.2, > 3.3, > 1, > 2, > 3, > 4, > FALSE); > insert into testAltCol values > (3, > '1400-01-01 01:01:01.1', > 10.1, > 20.2, > 30.3, > 1234567890123456789, > 1234567890, > 12345, > 123, > TRUE); > select cId, cTimeStamp from testAltCol order by cId; > select cId, cDecimal, cDouble, cFloat from testAltCol order by cId; > select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltCol order by cId; > select cId, cBoolean from testAltCol order by cId; > drop table if exists testAltColP; > create table testAltColP stored as parquet as select * from testAltCol; > select cId, cTimeStamp from testAltColP order by cId; > select cId, cDecimal, cDouble, 
cFloat from testAltColP order by cId; > select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId; > select cId, cBoolean from testAltColP order by cId; > alter table testAltColP replace columns > (cId TINYINT, > cTimeStamp STRING, > cDecimal STRING, > cDouble STRING, > cFloat STRING, > cBigInt STRING, > cInt STRING, > cSmallInt STRING, > cTinyint STRING, > cBoolean STRING); > select cId, cTimeStamp from testAltColP order by cId; > select cId, cDecimal, cDouble, cFloat from testAltColP order by cId; > select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId; > select cId, cBoolean from testAltColP order by cId; > alter table testAltColP replace columns > (cId TINYINT, > cTimeStamp VARCHAR(100), > cDecimal VARCHAR(100), > cDouble VARCHAR(100), > cFloat VARCHAR(100), > cBigInt VARCHAR(100), > cInt VARCHAR(100), > cSmallInt VARCHAR(100), > cTinyint VARCHAR(100), > cBoolean VARCHAR(100)); > select cId, cTimeStamp from testAltColP order by cId; > select cId, cDecimal, cDouble, cFloat from testAltColP order by cId; > select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId; > select cId, cBoolean from testAltColP order by cId; > alter table testAltColP replace columns > (cId TINYINT, > cTimeStamp CHAR(100), > cDecimal CHAR(100), > cDouble CHAR(100), > cFloat CHAR(100), > cBigInt CHAR(100), > cInt CHAR(100), > cSmallInt CHAR(100), > cTinyint CHAR(100), > cBoolean CHAR(100)); > select cId, cTimeStamp from testAltColP order by cId; > select cId, cDecimal, cDouble, cFloat from testAltColP order by cId; > select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId; > select cId, cBoolean from testAltColP order by cId; > drop table if exists testAltColP; > {code} > {code} > Error: > FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.mr.MapRedTask > Excerpt for log: > 2018-01-05T15:54:05,756 ERROR [LocalJobRunner Map Task Executor #0] > mr.ExecMapper: 
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime > Error while processing row [Error getting row data with exception > java.lang.UnsupportedOperationException: Cannot inspect > org.apache.hadoop.hive.serde2.io.TimestampWritable > at > org.apache.hadoop.hive.ql.io.parquet.serde.primitive.ParquetStringInspector.getPrimitiveJavaObject(ParquetStringInspector.java:77) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
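The "Cannot inspect ... TimestampWritable" failure above happens because the string inspector assumes a bytes-backed value and cannot handle the writable that the Parquet reader actually hands it. A minimal, hypothetical Java sketch (not Hive's actual patch) of the kind of fallback such an inspector needs — converting whatever primitive it receives via its string form instead of assuming bytes:

```java
import java.sql.Timestamp;

public class StringReadSketch {
    // Hypothetical helper mirroring what a string inspector must do when the
    // underlying value is not bytes-backed: fall back to the value's own
    // string representation rather than throwing UnsupportedOperationException.
    static String asString(Object value) {
        if (value == null) {
            return null;
        }
        // Works for Timestamp, numeric wrappers, Boolean, etc.
        return String.valueOf(value);
    }

    public static void main(String[] args) {
        // One of the values from the test case above, read back as a string.
        Timestamp ts = Timestamp.valueOf("2017-11-07 09:02:49.9");
        System.out.println(asString(ts));
    }
}
```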
[jira] [Updated] (HIVE-18393) Error returned when some other type is read as string from parquet tables
[ https://issues.apache.org/jira/browse/HIVE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18393: --- Attachment: (was: HIVE-18393.1.patch)
> Error returned when some other type is read as string from parquet tables
> -------------------------------------------------------------------------
>
> Key: HIVE-18393
> URL: https://issues.apache.org/jira/browse/HIVE-18393
> Project: Hive
> Issue Type: Bug
> Reporter: Janaki Lahorani
> Assignee: Janaki Lahorani
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18393.1.patch, HIVE-18393.2.patch, HIVE-18393.3.patch, HIVE-18393.4.patch, HIVE-18393.4.patch, HIVE-18393.4.patch
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18472: --- Attachment: HIVE-18472.1.patch > Beeline gives log4j warnings > > > Key: HIVE-18472 > URL: https://issues.apache.org/jira/browse/HIVE-18472 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18472.1.patch, HIVE-18472.1.patch > > > Starting Beeline gives the following warnings multiple times: > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. Set system property > 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show > Log4j2 internal initialization logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
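The "multiple SLF4J bindings" warning above means both log4j-slf4j-impl (log4j2) and slf4j-log4j12 are on the classpath; SLF4J can only use one binding. A hedged sketch of the usual Maven remedy for this class of warning — the artifact coordinates are the two bindings named in the warning, but which dependency drags in slf4j-log4j12 varies by build, so the enclosing dependency here is only an example:

```xml
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>${hive.version}</version>
  <exclusions>
    <!-- keep log4j-slf4j-impl (log4j2) as the single active binding -->
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

The trailing "No log4j2 configuration file found" error is separate: it goes away once a log4j2.properties (or .xml) is visible on the classpath.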
[jira] [Commented] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329653#comment-16329653 ] Hive QA commented on HIVE-18473: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12906485/HIVE-18473.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11625 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=178) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[archive_partspec2] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_authorization_create_no_grant] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1] (batchId=94) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby] (batchId=94) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) 
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testGetMetastoreUuid (batchId=205) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8662/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8662/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8662/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 20 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12906485 - PreCommit-HIVE-Build > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18473.patch > > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18386: --- Attachment: HIVE-18386.04.patch > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.04.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
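The registry described above is a configurable choice between a caching implementation and a pass-through one. A minimal, self-contained Java sketch of that pattern — all names here are hypothetical stand-ins, not Hive's actual classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The common lookup contract.
interface ViewRegistry {
    String lookup(String name);
}

// Keeps plans in memory for quick access during planning.
class CachingViewRegistry implements ViewRegistry {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ViewRegistry backing;
    CachingViewRegistry(ViewRegistry backing) { this.backing = backing; }
    public String lookup(String name) {
        return cache.computeIfAbsent(name, backing::lookup);
    }
}

// "Dummy" registry: always forwards to the backing store (here, the metastore).
class DummyViewRegistry implements ViewRegistry {
    private final ViewRegistry backing;
    DummyViewRegistry(ViewRegistry backing) { this.backing = backing; }
    public String lookup(String name) { return backing.lookup(name); }
}

public class RegistrySketch {
    public static void main(String[] args) {
        ViewRegistry metastore = name -> "plan-of-" + name; // stand-in for the metastore
        boolean useDummy = true; // in Hive this choice would come from a config setting
        ViewRegistry reg = useDummy ? new DummyViewRegistry(metastore)
                                    : new CachingViewRegistry(metastore);
        System.out.println(reg.lookup("v0"));
    }
}
```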
[jira] [Updated] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18386: --- Attachment: (was: HIVE-18386.03.patch) > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.04.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18386) Create dummy materialized views registry and make it configurable
[ https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18386: --- Attachment: HIVE-18386.03.patch > Create dummy materialized views registry and make it configurable > - > > Key: HIVE-18386 > URL: https://issues.apache.org/jira/browse/HIVE-18386 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch, > HIVE-18386.03.patch > > > HiveMaterializedViewsRegistry keeps the materialized views plans in memory to > have quick access when queries are planned. For debugging purposes, we will > create a dummy materialized views registry that forwards all calls to > metastore and make the choice configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18447) JDBC: Provide a way for JDBC users to pass cookie info via connection string
[ https://issues.apache.org/jira/browse/HIVE-18447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-18447: Attachment: HIVE-18447.1.patch > JDBC: Provide a way for JDBC users to pass cookie info via connection string > > > Key: HIVE-18447 > URL: https://issues.apache.org/jira/browse/HIVE-18447 > Project: Hive > Issue Type: Bug > Components: JDBC > Reporter: Vaibhav Gumashta > Assignee: Vaibhav Gumashta > Priority: Major > Attachments: HIVE-18447.1.patch > > > Some authentication mechanisms, like Single Sign-On, need the ability to pass a cookie to an intermediate authentication service like Knox via the JDBC driver. We need to add this mechanism to Hive's JDBC driver (when used in HTTP transport mode). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18447) JDBC: Provide a way for JDBC users to pass cookie info via connection string
[ https://issues.apache.org/jira/browse/HIVE-18447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-18447: Status: Patch Available (was: Open) > JDBC: Provide a way for JDBC users to pass cookie info via connection string > > > Key: HIVE-18447 > URL: https://issues.apache.org/jira/browse/HIVE-18447 > Project: Hive > Issue Type: Bug > Components: JDBC > Reporter: Vaibhav Gumashta > Assignee: Vaibhav Gumashta > Priority: Major > Attachments: HIVE-18447.1.patch > > > Some authentication mechanisms, like Single Sign-On, need the ability to pass a cookie to an intermediate authentication service like Knox via the JDBC driver. We need to add this mechanism to Hive's JDBC driver (when used in HTTP transport mode). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
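A hedged sketch of what such a connection string could look like from the client side. `transportMode=http` and `httpPath` are existing Hive JDBC URL parameters; the `http.cookie.<name>=<value>` key is hypothetical here, illustrating the kind of mechanism this issue proposes rather than a confirmed parameter name:

```java
public class CookieUrlSketch {
    // Builds a HiveServer2 HTTP-mode JDBC URL carrying a cookie for an
    // intermediate gateway such as Knox. The host, port, cookie name and
    // value below are illustrative.
    static String buildUrl(String host, String cookieName, String cookieValue) {
        return "jdbc:hive2://" + host + ":10001/default;"
             + "transportMode=http;httpPath=cliservice;"
             + "http.cookie." + cookieName + "=" + cookieValue; // hypothetical key
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("gateway.example.com", "hadoop-jwt", "token123"));
    }
}
```

The resulting string would then be passed to `DriverManager.getConnection(...)` as usual.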
[jira] [Commented] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329549#comment-16329549 ] Hive QA commented on HIVE-18473: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | 
{color:red} 0m 10s{color} | {color:red} druid-handler: The patch generated 6 new + 103 unchanged - 4 fixed = 109 total (was 107) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 01816fc | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8662/yetus/diff-checkstyle-druid-handler.txt | | modules | C: ql druid-handler itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8662/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration > Affects Versions: 3.0.0 > Reporter: Jesus Camacho Rodriguez > Assignee: Jesus Camacho Rodriguez > Priority: Major > Attachments: HIVE-18473.patch > > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18461) Fix precommit hive job
[ https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329515#comment-16329515 ] Sahil Takiar commented on HIVE-18461: - Yeah, unfortunately the files under {{testutils/ptest2/conf/deployed/}} are just for reference, and are probably out of date. > Fix precommit hive job > -- > > Key: HIVE-18461 > URL: https://issues.apache.org/jira/browse/HIVE-18461 > Project: Hive > Issue Type: Task > Components: Testing Infrastructure > Reporter: Vihang Karajgaonkar > Assignee: Vihang Karajgaonkar > Priority: Blocker > Attachments: HIVE-18461.01.patch > > > JIRA was upgraded over the weekend and the precommit job has been failing since then. There are potentially two issues at play here. One is with the precommit admin job, which automates the patch testing. I think YETUS-594 should fix the precommit admin job. But manual submission of Hive jobs is failing with the exception below. We should get this fixed to get the automated testing back on track. 
> {noformat} > + local > 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*' > + java -cp > '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*' > org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target > --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint > http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint > http://104.198.109.242/logs/ --profile master-mr2 --patch > https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch > --jira HIVE-18323 > Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: > protocol_version > at sun.security.ssl.Alerts.getSSLException(Alerts.java:208) > at sun.security.ssl.Alerts.getSSLException(Alerts.java:154) > at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979) > at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086) > at > sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332) > at > sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359) > at > sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343) > at > sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559) > at > sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185) > at > sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301) > at > 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254) > at java.net.URL.openStream(URL.java:1041) > at > com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72) > at com.google.common.io.ByteSource.read(ByteSource.java:257) > at com.google.common.io.Resources.toByteArray(Resources.java:99) > at > org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126) > at > org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
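The "Received fatal alert: protocol_version" in the trace above typically means the client offered an older TLS version than the server now accepts (the upgraded JIRA requires TLS 1.2, which Java 7 clients support but do not enable by default). A minimal sketch of the usual client-side workaround, assuming the submitting JVM honors the standard `https.protocols` system property for `HttpsURLConnection`:

```java
public class TlsWorkaroundSketch {
    public static void main(String[] args) {
        // Force TLS 1.2 for HttpsURLConnection-based clients (such as the
        // URL.openStream() call in PTestClient). Equivalent to launching with
        // java -Dhttps.protocols=TLSv1.2 ...
        System.setProperty("https.protocols", "TLSv1.2");
        System.out.println(System.getProperty("https.protocols"));
    }
}
```

On Java 8 and later, TLS 1.2 is enabled by default and this property is unnecessary.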
[jira] [Commented] (HIVE-18461) Fix precommit hive job
[ https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329500#comment-16329500 ] Vihang Karajgaonkar commented on HIVE-18461: updating testutils/ptest2/conf/deployed/master-mr2.properties will not "just" work. The ptest server side uses its own separate copy of this file, so unfortunately, if you need to make changes, you will have to get them reviewed by me, [~stakiar] or [~pvary] currently. We can help merge the changes once they are reviewed. We may be able to fix this similarly to what we did for testconfiguration.properties, but it would need some code changes I guess. > Fix precommit hive job > -- > > Key: HIVE-18461 > URL: https://issues.apache.org/jira/browse/HIVE-18461 > Project: Hive > Issue Type: Task > Components: Testing Infrastructure > Reporter: Vihang Karajgaonkar > Assignee: Vihang Karajgaonkar > Priority: Blocker > Attachments: HIVE-18461.01.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17396) Support DPP with map joins where the source and target belong in the same stage
[ https://issues.apache.org/jira/browse/HIVE-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329493#comment-16329493 ] Hive QA commented on HIVE-17396: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12906427/HIVE-17396.7.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 11623 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=178) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254) org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testMapWithComplexData[1] (batchId=191) org.apache.hive.hcatalog.pig.TestTextFileHCatStorer.testWriteDecimalX (batchId=191) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232) 
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8661/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8661/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8661/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 18 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12906427 - PreCommit-HIVE-Build > Support DPP with map joins where the source and target belong in the same > stage > --- > > Key: HIVE-17396 > URL: https://issues.apache.org/jira/browse/HIVE-17396 > Project: Hive > Issue Type: Sub-task > Components: Spark > Reporter: Janaki Lahorani > Assignee: Janaki Lahorani > Priority: Major > Attachments: HIVE-17396.1.patch, HIVE-17396.2.patch, > HIVE-17396.3.patch, HIVE-17396.4.patch, HIVE-17396.5.patch, > HIVE-17396.6.patch, HIVE-17396.7.patch, HIVE-17396.7.patch > > > When the target of a partition pruning sink operator is not the same as > the target of the hash table sink operator, both source and target get scheduled > within the same Spark job, and that can result in a FileNotFoundException. > HIVE-17225 has a fix to disable DPP in that scenario. This JIRA is to > support DPP for such cases. 
> Test Case: > SET hive.spark.dynamic.partition.pruning=true; > SET hive.auto.convert.join=true; > SET hive.strict.checks.cartesian.product=false; > CREATE TABLE part_table1 (col int) PARTITIONED BY (part1_col int); > CREATE TABLE part_table2 (col int) PARTITIONED BY (part2_col int); > CREATE TABLE reg_table (col int); > ALTER TABLE part_table1 ADD PARTITION (part1_col = 1); > ALTER TABLE part_table2 ADD PARTITION (part2_col = 1); > ALTER TABLE part_table2 ADD PARTITION (part2_col = 2); > INSERT INTO TABLE part_table1 PARTITION (part1_col = 1) VALUES (1); > INSERT INTO TABLE part_table2 PARTITION (part2_col = 1) VALUES (1); > INSERT INTO TABLE part_table2 PARTITION (part2_col = 2) VALUES (2); > INSERT INTO TABLE reg_table VALUES (1), (2), (3), (4), (5), (6); > EXPLAIN SELECT * > FROM part_table1 pt1, > part_table2 pt2, > reg_table rt > WHERE rt.col = pt1.part1_col > AND pt2.part2_col = pt1.part1_col; > Plan: > STAGE DEPENDENCIES: > Stage-2 is a root stage > Stage-1 depends on stages: Stage-2 > Stage-0 depends on stages: Stage-1 > STAGE PLANS: > Stage: Stage-2 > Spark > A masked
[jira] [Commented] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329484#comment-16329484 ] Jesus Camacho Rodriguez commented on HIVE-18473: [~ashutoshc], could you review? This fixes an issue with the use of time zone information in DruidSerDe. As the SerDe is initialized differently from other SerDes, time zone information was not being transferred to the column types. This issue is only present in Hive 3, since it has to do with the new 'timestamp with local time zone' type. Cc [~nishantbangarwa] > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration > Affects Versions: 3.0.0 > Reporter: Jesus Camacho Rodriguez > Assignee: Jesus Camacho Rodriguez > Priority: Major > Attachments: HIVE-18473.patch > > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
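The distinction the comment describes matters because 'timestamp with local time zone' stores an instant and renders it in the session's zone, so dropping the zone from the column type silently changes displayed values. A minimal java.time sketch of the conversion the SerDe must preserve (the instant and zones here are illustrative):

```java
import java.time.Instant;
import java.time.ZoneId;

public class TszSketch {
    public static void main(String[] args) {
        // One and the same instant, rendered for two different session time
        // zones: this is the zone-aware conversion that is lost if the column
        // type no longer carries time zone information.
        Instant instant = Instant.parse("2018-01-17T10:00:00Z");
        System.out.println(instant.atZone(ZoneId.of("UTC")).toLocalDateTime());
        System.out.println(instant.atZone(ZoneId.of("America/Los_Angeles")).toLocalDateTime());
    }
}
```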
[jira] [Updated] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18473: --- Attachment: HIVE-18473.patch > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18473.patch > > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-18473 started by Jesus Camacho Rodriguez. -- > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18473.patch > > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18473: --- Status: Patch Available (was: In Progress) > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18473.patch > > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18430) Add new determinism category for runtime constants (current_date, current_timestamp)
[ https://issues.apache.org/jira/browse/HIVE-18430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-18430: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to master. > Add new determinism category for runtime constants (current_date, > current_timestamp) > > > Key: HIVE-18430 > URL: https://issues.apache.org/jira/browse/HIVE-18430 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18430.1.patch, HIVE-18430.2.patch > > > Add a new piece of metadata to the UDFs to specify whether a UDF is a > runtime constant. Runtime constants also exist in SQL Server, and this is > similar to Postgres' concept of STABLE functions. This metadata may be useful > for materialized views and query caching. > Some Hive functions such as the ones listed below are currently labelled as > deterministic, but really are runtime constants: > current_timestamp > current_date > current_user > current_database > The values for these functions are not deterministic between different > queries - for example current_timestamp will most likely be different for every > query executed. This makes these functions ineligible for things like > materialized views or cached query results. > However the value of current_timestamp should not change during the life > of a single query, which allows these values to be used in optimizations such > as constant folding. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
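The "runtime constant" semantics described above can be sketched in a few lines: the value is frozen once when the query starts and reused for every row, so it is stable within a query but varies between queries. This is an illustrative sketch only, not Hive's actual UDF metadata or evaluator code; the class and method names are hypothetical.

```java
import java.time.Instant;

// Hypothetical sketch: a runtime constant is evaluated once at query start
// and then reused for every row of that query.
public class RuntimeConstantSketch {
    private Instant queryStartTime; // frozen when the query begins

    // Called once per query, before any rows are processed.
    public void startQuery() {
        queryStartTime = Instant.now();
    }

    // Every row in the same query sees the same value, which is what makes
    // optimizations like constant folding safe within a single query.
    public Instant currentTimestamp() {
        return queryStartTime;
    }

    public static void main(String[] args) {
        RuntimeConstantSketch q = new RuntimeConstantSketch();
        q.startQuery();
        Instant first = q.currentTimestamp();
        Instant second = q.currentTimestamp();
        System.out.println("stable within query: " + first.equals(second));
    }
}
```

A second query would call startQuery() again and see a different value, which is why such functions are ineligible for cached query results even though they fold to constants inside one query.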
[jira] [Assigned] (HIVE-18473) Infer timezone information correctly in DruidSerde
[ https://issues.apache.org/jira/browse/HIVE-18473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-18473: -- > Infer timezone information correctly in DruidSerde > -- > > Key: HIVE-18473 > URL: https://issues.apache.org/jira/browse/HIVE-18473 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > > Currently timezone information is not being processed by DruidSerde (contrary > to other SerDes). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-18323: --- Attachment: HIVE-18323.08-branch-2.patch > Vectorization: add the support of timestamp in > VectorizedPrimitiveColumnReader for parquet > -- > > Key: HIVE-18323 > URL: https://issues.apache.org/jira/browse/HIVE-18323 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, > HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, > HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.1.patch > > > {noformat} > CREATE TABLE `t1`( > `ts` timestamp, > `s1` string) > STORED AS PARQUET; > set hive.vectorized.execution.enabled=true; > SELECT * from t1 SORT BY s1; > {noformat} > This query will throw exception since timestamp is not supported here yet. > {noformat} > Caused by: java.io.IOException: java.io.IOException: Unsupported type: > optional int96 ts > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-18323: --- Attachment: (was: HIVE-18323.08-branch-2.patch) > Vectorization: add the support of timestamp in > VectorizedPrimitiveColumnReader for parquet > -- > > Key: HIVE-18323 > URL: https://issues.apache.org/jira/browse/HIVE-18323 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, > HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, > HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.1.patch > > > {noformat} > CREATE TABLE `t1`( > `ts` timestamp, > `s1` string) > STORED AS PARQUET; > set hive.vectorized.execution.enabled=true; > SELECT * from t1 SORT BY s1; > {noformat} > This query will throw exception since timestamp is not supported here yet. > {noformat} > Caused by: java.io.IOException: java.io.IOException: Unsupported type: > optional int96 ts > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18385) mergejoin fails with java.lang.IllegalStateException
[ https://issues.apache.org/jira/browse/HIVE-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-18385: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) committed to master. > mergejoin fails with java.lang.IllegalStateException > > > Key: HIVE-18385 > URL: https://issues.apache.org/jira/browse/HIVE-18385 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Jason Dere >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18385.1.patch, HIVE-18385.2.patch, hive.log > > > mergejoin test fails with java.lang.IllegalStateException when run in > MiniLlapLocal. > This is the query for which it fails, > [ERROR] TestMiniLlapLocalCliDriver.testCliDriver:59 Client execution failed > with error code = 2 running " > select count(*) from tab a join tab_part b on a.key = b.key join src1 c on > a.value = c.value" fname=mergejoin.q > This is the stack trace, > failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: b initializer failed, > vertex=vertex_1515180518813_0001_42_05 [Map 8], java.lang.RuntimeException: > ORC split generation failed with exception: java.lang.IllegalStateException: > Failed to retrieve dynamic value for RS_12_a_key_min > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1784) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1872) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:499) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:684) > at > org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269) > at 
java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253) > at > com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) > at > com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) > at > com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.util.concurrent.ExecutionException: > java.lang.IllegalStateException: Failed to retrieve dynamic value for > RS_12_a_key_min > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1778) > ... 
17 more > Caused by: java.lang.IllegalStateException: Failed to retrieve dynamic value > for RS_12_a_key_min > at > org.apache.hadoop.hive.ql.plan.DynamicValue.getValue(DynamicValue.java:142) > at > org.apache.hadoop.hive.ql.plan.DynamicValue.getJavaValue(DynamicValue.java:97) > at > org.apache.hadoop.hive.ql.plan.DynamicValue.getLiteral(DynamicValue.java:93) > at > org.apache.hadoop.hive.ql.io.sarg.SearchArgumentImpl$PredicateLeafImpl.getLiteralList(SearchArgumentImpl.java:120) > at > org.apache.orc.impl.RecordReaderImpl.evaluatePredicateMinMax(RecordReaderImpl.java:553) > at > org.apache.orc.impl.RecordReaderImpl.evaluatePredicateRange(RecordReaderImpl.java:463) > at > org.apache.orc.impl.RecordReaderImpl.evaluatePredicate(RecordReaderImpl.java:440) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.isStripeSatisfyPredicate(OrcInputFormat.java:2163) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.pickStripesInternal(OrcInputFormat.java:2140) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.pickStripes(OrcInputFormat.java:2131) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.access$3000(OrcInputFormat.java:157) > at >
[jira] [Updated] (HIVE-18457) improve show plan output (triggers, mappings)
[ https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18457: Attachment: HIVE-18457.01.patch > improve show plan output (triggers, mappings) > - > > Key: HIVE-18457 > URL: https://issues.apache.org/jira/browse/HIVE-18457 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18457.01.patch, HIVE-18457.patch > > > Did the following sequence to add triggers to UNMANAGED. I can see the > triggers added to metastore by IS_IN_UNAMANGED flag is not set in metastore. > Also show resource plans does not show triggers in unmanaged pool. > {code} > 0: jdbc:hive2://localhost:1> show resource plans; > +--+--++ > | rp_name | status | query_parallelism | > +--+--++ > | global | ACTIVE | NULL | > | llap | ENABLED | NULL | > +--+--++ > 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN llap ACTIVATE; > 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global DISABLE; > 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.highly_parallel WHEN > TOTAL_TASKS > 40 DO KILL; > 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.highly_parallel ADD TO > UNMANAGED; > 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.big_hdfs_read WHEN > HDFS_BYTES_READ > 30 DO KILL; > 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.big_hdfs_read ADD TO > UNMANAGED; > 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.slow_query WHEN > EXECUTION_TIME > 10 DO KILL; > 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.slow_query ADD TO > UNMANAGED; > 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.some_spills WHEN > SPILLED_RECORDS > 10 DO KILL; > 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.some_spills ADD TO > UNMANAGED; > 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ENABLE; > 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ACTIVATE; > 0: jdbc:hive2://localhost:1> show resource plan global; > ++ > 
> | line                                                           |
> +----------------------------------------------------------------+
> | global[status=ACTIVE,parallelism=null,defaultPool=default]     |
> | default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
> +----------------------------------------------------------------+
> {code}
> {code:title=mysql}
> mysql> select * from wm_trigger;
> +------------+-------+-----------------+----------------------+-------------------+-----------------+
> | TRIGGER_ID | RP_ID | NAME            | TRIGGER_EXPRESSION   | ACTION_EXPRESSION | IS_IN_UNMANAGED |
> +------------+-------+-----------------+----------------------+-------------------+-----------------+
> |         29 |     1 | highly_parallel | TOTAL_TASKS > 40     | KILL              |                 |
> |         33 |     1 | big_hdfs_read   | HDFS_BYTES_READ > 30 | KILL              |                 |
> |         34 |     1 | slow_query      | EXECUTION_TIME > 10  | KILL              |                 |
> |         35 |     1 | some_spills     | SPILLED_RECORDS > 10 | KILL              |                 |
> +------------+-------+-----------------+----------------------+-------------------+-----------------+
> {code}
> From the above mysql table, IS_IN_UNMANAGED is not set and 'show resource plan global' is not showing triggers defined in unmanaged pool.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-18323: --- Attachment: HIVE-18323.08-branch-2.patch > Vectorization: add the support of timestamp in > VectorizedPrimitiveColumnReader for parquet > -- > > Key: HIVE-18323 > URL: https://issues.apache.org/jira/browse/HIVE-18323 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, > HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, > HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.1.patch > > > {noformat} > CREATE TABLE `t1`( > `ts` timestamp, > `s1` string) > STORED AS PARQUET; > set hive.vectorized.execution.enabled=true; > SELECT * from t1 SORT BY s1; > {noformat} > This query will throw exception since timestamp is not supported here yet. > {noformat} > Caused by: java.io.IOException: java.io.IOException: Unsupported type: > optional int96 ts > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-18323: --- Attachment: (was: HIVE-18323.08-branch-2.patch) > Vectorization: add the support of timestamp in > VectorizedPrimitiveColumnReader for parquet > -- > > Key: HIVE-18323 > URL: https://issues.apache.org/jira/browse/HIVE-18323 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, > HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, > HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.1.patch > > > {noformat} > CREATE TABLE `t1`( > `ts` timestamp, > `s1` string) > STORED AS PARQUET; > set hive.vectorized.execution.enabled=true; > SELECT * from t1 SORT BY s1; > {noformat} > This query will throw exception since timestamp is not supported here yet. > {noformat} > Caused by: java.io.IOException: java.io.IOException: Unsupported type: > optional int96 ts > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17983) Make the standalone metastore generate tarballs etc.
[ https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-17983: -- Attachment: (was: HIVE-17983.patch) > Make the standalone metastore generate tarballs etc. > > > Key: HIVE-17983 > URL: https://issues.apache.org/jira/browse/HIVE-17983 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Labels: pull-request-available > Attachments: HIVE-17983.patch > > > In order to be separately installable the standalone metastore needs its own > tarballs, startup scripts, etc. All of the SQL installation and upgrade > scripts also need to move from metastore to standalone-metastore. > I also plan to create Dockerfiles for different database types so that > developers can test the SQL installation and upgrade scripts. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17983) Make the standalone metastore generate tarballs etc.
[ https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-17983: -- Attachment: HIVE-17983.patch > Make the standalone metastore generate tarballs etc. > > > Key: HIVE-17983 > URL: https://issues.apache.org/jira/browse/HIVE-17983 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Labels: pull-request-available > Attachments: HIVE-17983.patch > > > In order to be separately installable the standalone metastore needs its own > tarballs, startup scripts, etc. All of the SQL installation and upgrade > scripts also need to move from metastore to standalone-metastore. > I also plan to create Dockerfiles for different database types so that > developers can test the SQL installation and upgrade scripts. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17396) Support DPP with map joins where the source and target belong in the same stage
[ https://issues.apache.org/jira/browse/HIVE-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329412#comment-16329412 ] Hive QA commented on HIVE-17396: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | 
{color:green} 0m 18s{color} | {color:green} The patch common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} ql: The patch generated 0 new + 21 unchanged - 2 fixed = 21 total (was 23) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 456a651 | | Default Java | 1.8.0_111 | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8661/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Support DPP with map joins where the source and target belong in the same > stage > --- > > Key: HIVE-17396 > URL: https://issues.apache.org/jira/browse/HIVE-17396 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-17396.1.patch, HIVE-17396.2.patch, > HIVE-17396.3.patch, HIVE-17396.4.patch, HIVE-17396.5.patch, > HIVE-17396.6.patch, HIVE-17396.7.patch, HIVE-17396.7.patch > > > When the target of a partition pruning sink operator is in not the same as > the target of hash table sink operator, both source and target gets scheduled > within the same spark job, and that can result in File Not Found Exception. > HIVE-17225 has a fix to disable DPP in that scenario. This JIRA is to > support DPP for such cases. > Test Case: > SET hive.spark.dynamic.partition.pruning=true; > SET hive.auto.convert.join=true; > SET hive.strict.checks.cartesian.product=false; > CREATE TABLE part_table1 (col int) PARTITIONED BY (part1_col int); > CREATE TABLE part_table2 (col int) PARTITIONED BY (part2_col int); > CREATE TABLE reg_table (col int); > ALTER TABLE part_table1 ADD PARTITION (part1_col = 1); > ALTER TABLE part_table2
[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329408#comment-16329408 ] Vihang Karajgaonkar commented on HIVE-18323: Added a branch-2 version of patch which removes INT64 handling. Since we don't support it even for non-vectorized query I don't see a reason to try to support for vectorized case. We should be consistent with non-vectorized version of the code path in terms of behavior. Note that the vectorized_parquet_types.q is hanging for me on branch-2 even without the patch when run using TestMiniLlapCliDriver.java. The test seems to be broken anyways on branch-2. So I could not update the q.out for llap for that q test. I will create a separate JIRA for that so that someone who is familiar with LLAP might help. The q.out for TestCliDriver works and is updated. > Vectorization: add the support of timestamp in > VectorizedPrimitiveColumnReader for parquet > -- > > Key: HIVE-18323 > URL: https://issues.apache.org/jira/browse/HIVE-18323 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, > HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, > HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.1.patch > > > {noformat} > CREATE TABLE `t1`( > `ts` timestamp, > `s1` string) > STORED AS PARQUET; > set hive.vectorized.execution.enabled=true; > SELECT * from t1 SORT BY s1; > {noformat} > This query will throw exception since timestamp is not supported here yet. 
> {noformat} > Caused by: java.io.IOException: java.io.IOException: Unsupported type: > optional int96 ts > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
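The unsupported `optional int96 ts` type in the stack trace above is Parquet's legacy timestamp encoding: 8 bytes of nanoseconds-of-day plus a 4-byte Julian day number. A minimal sketch of the decoding a vectorized reader needs is below; the epoch constants are the standard ones, but the class and method names are hypothetical, not the actual VectorizedPrimitiveColumnReader code.

```java
// Hypothetical sketch of int96 timestamp decoding. Parquet int96 stores
// nanoseconds within the day and a Julian day number; converting to epoch
// milliseconds is pure arithmetic once both components are read.
public class Int96TimestampSketch {
    // Julian day number of the Unix epoch (1970-01-01).
    private static final int JULIAN_EPOCH_OFFSET_DAYS = 2440588;
    private static final long MILLIS_PER_DAY = 86_400_000L;
    private static final long NANOS_PER_MILLI = 1_000_000L;

    // Convert the two int96 components to milliseconds since the epoch.
    public static long toEpochMillis(int julianDay, long nanosOfDay) {
        long dayMillis = (julianDay - JULIAN_EPOCH_OFFSET_DAYS) * MILLIS_PER_DAY;
        return dayMillis + nanosOfDay / NANOS_PER_MILLI;
    }

    public static void main(String[] args) {
        // The epoch's Julian day with zero nanos is 1970-01-01 00:00:00 UTC.
        System.out.println(toEpochMillis(2440588, 0L)); // prints 0
        System.out.println(toEpochMillis(2440589, 0L)); // prints 86400000
    }
}
```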
[jira] [Updated] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-18323: --- Attachment: HIVE-18323.08-branch-2.patch > Vectorization: add the support of timestamp in > VectorizedPrimitiveColumnReader for parquet > -- > > Key: HIVE-18323 > URL: https://issues.apache.org/jira/browse/HIVE-18323 > Project: Hive > Issue Type: Sub-task > Components: Vectorization >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, > HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, > HIVE-18323.07.patch, HIVE-18323.08-branch-2.patch, HIVE-18323.1.patch > > > {noformat} > CREATE TABLE `t1`( > `ts` timestamp, > `s1` string) > STORED AS PARQUET; > set hive.vectorized.execution.enabled=true; > SELECT * from t1 SORT BY s1; > {noformat} > This query will throw exception since timestamp is not supported here yet. > {noformat} > Caused by: java.io.IOException: java.io.IOException: Unsupported type: > optional int96 ts > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer
[ https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18460: -- Attachment: HIVE-18460.03.patch > Compactor doesn't pass Table properties to the Orc writer > - > > Key: HIVE-18460 > URL: https://issues.apache.org/jira/browse/HIVE-18460 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 0.13 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch, > HIVE-18460.03.patch > > > > CompactorMap.getWrite()/getDeleteEventWriter() both do > AcidOutputFormat.Options.tableProperties() but > OrcOutputFormat.getRawRecordWriter() does > {noformat} > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getConfiguration()); > {noformat} > which ignores tableProperties value. > It should do > {noformat} > final OrcFile.WriterOptions opts = > OrcFile.writerOptions(options.getTableProperties(), > options.getConfiguration()); > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
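The intent of the fix above, passing the table properties into the writer options instead of only the configuration, amounts to a precedence rule: a per-table setting should override the cluster-wide default. A minimal sketch of that resolution follows; the key names and the `resolve` helper are illustrative, not the actual OrcFile/AcidOutputFormat API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical sketch: resolve a writer option so that table-level
// properties take precedence over the job configuration default.
public class WriterOptionsSketch {
    public static String resolve(Properties tableProps,
                                 Map<String, String> conf,
                                 String key) {
        String fromTable = tableProps.getProperty(key);
        return fromTable != null ? fromTable : conf.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("orc.compress", "ZLIB"); // cluster-wide default

        Properties tableProps = new Properties();
        tableProps.setProperty("orc.compress", "SNAPPY"); // per-table setting

        // Ignoring tableProps (the bug) would yield ZLIB; honoring them
        // (the fix) yields the table's own setting.
        System.out.println(resolve(tableProps, conf, "orc.compress")); // prints SNAPPY
    }
}
```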
[jira] [Commented] (HIVE-18061) q.outs: be more selective with masking hdfs paths
[ https://issues.apache.org/jira/browse/HIVE-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329364#comment-16329364 ] Hive QA commented on HIVE-18061: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12906256/HIVE-18061.12.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 11624 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=178) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=94) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_16] (batchId=130) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket4] (batchId=140) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketmapjoin7] (batchId=119) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[disable_merge_for_bucketing] (batchId=141) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) 
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.hcatalog.mapreduce.TestHCatOutputFormat.testGetTableSchema (batchId=198)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8660/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8660/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8660/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12906256 - PreCommit-HIVE-Build

> q.outs: be more selective with masking hdfs paths
>
> Key: HIVE-18061
> URL: https://issues.apache.org/jira/browse/HIVE-18061
> Project: Hive
> Issue Type: Improvement
> Reporter: Zoltan Haindrich
> Assignee: Laszlo Bodor
> Priority: Major
> Attachments: HIVE-18061.01.patch, HIVE-18061.02.patch, HIVE-18061.03.patch, HIVE-18061.04.patch, HIVE-18061.05.patch, HIVE-18061.06.patch, HIVE-18061.07.patch, HIVE-18061.08.patch, HIVE-18061.09.patch, HIVE-18061.10.patch, HIVE-18061.11.patch, HIVE-18061.12.patch
>
> Currently, any line which contains a path that looks like an HDFS location is replaced with a "masked pattern was here"...
> It might be relevant to record these messages, since even an exception message might contain an HDFS location.
> Noticed in HIVE-18012

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
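To illustrate what "more selective" masking could mean in practice, here is a minimal, purely hypothetical Java sketch (not Hive's actual QTest masking code, and the class name, regex, and placeholder token are all assumptions): instead of dropping a whole output line when it contains an HDFS location, only the location itself is replaced, so the surrounding message — for example an exception text — survives into the q.out file.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of selective masking: replace only the HDFS-style URI
// inside a line, keeping the rest of the message intact.
public class SelectiveHdfsMasker {
    // Matches hdfs://, pfile://, or file:// style URIs up to the next whitespace.
    private static final Pattern HDFS_PATH =
        Pattern.compile("(hdfs|pfile|file)://?\\S+");

    public static String mask(String line) {
        // Only the matched URI is replaced; the surrounding text is preserved.
        return HDFS_PATH.matcher(line).replaceAll("### HDFS PATH ###");
    }

    public static void main(String[] args) {
        String line = "Caused by: java.io.IOException: missing file "
                + "hdfs://nn:8020/warehouse/t1/000000_0";
        // Prints the exception text with only the location masked.
        System.out.println(mask(line));
    }
}
```

With line-level masking, the exception message above would disappear from the q.out entirely; with token-level masking it remains diffable across environments.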
[jira] [Updated] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18472: --- Attachment: HIVE-18472.1.patch > Beeline gives log4j warnings > > > Key: HIVE-18472 > URL: https://issues.apache.org/jira/browse/HIVE-18472 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18472.1.patch > > > Starting Beeline gives the following warnings multiple times: > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. Set system property > 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show > Log4j2 internal initialization logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18472: --- Fix Version/s: 3.0.0 Status: Patch Available (was: Open) > Beeline gives log4j warnings > > > Key: HIVE-18472 > URL: https://issues.apache.org/jira/browse/HIVE-18472 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > > Starting Beeline gives the following warnings multiple times: > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. Set system property > 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show > Log4j2 internal initialization logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16605) Enforce NOT NULL constraints
[ https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329343#comment-16329343 ] Vineet Garg commented on HIVE-16605: There was a typo in testconfiguration in earlier patch which probably caused all of these failures. Retrying the patch. > Enforce NOT NULL constraints > > > Key: HIVE-16605 > URL: https://issues.apache.org/jira/browse/HIVE-16605 > Project: Hive > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Carter Shanklin >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-16605.1.patch, HIVE-16605.2.patch > > > Since NOT NULL is so common it would be great to have tables start to enforce > that. > [~ekoifman] described a possible approach in HIVE-16575: > {quote} > One way to enforce not null constraint is to have the optimizer add > enforce_not_null UDF which throws if it sees a NULL, otherwise it's pass > through. > So if 'b' has not null constraint, > Insert into T select a,b,c... would become > Insert into T select a, enforce_not_null(b), c. > This would work for any table type. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16605) Enforce NOT NULL constraints
[ https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-16605: --- Status: Patch Available (was: Open) > Enforce NOT NULL constraints > > > Key: HIVE-16605 > URL: https://issues.apache.org/jira/browse/HIVE-16605 > Project: Hive > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Carter Shanklin >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-16605.1.patch, HIVE-16605.2.patch > > > Since NOT NULL is so common it would be great to have tables start to enforce > that. > [~ekoifman] described a possible approach in HIVE-16575: > {quote} > One way to enforce not null constraint is to have the optimizer add > enforce_not_null UDF which throws if it sees a NULL, otherwise it's pass > through. > So if 'b' has not null constraint, > Insert into T select a,b,c... would become > Insert into T select a, enforce_not_null(b), c. > This would work for any table type. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16605) Enforce NOT NULL constraints
[ https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-16605: --- Attachment: HIVE-16605.2.patch > Enforce NOT NULL constraints > > > Key: HIVE-16605 > URL: https://issues.apache.org/jira/browse/HIVE-16605 > Project: Hive > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Carter Shanklin >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-16605.1.patch, HIVE-16605.2.patch > > > Since NOT NULL is so common it would be great to have tables start to enforce > that. > [~ekoifman] described a possible approach in HIVE-16575: > {quote} > One way to enforce not null constraint is to have the optimizer add > enforce_not_null UDF which throws if it sees a NULL, otherwise it's pass > through. > So if 'b' has not null constraint, > Insert into T select a,b,c... would become > Insert into T select a, enforce_not_null(b), c. > This would work for any table type. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16605) Enforce NOT NULL constraints
[ https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-16605: --- Status: Open (was: Patch Available) > Enforce NOT NULL constraints > > > Key: HIVE-16605 > URL: https://issues.apache.org/jira/browse/HIVE-16605 > Project: Hive > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Carter Shanklin >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-16605.1.patch, HIVE-16605.2.patch > > > Since NOT NULL is so common it would be great to have tables start to enforce > that. > [~ekoifman] described a possible approach in HIVE-16575: > {quote} > One way to enforce not null constraint is to have the optimizer add > enforce_not_null UDF which throws if it sees a NULL, otherwise it's pass > through. > So if 'b' has not null constraint, > Insert into T select a,b,c... would become > Insert into T select a, enforce_not_null(b), c. > This would work for any table type. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
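The pass-through-or-throw semantics quoted from HIVE-16575 can be sketched in plain Java as follows. This is only an illustration of the behavior, not Hive's actual UDF implementation; the class and method names are made up for the example.

```java
// Sketch of the semantics described above: enforce_not_null(x) passes x
// through unchanged, but throws if x is NULL. The optimizer could then
// rewrite "INSERT INTO T SELECT a, b, c" into
// "INSERT INTO T SELECT a, enforce_not_null(b), c" when column b is NOT NULL.
public class EnforceNotNull {
    public static <T> T enforceNotNull(T value, String columnName) {
        if (value == null) {
            // In Hive this would surface as a runtime error failing the insert.
            throw new RuntimeException(
                "NOT NULL constraint violated for column: " + columnName);
        }
        return value; // pass-through for non-null values
    }

    public static void main(String[] args) {
        // Prints "abc": non-null values flow through untouched.
        System.out.println(enforceNotNull("abc", "b"));
    }
}
```

Because the check runs per row at insert time, this approach works for any table type, at the cost of a function call on every constrained column value.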
[jira] [Updated] (HIVE-17833) Publish split generation counters
[ https://issues.apache.org/jira/browse/HIVE-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-17833: - Attachment: HIVE-17833.8.patch > Publish split generation counters > - > > Key: HIVE-17833 > URL: https://issues.apache.org/jira/browse/HIVE-17833 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-17833.1.patch, HIVE-17833.2.patch, > HIVE-17833.3.patch, HIVE-17833.4.patch, HIVE-17833.5.patch, > HIVE-17833.6.patch, HIVE-17833.7.patch, HIVE-17833.8.patch > > > With TEZ-3856, tez counters are exposed via input initializers which can be > used to publish split generation counters. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17833) Publish split generation counters
[ https://issues.apache.org/jira/browse/HIVE-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329305#comment-16329305 ] Sergey Shelukhin commented on HIVE-17833: - +1 pending tests, one small nit on RB > Publish split generation counters > - > > Key: HIVE-17833 > URL: https://issues.apache.org/jira/browse/HIVE-17833 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-17833.1.patch, HIVE-17833.2.patch, > HIVE-17833.3.patch, HIVE-17833.4.patch, HIVE-17833.5.patch, > HIVE-17833.6.patch, HIVE-17833.7.patch > > > With TEZ-3856, tez counters are exposed via input initializers which can be > used to publish split generation counters. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17833) Publish split generation counters
[ https://issues.apache.org/jira/browse/HIVE-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329303#comment-16329303 ] Prasanth Jayachandran commented on HIVE-17833: -- [~sershe] can you please take a look at the new changes? > Publish split generation counters > - > > Key: HIVE-17833 > URL: https://issues.apache.org/jira/browse/HIVE-17833 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-17833.1.patch, HIVE-17833.2.patch, > HIVE-17833.3.patch, HIVE-17833.4.patch, HIVE-17833.5.patch, > HIVE-17833.6.patch, HIVE-17833.7.patch > > > With TEZ-3856, tez counters are exposed via input initializers which can be > used to publish split generation counters. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17833) Publish split generation counters
[ https://issues.apache.org/jira/browse/HIVE-17833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-17833: - Attachment: HIVE-17833.7.patch > Publish split generation counters > - > > Key: HIVE-17833 > URL: https://issues.apache.org/jira/browse/HIVE-17833 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-17833.1.patch, HIVE-17833.2.patch, > HIVE-17833.3.patch, HIVE-17833.4.patch, HIVE-17833.5.patch, > HIVE-17833.6.patch, HIVE-17833.7.patch > > > With TEZ-3856, tez counters are exposed via input initializers which can be > used to publish split generation counters. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani reassigned HIVE-18472: -- Assignee: Janaki Lahorani > Beeline gives log4j warnings > > > Key: HIVE-18472 > URL: https://issues.apache.org/jira/browse/HIVE-18472 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > > Starting Beeline gives the following warnings multiple times: > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. Set system property > 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show > Log4j2 internal initialization logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
[ https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329266#comment-16329266 ]

Aihua Xu commented on HIVE-18323:

Yeah. Right now we can't write timestamp as INT64 from Hive. [~spena] Do we need to support reading INT64, since other components can write INT64 timestamps?

> Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
> Issue Type: Sub-task
> Components: Vectorization
> Affects Versions: 3.0.0
> Reporter: Aihua Xu
> Assignee: Vihang Karajgaonkar
> Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, HIVE-18323.07.patch, HIVE-18323.1.patch
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception, since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: optional int96 ts
> at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
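For background on the "Unsupported type: optional int96 ts" error in the issue above: a Parquet INT96 timestamp is a 12-byte value — 8 little-endian bytes of nanoseconds within the day followed by 4 little-endian bytes holding the Julian day number. The sketch below shows one way to decode that layout; it is only an illustration of the encoding (the class and method names are made up), not the code from the vectorized reader patch.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustrative decoder for Parquet's INT96 timestamp layout:
// bytes 0-7: nanoseconds within the day (little-endian long)
// bytes 8-11: Julian day number (little-endian int)
public class Int96Timestamp {
    private static final long JULIAN_EPOCH_DAY = 2440588L; // Julian day of 1970-01-01
    private static final long MILLIS_PER_DAY = 24L * 60 * 60 * 1000;

    // Returns milliseconds since the Unix epoch.
    public static long toEpochMillis(byte[] int96) {
        ByteBuffer buf = ByteBuffer.wrap(int96).order(ByteOrder.LITTLE_ENDIAN);
        long nanosOfDay = buf.getLong();
        long julianDay = Integer.toUnsignedLong(buf.getInt());
        return (julianDay - JULIAN_EPOCH_DAY) * MILLIS_PER_DAY
                + nanosOfDay / 1_000_000L;
    }
}
```

The INT64 question in the comment refers to Parquet's alternative TIMESTAMP_MILLIS/TIMESTAMP_MICROS encodings, which store a single epoch-relative integer instead of this split day/nanos pair.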