[jira] [Commented] (HIVE-19481) sample10.q returns wrong results
[ https://issues.apache.org/jira/browse/HIVE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523232#comment-16523232 ]

Hive QA commented on HIVE-19481:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929066/HIVE-19481.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:green}SUCCESS:{color} +1 due to 14607 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12142/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12142/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12142/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929066 - PreCommit-HIVE-Build

> sample10.q returns wrong results
> --------------------------------
>
> Key: HIVE-19481
> URL: https://issues.apache.org/jira/browse/HIVE-19481
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Affects Versions: 3.0.0
> Reporter: Steve Yeom
> Assignee: Deepak Jaiswal
> Priority: Major
> Fix For: 3.1.0, 4.0.0
>
> Attachments: HIVE-19481.1.patch, HIVE-19481.2.patch, HIVE-19481.3.patch, HIVE-19481.4.patch
>
>
> Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q" after changing the table to be insert-only transactional.
> The following queries return a couple of rows, whereas no rows are returned for the non-ACID table.
> query: select ds, count(1) from srcpartbucket tablesample (bucket 2 out of 4 on key) where ds is not null group by ds ORDER BY ds ASC
> 2008-04-08 14
> 2008-04-09 14
> ..
> query: select ds, count(1) from srcpartbucket tablesample (bucket 1 out of 2 on key) where ds is not null group by ds ORDER BY ds ASC
> 2008-04-08 4
> 2008-04-09 4

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
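The TABLESAMPLE (BUCKET x OUT OF y) pruning at issue can be sketched in Python. This is an illustrative model with a toy hash function and hypothetical names, not Hive's actual bucketing code (Hive uses Java hashCode-based bucket numbers):

```python
def bucket_for(key, num_buckets):
    # Toy stand-in for Hive's bucket hash; real Hive uses Java hashCode semantics.
    return sum(ord(c) for c in str(key)) % num_buckets

def tablesample(rows, x, y, num_buckets):
    # BUCKET x OUT OF y keeps rows whose bucket number modulo y equals x - 1.
    # A reader that trusts the on-disk file layout instead of recomputing the
    # hash can return extra rows when rows were written to the wrong bucket
    # files, which matches the wrong-results symptom reported above.
    return [r for r in rows if bucket_for(r, num_buckets) % y == x - 1]

rows = ["k%d" % i for i in range(16)]
sampled = tablesample(rows, 1, 2, 4)
# Every sampled row must hash to an even bucket for BUCKET 1 OUT OF 2.
assert all(bucket_for(r, 4) % 2 == 0 for r in sampled)
```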
[jira] [Updated] (HIVE-18545) Add UDF to parse complex types from json
[ https://issues.apache.org/jira/browse/HIVE-18545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zoltan Haindrich updated HIVE-18545:
------------------------------------
    Attachment: HIVE-18545.06.patch

> Add UDF to parse complex types from json
> ----------------------------------------
>
> Key: HIVE-18545
> URL: https://issues.apache.org/jira/browse/HIVE-18545
> Project: Hive
> Issue Type: Improvement
> Affects Versions: 4.0.0
> Reporter: Zoltan Haindrich
> Assignee: Zoltan Haindrich
> Priority: Major
>
> Attachments: HIVE-18545.02.patch, HIVE-18545.03.patch, HIVE-18545.04.patch, HIVE-18545.05.patch, HIVE-18545.06.patch, HIVE-18545.06.patch
>
[jira] [Commented] (HIVE-18545) Add UDF to parse complex types from json
[ https://issues.apache.org/jira/browse/HIVE-18545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523230#comment-16523230 ]

Zoltan Haindrich commented on HIVE-18545:
-----------------------------------------

[~ashutoshc] could you please take a look?

> Add UDF to parse complex types from json
> ----------------------------------------
>
> Key: HIVE-18545
> URL: https://issues.apache.org/jira/browse/HIVE-18545
> Project: Hive
> Issue Type: Improvement
> Affects Versions: 4.0.0
> Reporter: Zoltan Haindrich
> Assignee: Zoltan Haindrich
> Priority: Major
>
> Attachments: HIVE-18545.02.patch, HIVE-18545.03.patch, HIVE-18545.04.patch, HIVE-18545.05.patch, HIVE-18545.06.patch, HIVE-18545.06.patch
>
[jira] [Updated] (HIVE-19326) stats auto gather: incorrect aggregation during UNION queries (may lead to incorrect results)
[ https://issues.apache.org/jira/browse/HIVE-19326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zoltan Haindrich updated HIVE-19326:
------------------------------------
    Attachment: HIVE-19326.09.patch

> stats auto gather: incorrect aggregation during UNION queries (may lead to incorrect results)
> ---------------------------------------------------------------------------------------------
>
> Key: HIVE-19326
> URL: https://issues.apache.org/jira/browse/HIVE-19326
> Project: Hive
> Issue Type: Bug
> Components: Statistics
> Reporter: Sergey Shelukhin
> Assignee: Zoltan Haindrich
> Priority: Critical
>
> Attachments: HIVE-19326.01wip01.patch, HIVE-19326.02.patch, HIVE-19326.03.patch, HIVE-19326.04.patch, HIVE-19326.05.patch, HIVE-19326.06.patch, HIVE-19326.06wip01.patch, HIVE-19326.06wip02.patch, HIVE-19326.06wip03.patch, HIVE-19326.06wip04.patch, HIVE-19326.06wip05.patch, HIVE-19326.07.patch, HIVE-19326.08.patch, HIVE-19326.09.patch
>
>
> Found when investigating the results change after converting tables to MM; it turns out the MM result is correct but the current one is not.
> The test ends like so:
> {noformat}
> desc formatted small_alltypesorc_a;
> ANALYZE TABLE small_alltypesorc_a COMPUTE STATISTICS;
> desc formatted small_alltypesorc_a;
> insert into table small_alltypesorc_a select * from small_alltypesorc1a;
> desc formatted small_alltypesorc_a;
> {noformat}
> The results from the descs in the golden file are:
> {noformat}
> COLUMN_STATS_ACCURATE {\"BASIC_STATS\":\"true\"}
> numFiles 1
> numRows 5
> ...
> COLUMN_STATS_ACCURATE {\"BASIC_STATS\":\"true\"}
> numFiles 1
> numRows 15
> ...
> COLUMN_STATS_ACCURATE {\"BASIC_STATS\":\"true\"}
> numFiles 2
> numRows 20
> {noformat}
> Note the result change after analyze - the original numRows is inaccurate, but BASIC_STATS is set to true.
> I am assuming that with the metadata-only optimization this can produce incorrect results.
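The aggregation implied by the golden file above can be checked with a small sketch (the numbers are taken from the quoted desc output; merge_stats is an illustrative function, not Hive's stats code):

```python
def merge_stats(existing, delta):
    # For an INSERT INTO, basic stats must be summed with the existing values;
    # overwriting them instead (the suspected bug class) would lose the
    # pre-insert row count.
    return {"numFiles": existing["numFiles"] + delta["numFiles"],
            "numRows": existing["numRows"] + delta["numRows"]}

before = {"numFiles": 1, "numRows": 15}  # after ANALYZE, per the golden file
delta = {"numFiles": 1, "numRows": 5}    # rows inserted from small_alltypesorc1a
after = merge_stats(before, delta)
assert after == {"numFiles": 2, "numRows": 20}
```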
[jira] [Updated] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prasanth Jayachandran updated HIVE-19980:
-----------------------------------------
    Resolution: Fixed
    Fix Version/s: 4.0.0
                   3.1.0
    Status: Resolved (was: Patch Available)

Committed to branch-3 and master. Thanks for the review!

> GenericUDTFGetSplits fails when order by query returns 0 rows
> -------------------------------------------------------------
>
> Key: HIVE-19980
> URL: https://issues.apache.org/jira/browse/HIVE-19980
> Project: Hive
> Issue Type: Bug
> Affects Versions: 3.1.0, 4.0.0
> Reporter: Kshitij Badani
> Assignee: Prasanth Jayachandran
> Priority: Major
> Fix For: 3.1.0, 4.0.0
>
> Attachments: HIVE-19980.1.patch, HIVE-19980.2.patch, HIVE-19980.3.patch
>
>
> When an order by query returns 0 rows, there will not be any files in the temporary table location for GenericUDTFGetSplits, which results in the following exception:
> {code:java}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
> at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:217)
> at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:420)
> ... 52 more{code}
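The shape of the fix suggested by the stack trace can be sketched in Python (hypothetical names; the real change is in Hive's Java split-generation code):

```python
def get_splits(result_files):
    # An ORDER BY query that returns 0 rows leaves no files in the temporary
    # table location; indexing result_files[0] unconditionally is the Python
    # analogue of the ArrayIndexOutOfBoundsException quoted above.
    if not result_files:
        return []  # hand back an empty split set instead of failing
    return [("split", f) for f in result_files]

assert get_splits([]) == []
assert get_splits(["part-0"]) == [("split", "part-0")]
```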
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523179#comment-16523179 ]

Matt McCline commented on HIVE-19951:
-------------------------------------

Follow-on is HIVE-19992.

> Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
> -------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-19951
> URL: https://issues.apache.org/jira/browse/HIVE-19951
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Reporter: Matt McCline
> Assignee: Matt McCline
> Priority: Critical
>
> Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, HIVE-19951.06.patch
>
>
> Currently, reading encoded ORC data does not support data type conversion. So, encoded reading and cache populating need to be disabled.
[jira] [Assigned] (HIVE-19992) Vectorization: Follow-on to HIVE-19951 --> add call to SchemaEvolution.isOnlyImplicitConversion to disable encoded LLAP I/O for ORC only when data type conversion is not implicit
[ https://issues.apache.org/jira/browse/HIVE-19992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline reassigned HIVE-19992:
-----------------------------------

> Vectorization: Follow-on to HIVE-19951 --> add call to SchemaEvolution.isOnlyImplicitConversion to disable encoded LLAP I/O for ORC only when data type conversion is not implicit
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-19992
> URL: https://issues.apache.org/jira/browse/HIVE-19992
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Reporter: Matt McCline
> Assignee: Matt McCline
> Priority: Critical
>
> Once ORC-380, which adds the SchemaEvolution.isOnlyImplicitConversion call, is available in the ORC release used by Apache master (and branch-3), update LlapRecordReader (see comments in the HIVE-19951 change).
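The gating this follow-on describes can be sketched as follows. The conversion list below is an assumption for illustration only; the authoritative check is ORC's SchemaEvolution.isOnlyImplicitConversion added by ORC-380:

```python
# Assumed examples of implicit (widening) conversions; the real set is
# defined inside ORC's SchemaEvolution, not here.
IMPLICIT_CONVERSIONS = {("smallint", "int"), ("int", "bigint"), ("float", "double")}

def disable_encoded_io(file_type, reader_type):
    # Keep encoded LLAP I/O enabled when the types match or the conversion is
    # implicit; disable it only for genuine (non-implicit) type conversions.
    if file_type == reader_type:
        return False
    return (file_type, reader_type) not in IMPLICIT_CONVERSIONS

assert disable_encoded_io("int", "int") is False
assert disable_encoded_io("int", "bigint") is False
assert disable_encoded_io("string", "int") is True
```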
[jira] [Updated] (HIVE-19812) Disable external table replication by default via a configuration property
[ https://issues.apache.org/jira/browse/HIVE-19812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

mahesh kumar behera updated HIVE-19812:
---------------------------------------
    Attachment: HIVE-19812.09.patch

> Disable external table replication by default via a configuration property
> --------------------------------------------------------------------------
>
> Key: HIVE-19812
> URL: https://issues.apache.org/jira/browse/HIVE-19812
> Project: Hive
> Issue Type: Task
> Components: repl
> Affects Versions: 3.1.0, 4.0.0
> Reporter: mahesh kumar behera
> Assignee: mahesh kumar behera
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.1.0, 4.0.0
>
> Attachments: HIVE-19812.01.patch, HIVE-19812.02.patch, HIVE-19812.03.patch, HIVE-19812.04.patch, HIVE-19812.05.patch, HIVE-19812.06-branch-3.patch, HIVE-19812.06.patch, HIVE-19812.07.patch, HIVE-19812.08.patch, HIVE-19812.09.patch
>
>
> Use a Hive config property to control external table replication; set the property to false by default to prevent external table replication.
> For metadata-only replication, Hive repl always exports metadata for external tables.
>
> REPL_DUMP_EXTERNAL_TABLES("hive.repl.dump.include.external.tables", false,
> "Indicates if repl dump should include information about external tables. It should be \n"
> + "used in conjunction with 'hive.repl.dump.metadata.only' set to false. if 'hive.repl.dump.metadata.only' \n"
> + " is set to true then this config parameter has no effect as external table meta data is flushed \n"
> + " always by default.")
>
> This should be done only for replication dump, not for export.
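The behaviour spelled out in the config description above can be sketched as a small truth table (illustrative function, not Hive's repl code):

```python
def should_dump_external_table(include_external, metadata_only):
    # Per the description: with hive.repl.dump.metadata.only=true the
    # include-external flag has no effect, since external-table metadata is
    # always dumped; otherwise the flag (false by default) gates the dump.
    if metadata_only:
        return True
    return include_external

assert should_dump_external_table(False, True) is True    # metadata always dumped
assert should_dump_external_table(False, False) is False  # new default: skipped
assert should_dump_external_table(True, False) is True    # explicitly enabled
```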
[jira] [Commented] (HIVE-19991) msck repair table command not able to retrieve archived data.
[ https://issues.apache.org/jira/browse/HIVE-19991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523172#comment-16523172 ]

Manpreet Singh commented on HIVE-19991:
---------------------------------------

This can be worked around by setting the location or using "alter table .. add partition .. location" instead of "msck".

> msck repair table command not able to retrieve archived data.
> --------------------------------------------------------------
>
> Key: HIVE-19991
> URL: https://issues.apache.org/jira/browse/HIVE-19991
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.1.0
> Reporter: Manpreet Singh
> Priority: Major
>
> Observed an issue when a customer ran the "msck repair tablename" command over archived data (data copied via hadoop fs -cp from another location): the command did not load the partitions into the table and showed zero results.
>
> Complete steps for both archived and unarchived data: copied a table's partition data to another location, defined a similar table structure over the new location, ran "msck repair table" on the destination, and then "select * from table" produced zero results. The same procedure works for non-archived data.
>
> 1. Created two tables with similar structure and different locations.
> create table sau_test1 ( a int, b string) partitioned by (dt string) stored as parquet location '/user/hive/warehouse/sau_test1';
> create table sau_arch ( a int, b string) partitioned by (dt string) stored as parquet location '/user/hive/warehouse/sau_arch';
> 2. Inserted data into the source table.
> insert into sau_test1 partition(dt='dt1') select 1,'A1';
> select * from sau_test1;
> +-------------+-------------+--------------+
> | sau_test1.a | sau_test1.b | sau_test1.dt |
> +-------------+-------------+--------------+
> | 1           | A1          | dt1          |
> +-------------+-------------+--------------+
> 3. Copied the contents of the source directory to the destination directory.
> hadoop fs -cp /user/hive/warehouse/sau_test1/* /user/hive/warehouse/sau_arch/
> 4. Ran msck repair table and checked the results -- for unarchived data:
> msck repair table sau_arch;
> select * from sau_arch;
> +------------+------------+-------------+
> | sau_arch.a | sau_arch.b | sau_arch.dt |
> +------------+------------+-------------+
> | 1          | A1         | dt1         |
> +------------+------------+-------------+
> 5. The customer wants the same functionality for archived data and hence tried the steps below.
> a) Dropped the table partition in the destination table: "alter table sau_arch drop partition(dt='dt1');"
> b) set hive.archive.enabled=true;
> alter table sau_test1 archive partition (dt='dt1');
> c) Copied the HDFS files from the source table to the destination table.
> hdfs dfs -ls /user/hive/warehouse/sau_test1/dt=dt1/
> drwxr-xr-x - hive supergroup 0 2018-06-08 13:26 /user/hive/warehouse/sau_test1/dt=dt1/data.har
> -rw-r--r-- 3 hive supergroup 0 2018-06-08 13:26 /user/hive/warehouse/sau_test1/dt=dt1/data.har/_SUCCESS
> -rw-r--r-- 3 hive supergroup 305 2018-06-08 13:26 /user/hive/warehouse/sau_test1/dt=dt1/data.har/_index
> -rw-r--r-- 3 hive supergroup 23 2018-06-08 13:26 /user/hive/warehouse/sau_test1/dt=dt1/data.har/_masterindex
> -rw-r--r-- 3 hive supergroup 286 2018-06-08 13:26 /user/hive/warehouse/sau_test1/dt=dt1/data.har/part-0
> $ hdfs dfs -ls /user/hive/warehouse/sau_arch/dt=dt1/
> drwxr-xr-x - ngdb supergroup 0 2018-06-08 13:27 /user/hive/warehouse/sau_arch/dt=dt1/data.har
> -rw-r--r-- 3 ngdb supergroup 0 2018-06-08 13:27 /user/hive/warehouse/sau_arch/dt=dt1/data.har/_SUCCESS
> -rw-r--r-- 3 ngdb supergroup 305 2018-06-08 13:27 /user/hive/warehouse/sau_arch/dt=dt1/data.har/_index
> -rw-r--r-- 3 ngdb supergroup 23 2018-06-08 13:27 /user/hive/warehouse/sau_arch/dt=dt1/data.har/_masterindex
> -rw-r--r-- 3 ngdb supergroup 286 2018-06-08 13:27 /user/hive/warehouse/sau_arch/dt=dt1/data.har/part-0
> d) msck repair table sau_arch;
> e) select * from sau_arch; -- no results shown
> +------------+------------+-------------+
> | sau_arch.a | sau_arch.b | sau_arch.dt |
> +------------+------------+-------------+
> +------------+------------+-------------+
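The "alter table .. add partition .. location" workaround mentioned in the comment can be sketched as a DDL generator. The helper is hypothetical, and the har:// URI form shown is an assumption that should be checked against the cluster's actual archive layout:

```python
def add_partition_ddl(table, part_spec, hdfs_dir):
    # Point the partition at the Hadoop archive explicitly instead of relying
    # on msck, which does not look inside data.har (the failure seen above).
    location = "har://%s/data.har" % hdfs_dir
    spec = ", ".join("%s='%s'" % kv for kv in sorted(part_spec.items()))
    return "ALTER TABLE %s ADD PARTITION (%s) LOCATION '%s';" % (table, spec, location)

ddl = add_partition_ddl("sau_arch", {"dt": "dt1"}, "/user/hive/warehouse/sau_arch/dt=dt1")
assert ddl == ("ALTER TABLE sau_arch ADD PARTITION (dt='dt1') "
               "LOCATION 'har:///user/hive/warehouse/sau_arch/dt=dt1/data.har';")
```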
[jira] [Commented] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523164#comment-16523164 ]

Hive QA commented on HIVE-19980:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929123/HIVE-19980.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:green}SUCCESS:{color} +1 due to 14606 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12141/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12141/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12141/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929123 - PreCommit-HIVE-Build

> GenericUDTFGetSplits fails when order by query returns 0 rows
> -------------------------------------------------------------
>
> Key: HIVE-19980
> URL: https://issues.apache.org/jira/browse/HIVE-19980
> Project: Hive
> Issue Type: Bug
> Affects Versions: 3.1.0, 4.0.0
> Reporter: Kshitij Badani
> Assignee: Prasanth Jayachandran
> Priority: Major
>
> Attachments: HIVE-19980.1.patch, HIVE-19980.2.patch, HIVE-19980.3.patch
>
>
> When order by query returns 0 rows, there will not be any files in temporary table location for GenericUDTFGetSplits which results in the following exception:
> {code:java}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
> at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:217)
> at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:420)
> ... 52 more{code}
[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523160#comment-16523160 ]

Sergey Shelukhin commented on HIVE-19532:
-----------------------------------------

I committed a few test fixes to the branch and merged again. Looks like some out file changes are valid with new stats (e.g. mm_all). [~steveyeom2017] can you double check? See the list from the last run for CliDriver/etc tests, it's not that many tests now. Unless one of my fixes broke something... but the old list should be good.

> fix tests for master-txnstats branch
> ------------------------------------
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Affects Versions: 3.0.0
> Reporter: Steve Yeom
> Assignee: Steve Yeom
> Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, HIVE-19532.07.patch, HIVE-19532.08.patch
>
[jira] [Assigned] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin reassigned HIVE-19532:
---------------------------------------
    Assignee: Steve Yeom (was: Sergey Shelukhin)

> fix tests for master-txnstats branch
> ------------------------------------
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Affects Versions: 3.0.0
> Reporter: Steve Yeom
> Assignee: Steve Yeom
> Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, HIVE-19532.07.patch, HIVE-19532.08.patch
>
[jira] [Assigned] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin reassigned HIVE-19532:
---------------------------------------
    Assignee: Sergey Shelukhin (was: Steve Yeom)

> fix tests for master-txnstats branch
> ------------------------------------
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Affects Versions: 3.0.0
> Reporter: Steve Yeom
> Assignee: Sergey Shelukhin
> Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, HIVE-19532.07.patch, HIVE-19532.08.patch
>
[jira] [Updated] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin updated HIVE-19532:
------------------------------------
    Attachment: HIVE-19532.08.patch

> fix tests for master-txnstats branch
> ------------------------------------
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
> Issue Type: Sub-task
> Components: Transactions
> Affects Versions: 3.0.0
> Reporter: Steve Yeom
> Assignee: Sergey Shelukhin
> Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, HIVE-19532.07.patch, HIVE-19532.08.patch
>
[jira] [Commented] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523135#comment-16523135 ]

Prasanth Jayachandran commented on HIVE-19951:
----------------------------------------------

+1, pending tests. Could you also create a followup Jira and link it here so that when orc-1.5.2 is released this redundant method can be removed?

> Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
> -------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-19951
> URL: https://issues.apache.org/jira/browse/HIVE-19951
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Reporter: Matt McCline
> Assignee: Matt McCline
> Priority: Critical
>
> Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, HIVE-19951.06.patch
>
>
> Currently, reading encoded ORC data does not support data type conversion. So, encoded reading and cache populating need to be disabled.
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19951:
--------------------------------
    Status: Patch Available (was: In Progress)

> Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
> -------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-19951
> URL: https://issues.apache.org/jira/browse/HIVE-19951
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Reporter: Matt McCline
> Assignee: Matt McCline
> Priority: Critical
>
> Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, HIVE-19951.06.patch
>
>
> Currently, reading encoded ORC data does not support data type conversion. So, encoded reading and cache populating need to be disabled.
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19951:
--------------------------------
    Attachment: HIVE-19951.06.patch

> Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
> -------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-19951
> URL: https://issues.apache.org/jira/browse/HIVE-19951
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Reporter: Matt McCline
> Assignee: Matt McCline
> Priority: Critical
>
> Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch, HIVE-19951.06.patch
>
>
> Currently, reading encoded ORC data does not support data type conversion. So, encoded reading and cache populating need to be disabled.
[jira] [Updated] (HIVE-19951) Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
[ https://issues.apache.org/jira/browse/HIVE-19951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19951:
--------------------------------
    Status: In Progress (was: Patch Available)

> Vectorization: Need to disable encoded LLAP I/O for ORC when there is data type conversion (Schema Evolution)
> -------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-19951
> URL: https://issues.apache.org/jira/browse/HIVE-19951
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Reporter: Matt McCline
> Assignee: Matt McCline
> Priority: Critical
>
> Attachments: HIVE-19951.01.patch, HIVE-19951.02.patch, HIVE-19951.03.patch, HIVE-19951.04.patch, HIVE-19951.05.patch
>
>
> Currently, reading encoded ORC data does not support data type conversion. So, encoded reading and cache populating need to be disabled.
[jira] [Updated] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Prasanth Jayachandran updated HIVE-19980:
-----------------------------------------
    Attachment: HIVE-19980.3.patch

> GenericUDTFGetSplits fails when order by query returns 0 rows
> -------------------------------------------------------------
>
> Key: HIVE-19980
> URL: https://issues.apache.org/jira/browse/HIVE-19980
> Project: Hive
> Issue Type: Bug
> Affects Versions: 3.1.0, 4.0.0
> Reporter: Kshitij Badani
> Assignee: Prasanth Jayachandran
> Priority: Major
>
> Attachments: HIVE-19980.1.patch, HIVE-19980.2.patch, HIVE-19980.3.patch
>
>
> When order by query returns 0 rows, there will not be any files in temporary table location for GenericUDTFGetSplits which results in the following exception:
> {code:java}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
> at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:217)
> at org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:420)
> ... 52 more{code}
[jira] [Updated] (HIVE-19890) ACID: Inherit bucket-id from original ROW_ID for delete deltas
[ https://issues.apache.org/jira/browse/HIVE-19890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gopal V updated HIVE-19890:
---------------------------
    Resolution: Fixed
    Fix Version/s: 3.1.0
    Status: Resolved (was: Patch Available)

> ACID: Inherit bucket-id from original ROW_ID for delete deltas
> --------------------------------------------------------------
>
> Key: HIVE-19890
> URL: https://issues.apache.org/jira/browse/HIVE-19890
> Project: Hive
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 3.0.0
> Reporter: Gopal V
> Assignee: Gopal V
> Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19890.1.patch, HIVE-19890.2.patch, HIVE-19890.3.patch, HIVE-19890.4-branch-3.patch
>
>
> The ACID delete deltas for unbucketed tables are written to arbitrary files; they should instead be shuffled using the bucket-id rather than hash(ROW__ID).
[jira] [Updated] (HIVE-18434) Type is not determined correctly for comparison between decimal column and string constant
[ https://issues.apache.org/jira/browse/HIVE-18434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-18434:
------------------------------------
    Resolution: Fixed
    Fix Version/s: 3.1.0
    Status: Resolved (was: Patch Available)

Pushed to master and branch-3.

> Type is not determined correctly for comparison between decimal column and string constant
> ------------------------------------------------------------------------------------------
>
> Key: HIVE-18434
> URL: https://issues.apache.org/jira/browse/HIVE-18434
> Project: Hive
> Issue Type: Bug
> Components: Types
> Reporter: Ashutosh Chauhan
> Assignee: Ashutosh Chauhan
> Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18434.patch
>
[jira] [Commented] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523044#comment-16523044 ]

Hive QA commented on HIVE-12192:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12929098/HIVE-12192.26.patch

{color:green}SUCCESS:{color} +1 due to 80 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 14590 tests executed

*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=258)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=258)
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=258)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cast_on_constant] (batchId=25)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_13] (batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_7] (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_13] (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_7] (batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_timestamp] (batchId=81)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_interval_2] (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_13] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_7] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_timestamp] (batchId=175)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_13] (batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_7] (batchId=146)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_13] (batchId=130)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query12] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query16] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query20] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query21] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query32] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query37] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query40] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query5] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query77] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query80] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query82] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query92] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query94] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query95] (batchId=260)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query98] (batchId=260)
org.apache.hive.jdbc.miniHS2.TestMiniHS2.testConfInSession (batchId=249)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12132/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12132/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12132/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12929098 - PreCommit-HIVE-Build

> Hive should carry out timestamp computations in UTC
> ---------------------------------------------------
>
> Key: HIVE-12192
> URL: https://issues.apache.org/jira/browse/HIVE-12192
> Project: Hive
> Issue Type: Sub-task
> Components: Hive
> Reporter: Ryan Blue
> Assignee: Jesus Camacho Rodriguez
> Priority: Blocker
> Labels: timestamp
> Fix For: 3.1.0
>
> Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, HIVE-12192.03.patch, HIVE-12192.04.patch, HIVE-12192.05.patch, HIVE-12192.06.patch, HIVE-12192.07.patch, HIVE-12192.08.patch, HIVE-12192.09.patch, HIVE-12192.10.patch, HIVE-12192.11.patch, HIVE-12192.12.patch, HIVE-12192.13.patch, HIVE-12192.14.patch, HIVE-12192.15.patch, HIVE-12192.16.patch,
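The goal of HIVE-12192, carrying out timestamp computations in UTC so results do not depend on the session's local zone, can be illustrated with a small Python check (a sketch of the principle, not Hive's Timestamp implementation):

```python
from datetime import datetime, timezone

# Interpreting the wall-clock value in UTC gives a zone-independent epoch
# offset; the same arithmetic done in a local zone would shift the result
# by that zone's UTC offset.
ts = datetime(2015, 10, 1, 12, 0, 0, tzinfo=timezone.utc)
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
seconds = (ts - epoch).total_seconds()
assert seconds == 1443700800.0
```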
[jira] [Commented] (HIVE-19981) Managed tables converted to external tables by the HiveStrictManagedMigration utility should be set to delete data when the table is dropped
[ https://issues.apache.org/jira/browse/HIVE-19981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523029#comment-16523029 ] Jason Dere commented on HIVE-19981: --- RB at https://reviews.apache.org/r/67735/ [~ashutoshc] [~daijy] can you review? > Managed tables converted to external tables by the HiveStrictManagedMigration > utility should be set to delete data when the table is dropped > > > Key: HIVE-19981 > URL: https://issues.apache.org/jira/browse/HIVE-19981 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19981.1.patch > > > Using the HiveStrictManagedMigration utility, tables can be converted to > conform to the Hive strict managed tables mode. > For managed tables that are converted to external tables by the utility, > these tables should keep the "drop data on delete" semantics they had when > they were managed tables. > One way to do this is to introduce a table property "external.table.purge", > which if true (and if the table is an external table), will let Hive know to > delete the table data when the table is dropped. This property will be set by > the HiveStrictManagedMigration utility when managed tables are converted to > external tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
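As a hedged illustration of the proposed property (the exact syntax the HiveStrictManagedMigration utility emits is not shown in the issue; the table name here is hypothetical):

```sql
-- Mark a converted external table so that DROP TABLE also purges its data,
-- preserving the managed-table "drop data on delete" semantics described above.
ALTER TABLE converted_tbl SET TBLPROPERTIES ('external.table.purge'='true');

-- With the property set to true, dropping the external table is expected to
-- remove the underlying data as well, per the behavior proposed in this issue.
DROP TABLE converted_tbl;
```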
[jira] [Commented] (HIVE-18434) Type is not determined correctly for comparison between decimal column and string constant
[ https://issues.apache.org/jira/browse/HIVE-18434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523027#comment-16523027 ] Gunther Hagleitner commented on HIVE-18434: --- I discovered this issue and stumbled upon this patch. The patch actually looks good to me. Hive already narrows the type in comparisons for other numerical types. Not doing decimal seems more of an oversight or inconsistency than the other way around. I'm +1. > Type is not determined correctly for comparison between decimal column and > string constant > -- > > Key: HIVE-18434 > URL: https://issues.apache.org/jira/browse/HIVE-18434 > Project: Hive > Issue Type: Bug > Components: Types >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan >Priority: Major > Attachments: HIVE-18434.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
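A hedged sketch of the comparison in question (table and column names are hypothetical, not from the issue): the patch makes Hive narrow the type of a decimal-vs-string-constant comparison the same way it already does for other numeric types, rather than widening both sides.

```sql
-- Hypothetical table with a decimal column.
CREATE TABLE prices (item STRING, amount DECIMAL(10,2));

-- With the patch, the string constant is treated as a decimal for the
-- comparison, consistent with how Hive handles other numeric comparisons.
SELECT item FROM prices WHERE amount = '3.50';
```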
[jira] [Updated] (HIVE-19981) Managed tables converted to external tables by the HiveStrictManagedMigration utility should be set to delete data when the table is dropped
[ https://issues.apache.org/jira/browse/HIVE-19981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19981: -- Status: Patch Available (was: Open) > Managed tables converted to external tables by the HiveStrictManagedMigration > utility should be set to delete data when the table is dropped > > > Key: HIVE-19981 > URL: https://issues.apache.org/jira/browse/HIVE-19981 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19981.1.patch > > > Using the HiveStrictManagedMigration utility, tables can be converted to > conform to the Hive strict managed tables mode. > For managed tables that are converted to external tables by the utility, > these tables should keep the "drop data on delete" semantics they had when > they were managed tables. > One way to do this is to introduce a table property "external.table.purge", > which if true (and if the table is an external table), will let Hive know to > delete the table data when the table is dropped. This property will be set by > the HiveStrictManagedMigration utility when managed tables are converted to > external tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19981) Managed tables converted to external tables by the HiveStrictManagedMigration utility should be set to delete data when the table is dropped
[ https://issues.apache.org/jira/browse/HIVE-19981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19981: -- Attachment: HIVE-19981.1.patch > Managed tables converted to external tables by the HiveStrictManagedMigration > utility should be set to delete data when the table is dropped > > > Key: HIVE-19981 > URL: https://issues.apache.org/jira/browse/HIVE-19981 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19981.1.patch > > > Using the HiveStrictManagedMigration utility, tables can be converted to > conform to the Hive strict managed tables mode. > For managed tables that are converted to external tables by the utility, > these tables should keep the "drop data on delete" semantics they had when > they were managed tables. > One way to do this is to introduce a table property "external.table.purge", > which if true (and if the table is an external table), will let Hive know to > delete the table data when the table is dropped. This property will be set by > the HiveStrictManagedMigration utility when managed tables are converted to > external tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19990: --- Status: Patch Available (was: Open) > Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19990.1.patch > > > *Reproducer* > {code:sql} > > create table date_dim_d1( > d_week_seqint, > d_datestring); > > SELECT >d1.d_week_seq > FROM >date_dim_d1 d1 >JOIN date_dim_d1 d3 > WHERE >Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) > at > 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:832) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:770) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:694) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
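As a hedged aside, not taken from the issue itself: if the failure is specific to parsing the INTERVAL literal in the join predicate, rewriting the reproducer with the built-in date_add function may sidestep it. Untested sketch:

```sql
SELECT d1.d_week_seq
FROM date_dim_d1 d1
JOIN date_dim_d1 d3
WHERE Cast(d3.d_date AS date) > DATE_ADD(Cast(d1.d_date AS date), 5);
```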
[jira] [Updated] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19990: --- Attachment: HIVE-19989.1.patch > Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19990.1.patch > > > *Reproducer* > {code:sql} > > create table date_dim_d1( > d_week_seqint, > d_datestring); > > SELECT >d1.d_week_seq > FROM >date_dim_d1 d1 >JOIN date_dim_d1 d3 > WHERE >Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) > at > 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:832) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:770) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:694) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19990: --- Attachment: (was: HIVE-19989.1.patch) > Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19990.1.patch > > > *Reproducer* > {code:sql} > > create table date_dim_d1( > d_week_seqint, > d_datestring); > > SELECT >d1.d_week_seq > FROM >date_dim_d1 d1 >JOIN date_dim_d1 d3 > WHERE >Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) > at > 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:832) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:770) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:694) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19990: --- Attachment: HIVE-19990.1.patch > Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19990.1.patch > > > *Reproducer* > {code:sql} > > create table date_dim_d1( > d_week_seqint, > d_datestring); > > SELECT >d1.d_week_seq > FROM >date_dim_d1 d1 >JOIN date_dim_d1 d3 > WHERE >Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) > at > 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:832) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:770) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:694) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-19867) Test and verify Concurrent INSERTS
[ https://issues.apache.org/jira/browse/HIVE-19867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522959#comment-16522959 ] Steve Yeom edited comment on HIVE-19867 at 6/26/18 12:45 AM: - Eugene and I talked. What I missed from the above is that, when we have two concurrent INSERTs, only one can have the other's write id in its writeIdList. But a possible solution is, if atomicity is guaranteed, to check whether either of the two conditions is true: 1. the old stats' writeIdList in TBLS/PARTITIONS has the new updater's writeId; 2. the new updater's writeIdList has the old stats' writeId (to be saved in TBLS/PARTITIONS). If so, we can say we have concurrent INSERTs. But we have to make sure these two cases happen only for concurrent INSERTs, not in other cases, to prevent a miscalculation. was (Author: steveyeom2017): Eugene and I talked. What I missed from the above is that, when we have two concurrent INSERT only one can have the other's write id in its writeIdList. But a possible solution is, if atomicity is guaranteed, to check either of the two condition is true 1. old stats' writeIdList in TBLS/PARTITIONS has the new updater's writeId 2. new updater's writeIdList has the old stats' writeId (to be saved in TBLS/PARTITIONS). If then, we can say we have a concurrent INSERTs. > Test and verify Concurrent INSERTS > > > Key: HIVE-19867 > URL: https://issues.apache.org/jira/browse/HIVE-19867 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 4.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
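The mutual write-ID membership check proposed in the comment above can be sketched as follows. All names are hypothetical illustrations, not Hive's actual implementation, and the comment itself cautions that the check is only safe if these conditions cannot arise for non-concurrent writers:

```python
def is_concurrent_insert(old_stats_write_id, old_stats_write_id_list,
                         new_write_id, new_write_id_list):
    # Condition 1: the old stats' writeIdList (persisted in TBLS/PARTITIONS)
    # already contains the new updater's writeId.
    cond1 = new_write_id in old_stats_write_id_list
    # Condition 2: the new updater's writeIdList contains the old stats'
    # writeId (to be saved in TBLS/PARTITIONS).
    cond2 = old_stats_write_id in new_write_id_list
    # With two truly concurrent INSERTs, only one writer can see the other's
    # writeId, so either condition being true flags a concurrent update.
    return cond1 or cond2
```

For example, if the old stats writer (writeId 7) persisted a writeIdList containing 8, while the new writer (writeId 8) took its snapshot before 7 committed, the check flags a concurrent update.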
[jira] [Comment Edited] (HIVE-19867) Test and verify Concurrent INSERTS
[ https://issues.apache.org/jira/browse/HIVE-19867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522959#comment-16522959 ] Steve Yeom edited comment on HIVE-19867 at 6/26/18 12:31 AM: - Eugene and I talked. What I missed from the above is that, when we have two concurrent INSERT only one can have the other's write id in its writeIdList. But a possible solution is, if atomicity is guaranteed, to check either of the two condition is true 1. old stats' writeIdList in TBLS/PARTITIONS has the new updater's writeId 2. new updater's writeIdList has the old stats' writeId (to be saved in TBLS/PARTITIONS). If then, we can say we have a concurrent INSERTs. was (Author: steveyeom2017): A simple idea is that 1. We save writeId of the stats updater into TBLS/PARTITIONS. 2. When we update stats, we check whether the new stats updater's writeId is in the old stats updater's writeIdList and check whether the old stats updater's writeId is in the current stats updater's writeIdList. If both are true it is concurrent update. Thus we turn to false the COLUMN_STATS_ACCURATE of the current table/partition. > Test and verify Concurrent INSERTS > > > Key: HIVE-19867 > URL: https://issues.apache.org/jira/browse/HIVE-19867 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 4.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19990: --- Description: *Reproducer* {code:sql} > create table date_dim_d1( d_week_seqint, d_datestring); > SELECT d1.d_week_seq FROM date_dim_d1 d1 JOIN date_dim_d1 d3 WHERE Cast(d3.d_date AS date) > Cast(d1.d_date AS date) + INTERVAL '5' day ; {code} *Exception* {code} org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' encountered with 0 children at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771) at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:832) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:770) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:694) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:239) at org.apache.hadoop.util.RunJar.main(RunJar.java:153) {code} was: *Reproducer* {code:sql} SELECT d1.d_week_seq FROM date_dim d1 WHERE Cast(d1.d_date AS date) > INTERVAL '5' day ; {code} *Exception* {code} org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' encountered with 0 children at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829)
[jira] [Commented] (HIVE-19867) Test and verify Concurrent INSERTS
[ https://issues.apache.org/jira/browse/HIVE-19867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522994#comment-16522994 ] Sergey Shelukhin commented on HIVE-19867: - We were discussing the partition case with [~ekoifman]. Tangentially based on that, I don't think we need this multi insert detection with current code. We already have valid write ID list "isEquivalent" check, so after multiple inserts in parallel, it doesn't matter who writes stats last, it will simply become not isEquivalent, so no extra checks are needed. Can you describe a scenario where reader gets invalid stats with concurrent writers (i.e. where isEquivalent will return true but stats are still invalid?). From the above I cannot see it happening. However Eugene was suggesting that we actually redo the whole stats correctness to rely mostly on write path, in that case this approach (or rather similar more comprehensive one that handles couple more special cases) will help. Actually we may not even need to store write ID list and txn in that case, only the last write ID. But we'd also need to ensure that every query affecting data affects stats, either by updating them, or by removing the flag/write ID (including queries with stats collection disabled, alters, etc.). I'll send an email with details to discuss. > Test and verify Concurrent INSERTS > > > Key: HIVE-19867 > URL: https://issues.apache.org/jira/browse/HIVE-19867 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 4.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19990) Query with interval literal in join condition fails
[ https://issues.apache.org/jira/browse/HIVE-19990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-19990: -- > Query with interval literal in join condition fails > --- > > Key: HIVE-19990 > URL: https://issues.apache.org/jira/browse/HIVE-19990 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > > *Reproducer* > {code:sql} > SELECT >d1.d_week_seq > FROM >date_dim d1 > WHERE >Cast(d1.d_date AS date) > INTERVAL '5' day ; > {code} > *Exception* > {code} > org.apache.hadoop.hive.ql.parse.SemanticException: '5 00:00:00.0' > encountered with 0 children > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2780) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondPopulateAlias(SemanticAnalyzer.java:2775) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:3060) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.parseJoinCondition(SemanticAnalyzer.java:2959) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinTree(SemanticAnalyzer.java:9633) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11380) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11285) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:12071) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:593) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12150) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:288) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:658) > at 
org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1829) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1776) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1771) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:832) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:770) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:694) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18118) Explain Extended should indicate if a file being read is an EC file
[ https://issues.apache.org/jira/browse/HIVE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-18118: -- Attachment: HIVE-18118.7.patch > Explain Extended should indicate if a file being read is an EC file > --- > > Key: HIVE-18118 > URL: https://issues.apache.org/jira/browse/HIVE-18118 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18118.1.patch, HIVE-18118.2.patch, > HIVE-18118.3.patch, HIVE-18118.4.patch, HIVE-18118.5.patch, > HIVE-18118.6.patch, HIVE-18118.7.patch > > > We already print out the files Hive will read in the explain extended > command, we just have to modify it to say whether or not its an EC file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19867) Test and verify Concurrent INSERTS
[ https://issues.apache.org/jira/browse/HIVE-19867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522959#comment-16522959 ] Steve Yeom commented on HIVE-19867: --- A simple idea: 1. Save the writeId of the stats updater into TBLS/PARTITIONS. 2. When updating stats, check whether the new stats updater's writeId is in the old stats updater's writeIdList, and whether the old stats updater's writeId is in the current stats updater's writeIdList. If both are true, it is a concurrent update, so we set COLUMN_STATS_ACCURATE of the current table/partition to false. > Test and verify Concurrent INSERTS > > > Key: HIVE-19867 > URL: https://issues.apache.org/jira/browse/HIVE-19867 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 4.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
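The condition proposed in the comment above can be sketched in a few lines. This is only an illustration of the stated check, not Hive's metastore code: the class and method names are hypothetical, and the writeIdList is modeled as a plain `Set<Long>` rather than Hive's actual ValidWriteIdList type.

```java
import java.util.Set;

// Hypothetical sketch of the proposed concurrent-stats-update detection.
public class ConcurrentStatsCheck {

    // Per the comment above: flag a concurrent update when the new updater's
    // writeId appears in the old updater's writeIdList AND the old updater's
    // writeId appears in the current (new) updater's writeIdList. In that
    // case COLUMN_STATS_ACCURATE for the table/partition would be set false.
    public static boolean isConcurrentUpdate(long oldWriteId, Set<Long> oldWriteIdList,
                                             long newWriteId, Set<Long> newWriteIdList) {
        return oldWriteIdList.contains(newWriteId)
            && newWriteIdList.contains(oldWriteId);
    }

    public static void main(String[] args) {
        // Each updater's writeId is visible in the other's list -> concurrent.
        System.out.println(isConcurrentUpdate(7L, Set.of(8L), 8L, Set.of(7L)));
        // The old updater's list does not contain the new writeId -> not flagged.
        System.out.println(isConcurrentUpdate(7L, Set.of(5L), 8L, Set.of(7L)));
    }
}
```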
[jira] [Updated] (HIVE-19921) Fix perf duration and queue name in HiveProtoLoggingHook
[ https://issues.apache.org/jira/browse/HIVE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-19921: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to branch-3. The branch-3 test failures are seen for other patches as well. > Fix perf duration and queue name in HiveProtoLoggingHook > > > Key: HIVE-19921 > URL: https://issues.apache.org/jira/browse/HIVE-19921 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash >Priority: Major > Attachments: HIVE-19921.01-branch-3.patch, HIVE-19921.01.patch, > HIVE-19921.02-branch-3.patch > > > The perf log should return duration instead of end time. > The queue name should be llap queue for llap queries. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19921) Fix perf duration and queue name in HiveProtoLoggingHook
[ https://issues.apache.org/jira/browse/HIVE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-19921: - Fix Version/s: 3.1.0 > Fix perf duration and queue name in HiveProtoLoggingHook > > > Key: HIVE-19921 > URL: https://issues.apache.org/jira/browse/HIVE-19921 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19921.01-branch-3.patch, HIVE-19921.01.patch, > HIVE-19921.02-branch-3.patch > > > The perf log should return duration instead of end time. > The queue name should be llap queue for llap queries. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522936#comment-16522936 ] Alan Gates commented on HIVE-19989: --- +1 > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19989.1.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19989: --- Status: Patch Available (was: Open) > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19989.1.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522928#comment-16522928 ] Vineet Garg commented on HIVE-19989: [~alangates] Would you mind taking a look at it? > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19989.1.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19989: --- Description: Right now it is hardcoded as 'metastore'. It should instead be fetched from config like it was previously. > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19989.1.patch > > > Right now it is hardcoded as 'metastore'. It should instead be fetched from > config like it was previously. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19989: --- Attachment: HIVE-19989.1.patch > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19989.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-12192: --- Status: Patch Available (was: Reopened) > Hive should carry out timestamp computations in UTC > --- > > Key: HIVE-12192 > URL: https://issues.apache.org/jira/browse/HIVE-12192 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Ryan Blue >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Labels: timestamp > Fix For: 3.1.0 > > Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, > HIVE-12192.03.patch, HIVE-12192.04.patch, HIVE-12192.05.patch, > HIVE-12192.06.patch, HIVE-12192.07.patch, HIVE-12192.08.patch, > HIVE-12192.09.patch, HIVE-12192.10.patch, HIVE-12192.11.patch, > HIVE-12192.12.patch, HIVE-12192.13.patch, HIVE-12192.14.patch, > HIVE-12192.15.patch, HIVE-12192.16.patch, HIVE-12192.17.patch, > HIVE-12192.18.patch, HIVE-12192.19.patch, HIVE-12192.20.patch, > HIVE-12192.21.patch, HIVE-12192.22.patch, HIVE-12192.23.patch, > HIVE-12192.24.patch, HIVE-12192.25.patch, HIVE-12192.26.patch, > HIVE-12192.patch > > > Hive currently uses the "local" time of a java.sql.Timestamp to represent the > SQL data type TIMESTAMP WITHOUT TIME ZONE. The purpose is to be able to use > {{Timestamp#getYear()}} and similar methods to implement SQL functions like > {{year}}. > When the SQL session's time zone is a DST zone, such as America/Los_Angeles > that alternates between PST and PDT, there are times that cannot be > represented because the effective zone skips them. > {code} > hive> select TIMESTAMP '2015-03-08 02:10:00.101'; > 2015-03-08 03:10:00.101 > {code} > Using UTC instead of the SQL session time zone as the underlying zone for a > java.sql.Timestamp avoids this bug, while still returning correct values for > {{getYear}} etc. 
Using UTC as the convenience representation (timestamp > without time zone has no real zone) would make timestamp calculations more > consistent and avoid similar problems in the future. > Notably, this would break the {{unix_timestamp}} UDF that specifies the > result is with respect to ["the default timezone and default > locale"|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]. > That function would need to be updated to use the > {{System.getProperty("user.timezone")}} zone. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
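The DST gap described above can be reproduced outside Hive with java.sql.Timestamp directly. This standalone snippet is not part of the patch; it only demonstrates the pre-fix behavior when the JVM default zone is America/Los_Angeles:

```java
import java.sql.Timestamp;
import java.util.TimeZone;

public class DstGapDemo {
    public static void main(String[] args) {
        // Pin the JVM default zone to a DST zone, as a Hive session in that
        // zone effectively does when parsing timestamp literals.
        TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"));

        // 2015-03-08 02:10 does not exist in this zone: clocks jump from
        // 02:00 PST straight to 03:00 PDT on that date.
        Timestamp ts = Timestamp.valueOf("2015-03-08 02:10:00.101");

        // The non-existent wall-clock time is resolved by shifting it forward
        // an hour, which is the wrong result the issue describes.
        System.out.println(ts);
    }
}
```

Parsing the same literal through a UTC-based representation (e.g. java.time.LocalDateTime) has no such gap, which is the motivation for the patch.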
[jira] [Commented] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522906#comment-16522906 ] Jesus Camacho Rodriguez commented on HIVE-12192: Rebased and uploaded the patch one more time... I cannot reproduce any of those driver timeouts locally and logs are not present in jenkins anymore. Wondering whether splitting those drivers into different groups would help... > Hive should carry out timestamp computations in UTC > --- > > Key: HIVE-12192 > URL: https://issues.apache.org/jira/browse/HIVE-12192 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Ryan Blue >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Labels: timestamp > Fix For: 3.1.0 > > Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, > HIVE-12192.03.patch, HIVE-12192.04.patch, HIVE-12192.05.patch, > HIVE-12192.06.patch, HIVE-12192.07.patch, HIVE-12192.08.patch, > HIVE-12192.09.patch, HIVE-12192.10.patch, HIVE-12192.11.patch, > HIVE-12192.12.patch, HIVE-12192.13.patch, HIVE-12192.14.patch, > HIVE-12192.15.patch, HIVE-12192.16.patch, HIVE-12192.17.patch, > HIVE-12192.18.patch, HIVE-12192.19.patch, HIVE-12192.20.patch, > HIVE-12192.21.patch, HIVE-12192.22.patch, HIVE-12192.23.patch, > HIVE-12192.24.patch, HIVE-12192.25.patch, HIVE-12192.26.patch, > HIVE-12192.patch > > > Hive currently uses the "local" time of a java.sql.Timestamp to represent the > SQL data type TIMESTAMP WITHOUT TIME ZONE. The purpose is to be able to use > {{Timestamp#getYear()}} and similar methods to implement SQL functions like > {{year}}. > When the SQL session's time zone is a DST zone, such as America/Los_Angeles > that alternates between PST and PDT, there are times that cannot be > represented because the effective zone skips them. 
> {code} > hive> select TIMESTAMP '2015-03-08 02:10:00.101'; > 2015-03-08 03:10:00.101 > {code} > Using UTC instead of the SQL session time zone as the underlying zone for a > java.sql.Timestamp avoids this bug, while still returning correct values for > {{getYear}} etc. Using UTC as the convenience representation (timestamp > without time zone has no real zone) would make timestamp calculations more > consistent and avoid similar problems in the future. > Notably, this would break the {{unix_timestamp}} UDF that specifies the > result is with respect to ["the default timezone and default > locale"|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]. > That function would need to be updated to use the > {{System.getProperty("user.timezone")}} zone. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-12192: --- Attachment: HIVE-12192.26.patch > Hive should carry out timestamp computations in UTC > --- > > Key: HIVE-12192 > URL: https://issues.apache.org/jira/browse/HIVE-12192 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Ryan Blue >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Labels: timestamp > Fix For: 3.1.0 > > Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, > HIVE-12192.03.patch, HIVE-12192.04.patch, HIVE-12192.05.patch, > HIVE-12192.06.patch, HIVE-12192.07.patch, HIVE-12192.08.patch, > HIVE-12192.09.patch, HIVE-12192.10.patch, HIVE-12192.11.patch, > HIVE-12192.12.patch, HIVE-12192.13.patch, HIVE-12192.14.patch, > HIVE-12192.15.patch, HIVE-12192.16.patch, HIVE-12192.17.patch, > HIVE-12192.18.patch, HIVE-12192.19.patch, HIVE-12192.20.patch, > HIVE-12192.21.patch, HIVE-12192.22.patch, HIVE-12192.23.patch, > HIVE-12192.24.patch, HIVE-12192.25.patch, HIVE-12192.26.patch, > HIVE-12192.patch > > > Hive currently uses the "local" time of a java.sql.Timestamp to represent the > SQL data type TIMESTAMP WITHOUT TIME ZONE. The purpose is to be able to use > {{Timestamp#getYear()}} and similar methods to implement SQL functions like > {{year}}. > When the SQL session's time zone is a DST zone, such as America/Los_Angeles > that alternates between PST and PDT, there are times that cannot be > represented because the effective zone skips them. > {code} > hive> select TIMESTAMP '2015-03-08 02:10:00.101'; > 2015-03-08 03:10:00.101 > {code} > Using UTC instead of the SQL session time zone as the underlying zone for a > java.sql.Timestamp avoids this bug, while still returning correct values for > {{getYear}} etc. 
Using UTC as the convenience representation (timestamp > without time zone has no real zone) would make timestamp calculations more > consistent and avoid similar problems in the future. > Notably, this would break the {{unix_timestamp}} UDF that specifies the > result is with respect to ["the default timezone and default > locale"|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]. > That function would need to be updated to use the > {{System.getProperty("user.timezone")}} zone. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19975) Checking writeIdList per table may not check the commit level of a partition on a partitioned table.
[ https://issues.apache.org/jira/browse/HIVE-19975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522899#comment-16522899 ] Steve Yeom commented on HIVE-19975: --- Actually I am checking Sergey's new test for partitioned tables in his stats updater patch to find a test case scenario where we have a hole. > Checking writeIdList per table may not check the commit level of a partition > on a partitioned table. > > > Key: HIVE-19975 > URL: https://issues.apache.org/jira/browse/HIVE-19975 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 4.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 4.0.0 > > > writeIdList is a per-table entity, but stats for a partitioned table are per > partition. > I.e., each record in PARTITIONS has independent stats. > So if we check the validity of a partition's stats, we need to check in the > context of > a partition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19967) SMB Join : Need Optraits for PTFOperator ala GBY Op
[ https://issues.apache.org/jira/browse/HIVE-19967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522888#comment-16522888 ] Deepak Jaiswal commented on HIVE-19967: --- The only test failure is in the newly added test. It failed because a filter expression is now added, probably caused by a recent change. Will commit the updated result. > SMB Join : Need Optraits for PTFOperator ala GBY Op > --- > > Key: HIVE-19967 > URL: https://issues.apache.org/jira/browse/HIVE-19967 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19967.1.patch, HIVE-19967.2.patch, > HIVE-19967.3.patch, HIVE-19967.4.patch > > > The SMB join on one or more PTF Ops should reset the optraits keys just like > GBY Op does. > Currently there is no implementation of PTFOp optraits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18729) Druid Time column type
[ https://issues.apache.org/jira/browse/HIVE-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18729: --- Attachment: HIVE-18729.01.branch-3.patch > Druid Time column type > -- > > Key: HIVE-18729 > URL: https://issues.apache.org/jira/browse/HIVE-18729 > Project: Hive > Issue Type: Task > Components: Druid integration >Reporter: slim bouguerra >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Attachments: HIVE-18729.01.branch-3.patch, HIVE-18729.branch-3.patch, > HIVE-18729.patch > > > I have talked offline with [~jcamachorodriguez] about this and we agreed that > the best way to go is to support both cases, where the Druid time column can be > Timestamp or Timestamp with local time zone. > In fact, for Hive-Druid internal tables this makes perfect sense, since we > have Hive metadata about the time column during the CTAS statement and can then > handle both cases as we do for other storage types, e.g. ORC. > For Druid external tables, we can have a default type and allow the user > to override it via table properties. > CC [~ashutoshc] and [~nishantbangarwa]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19481) sample10.q returns wrong results
[ https://issues.apache.org/jira/browse/HIVE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522884#comment-16522884 ] Deepak Jaiswal commented on HIVE-19481: --- Yes, the results are valid. If you look at the logic in Partition.java, it would blow up when the number of files was not equal to the number of partitions; that is now fixed, giving the correct number of rows. Thanks for the review [~sershe] > sample10.q returns wrong results > - > > Key: HIVE-19481 > URL: https://issues.apache.org/jira/browse/HIVE-19481 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19481.1.patch, HIVE-19481.2.patch, > HIVE-19481.3.patch, HIVE-19481.4.patch > > > Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q " after > changing the table to be > insert-only transactional. > The following queries return a couple of rows, whereas no rows are returned > for the non-ACID table. > query: select ds, count(1) from srcpartbucket tablesample (bucket 2 out of 4 > on key) where ds is not null group by ds ORDER BY ds ASC > 2008-04-08 14 > 2008-04-09 14 > .. > query: select ds, count(1) from srcpartbucket tablesample (bucket 1 out of 2 > on key) where ds is not null group by ds ORDER BY ds ASC > 2008-04-08 4 > 2008-04-09 4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19988) Precommit jobs erroring out
[ https://issues.apache.org/jira/browse/HIVE-19988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522881#comment-16522881 ] Vihang Karajgaonkar commented on HIVE-19988: After more investigation, it looks like the error is on the client side. The jenkins job does not work whenever it runs on the H19 node. It works fine when it runs on other nodes. > Precommit jobs erroring out > --- > > Key: HIVE-19988 > URL: https://issues.apache.org/jira/browse/HIVE-19988 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Blocker > > {code} > + mvn clean package -B -DskipTests -Drat.numUnapprovedLicenses=1000 > -Dmaven.repo.local=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/.m2/repository > [INFO] Scanning for projects... > [INFO] > [INFO] > > [INFO] Building hive-ptest 3.0 > [INFO] > > [INFO] Downloading from central: > https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.pom > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 0.925 s > [INFO] Finished at: 2018-06-25T20:46:27Z > [INFO] Final Memory: 24M/1447M > [INFO] > > [ERROR] Plugin org.apache.maven.plugins:maven-clean-plugin:2.5 or one of its > dependencies could not be resolved: Failed to read artifact descriptor for > org.apache.maven.plugins:maven-clean-plugin:jar:2.5: Could not transfer > artifact org.apache.maven.plugins:maven-clean-plugin:pom:2.5 from/to central > (https://repo.maven.apache.org/maven2): Received fatal alert: > protocol_version -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
> [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException > + return 1 > + ret=1 > + unpack_test_results > + '[' -z > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build ']' > + cd > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target > jenkins-execute-build.sh: line 61: cd: > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target: > No such file or directory > + [[ -f test-results.tar.gz ]] > + exit 1 > + rm -f /tmp/tmp.LFKzzyYwIt > Build step 'Execute shell' marked build as failure > Recording test results > ERROR: Step ?Publish JUnit test result report? failed: No test report files > were found. Configuration error? > [description-setter] Description set: HIVE-19980 / master-mr2 > Finished: FAILURE > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
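A common cause of "Received fatal alert: protocol_version" when contacting repo.maven.apache.org is a JVM whose default TLS version is older than the TLS 1.2 minimum the repository enforces. As an assumption about the failing H19 node (not confirmed in this thread), forcing TLS 1.2 via MAVEN_OPTS is one way to test that theory on the affected machine:

```shell
# Assumption: the failing node runs a JDK that defaults to TLS 1.0/1.1.
# Force TLS 1.2 for Maven's HTTPS connections and retry the same build step.
export MAVEN_OPTS="-Dhttps.protocols=TLSv1.2"
mvn clean package -B -DskipTests -Drat.numUnapprovedLicenses=1000
```

If the build succeeds with this setting, the fix is to upgrade the JDK on that node or bake the property into its Maven configuration.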
[jira] [Updated] (HIVE-17852) remove support for list bucketing "stored as directories" in 3.0
[ https://issues.apache.org/jira/browse/HIVE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor updated HIVE-17852: Attachment: HIVE-17852.13.patch > remove support for list bucketing "stored as directories" in 3.0 > > > Key: HIVE-17852 > URL: https://issues.apache.org/jira/browse/HIVE-17852 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Laszlo Bodor >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-17852.01.patch, HIVE-17852.02.patch, > HIVE-17852.03.patch, HIVE-17852.04.patch, HIVE-17852.05.patch, > HIVE-17852.06.patch, HIVE-17852.07.patch, HIVE-17852.08.patch, > HIVE-17852.09.patch, HIVE-17852.10.patch, HIVE-17852.11.patch, > HIVE-17852.12.patch, HIVE-17852.13.patch > > > From the email thread: > 1) LB, when stored as directories, adds a lot of low-level complexity to Hive > tables that has to be accounted for in many places in the code where the > files are written or modified - from FSOP to ACID/replication/export. > 2) While working on some FSOP code I noticed that some of that logic is > broken - e.g. the duplicate file removal from tasks, a pretty fundamental > correctness feature in Hive, may be broken. LB also doesn’t appear to be > compatible with e.g. regular bucketing. > 3) The feature hasn’t seen development activity in a while; it also doesn’t > appear to be used a lot. > Keeping with the theme of cleaning up “legacy” code for 3.0, I was proposing > we remove it. > (2) also suggested that, if needed, it might be easier to implement similar > functionality by adding some flexibility to partitions (which LB directories > look like anyway); that would also keep the logic on a higher level of > abstraction (split generation, partition pruning) as opposed to many > low-level places like FSOP, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19967) SMB Join : Need Optraits for PTFOperator ala GBY Op
[ https://issues.apache.org/jira/browse/HIVE-19967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522878#comment-16522878 ] Hive QA commented on HIVE-19967: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929067/HIVE-19967.4.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14606 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_smb_ptf] (batchId=161) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12120/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12120/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12120/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12929067 - PreCommit-HIVE-Build > SMB Join : Need Optraits for PTFOperator ala GBY Op > --- > > Key: HIVE-19967 > URL: https://issues.apache.org/jira/browse/HIVE-19967 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19967.1.patch, HIVE-19967.2.patch, > HIVE-19967.3.patch, HIVE-19967.4.patch > > > The SMB join on one or more PTF Ops should reset the optraits keys just like > GBY Op does. > Currently there is no implementation of PTFOp optraits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-19989: -- > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19989) Metastore uses wrong application name for HADOOP2 metrics
[ https://issues.apache.org/jira/browse/HIVE-19989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19989: --- Issue Type: Bug (was: Improvement) > Metastore uses wrong application name for HADOOP2 metrics > - > > Key: HIVE-19989 > URL: https://issues.apache.org/jira/browse/HIVE-19989 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19967) SMB Join : Need Optraits for PTFOperator ala GBY Op
[ https://issues.apache.org/jira/browse/HIVE-19967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522845#comment-16522845 ] Hive QA commented on HIVE-19967: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 57s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s{color} | {color:red} ql: The patch generated 1 new + 14 unchanged - 1 fixed = 15 total (was 15) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12120/dev-support/hive-personality.sh | | git revision | master / 1a2e378 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-12120/yetus/diff-checkstyle-ql.txt | | modules | C: itests ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12120/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > SMB Join : Need Optraits for PTFOperator ala GBY Op > --- > > Key: HIVE-19967 > URL: https://issues.apache.org/jira/browse/HIVE-19967 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19967.1.patch, HIVE-19967.2.patch, > HIVE-19967.3.patch, HIVE-19967.4.patch > > > The SMB join on one or more PTF Ops should reset the optraits keys just like > GBY Op does. > Currently there is no implementation of PTFOp optraits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522843#comment-16522843 ] Jesus Camacho Rodriguez commented on HIVE-12192: [~findepi], [~haozhun], yes, it got lost among all these ptest runs, sorry about that. 1. Yes, that table seems to summarize the current status and roadmap. {{After HIVE-12192}} would mean after 3.1.0. 2. We would like to have timestamp with tz too. However, adding a new type requires some work and there is no timeline right now. 3. Date can be considered to follow a similar evolution to Timestamp: the internal representation will be LocalDate after 3.1.0. The change is part of this patch. bq. Note, since other products cannot be made dependent on a single Hive version, it's critical to understand semantic differences introduced by this issue, if any. The expectation is that, from the user's perspective, date and timestamp should be compliant with SQL semantics. In addition, different time zones will no longer produce unexpected results. 
> Hive should carry out timestamp computations in UTC > --- > > Key: HIVE-12192 > URL: https://issues.apache.org/jira/browse/HIVE-12192 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Ryan Blue >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Labels: timestamp > Fix For: 3.1.0 > > Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, > HIVE-12192.03.patch, HIVE-12192.04.patch, HIVE-12192.05.patch, > HIVE-12192.06.patch, HIVE-12192.07.patch, HIVE-12192.08.patch, > HIVE-12192.09.patch, HIVE-12192.10.patch, HIVE-12192.11.patch, > HIVE-12192.12.patch, HIVE-12192.13.patch, HIVE-12192.14.patch, > HIVE-12192.15.patch, HIVE-12192.16.patch, HIVE-12192.17.patch, > HIVE-12192.18.patch, HIVE-12192.19.patch, HIVE-12192.20.patch, > HIVE-12192.21.patch, HIVE-12192.22.patch, HIVE-12192.23.patch, > HIVE-12192.24.patch, HIVE-12192.25.patch, HIVE-12192.patch > > > Hive currently uses the "local" time of a java.sql.Timestamp to represent the > SQL data type TIMESTAMP WITHOUT TIME ZONE. The purpose is to be able to use > {{Timestamp#getYear()}} and similar methods to implement SQL functions like > {{year}}. > When the SQL session's time zone is a DST zone, such as America/Los_Angeles > that alternates between PST and PDT, there are times that cannot be > represented because the effective zone skips them. > {code} > hive> select TIMESTAMP '2015-03-08 02:10:00.101'; > 2015-03-08 03:10:00.101 > {code} > Using UTC instead of the SQL session time zone as the underlying zone for a > java.sql.Timestamp avoids this bug, while still returning correct values for > {{getYear}} etc. Using UTC as the convenience representation (timestamp > without time zone has no real zone) would make timestamp calculations more > consistent and avoid similar problems in the future. 
> Notably, this would break the {{unix_timestamp}} UDF that specifies the > result is with respect to ["the default timezone and default > locale"|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]. > That function would need to be updated to use the > {{System.getProperty("user.timezone")}} zone. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
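The DST gap described in the issue can be reproduced directly with `java.time`. This is an illustrative sketch, not Hive code: it shows why a session-zone-based representation shifts the non-existent wall-clock value while a UTC-based one preserves it.

```java
import java.time.LocalDateTime;
import java.time.ZoneId;

public class DstGapDemo {
    public static void main(String[] args) {
        // 2015-03-08 02:10 does not exist in America/Los_Angeles: clocks
        // jump from 02:00 directly to 03:00, so atZone() moves the value
        // forward by the length of the gap.
        LocalDateTime wallClock = LocalDateTime.of(2015, 3, 8, 2, 10, 0, 101_000_000);
        System.out.println(wallClock.atZone(ZoneId.of("America/Los_Angeles")).toLocalDateTime());
        // 2015-03-08T03:10:00.101  (the reported bug)

        // UTC has no DST transitions, so every wall-clock value round-trips.
        System.out.println(wallClock.atZone(ZoneId.of("UTC")).toLocalDateTime());
        // 2015-03-08T02:10:00.101
    }
}
```

This matches the `hive>` example in the description: the displayed timestamp drifts by exactly one hour when the backing zone skips the requested time.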
[jira] [Commented] (HIVE-19481) sample10.q returns wrong results
[ https://issues.apache.org/jira/browse/HIVE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522837#comment-16522837 ] Sergey Shelukhin commented on HIVE-19481: - [~djaiswal] are the test changes valid? Looks like sample results have more rows now. If the new results are valid +1 > sample10.q returns wrong results > - > > Key: HIVE-19481 > URL: https://issues.apache.org/jira/browse/HIVE-19481 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19481.1.patch, HIVE-19481.2.patch, > HIVE-19481.3.patch, HIVE-19481.4.patch > > > Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q" after > changing the table to be > insert-only transactional. > The following queries return a couple of rows, whereas no rows are returned > for the non-ACID table. > query: select ds, count(1) from srcpartbucket tablesample (bucket 2 out of 4 > on key) where ds is not null group by ds ORDER BY ds ASC > 2008-04-08 14 > 2008-04-09 14 > .. > query: select ds, count(1) from srcpartbucket tablesample (bucket 1 out of 2 > on key) where ds is not null group by ds ORDER BY ds ASC > 2008-04-08 4 > 2008-04-09 4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19933) ALTER TABLE DROP PARTITION - Partition Not Found
[ https://issues.apache.org/jira/browse/HIVE-19933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan reassigned HIVE-19933: Assignee: Alice Fan (was: Naveen Gangam) > ALTER TABLE DROP PARTITION - Partition Not Found > > > Key: HIVE-19933 > URL: https://issues.apache.org/jira/browse/HIVE-19933 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 1.2.2 >Reporter: BELUGA BEHR >Assignee: Alice Fan >Priority: Major > > {code:sql} > ALTER TABLE web_logsz DROP PARTITION (`date`='xyz') > -- SemanticException [Error 10001]: Table not found web_logsz > ALTER TABLE web_logs DROP PARTITION (`date`='xyz') > -- Success. > {code} > There is no 'xyz' partition for the 'date' column. To make this behavior > consistent, the query should fail if the user tries to drop a partition that > does not exist. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19988) Precommit jobs erroring out
[ https://issues.apache.org/jira/browse/HIVE-19988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-19988: --- Priority: Blocker (was: Major) > Precommit jobs erroring out > --- > > Key: HIVE-19988 > URL: https://issues.apache.org/jira/browse/HIVE-19988 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Blocker > > {code} > + mvn clean package -B -DskipTests -Drat.numUnapprovedLicenses=1000 > -Dmaven.repo.local=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/.m2/repository > [INFO] Scanning for projects... > [INFO] > [INFO] > > [INFO] Building hive-ptest 3.0 > [INFO] > > [INFO] Downloading from central: > https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.pom > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 0.925 s > [INFO] Finished at: 2018-06-25T20:46:27Z > [INFO] Final Memory: 24M/1447M > [INFO] > > [ERROR] Plugin org.apache.maven.plugins:maven-clean-plugin:2.5 or one of its > dependencies could not be resolved: Failed to read artifact descriptor for > org.apache.maven.plugins:maven-clean-plugin:jar:2.5: Could not transfer > artifact org.apache.maven.plugins:maven-clean-plugin:pom:2.5 from/to central > (https://repo.maven.apache.org/maven2): Received fatal alert: > protocol_version -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
> [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException > + return 1 > + ret=1 > + unpack_test_results > + '[' -z > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build ']' > + cd > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target > jenkins-execute-build.sh: line 61: cd: > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target: > No such file or directory > + [[ -f test-results.tar.gz ]] > + exit 1 > + rm -f /tmp/tmp.LFKzzyYwIt > Build step 'Execute shell' marked build as failure > Recording test results > ERROR: Step 'Publish JUnit test result report' failed: No test report files > were found. Configuration error? > [description-setter] Description set: HIVE-19980 / master-mr2 > Finished: FAILURE > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
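The `Received fatal alert: protocol_version` in the log above is the handshake error an older JVM produces when it offers only TLS 1.0/1.1 to a server that requires TLS 1.2 (assuming that is the cause here; the log itself does not name the negotiated TLS version). A small, self-contained check of what the running JVM enables by default:

```java
import javax.net.ssl.SSLContext;

public class TlsProtocolCheck {
    public static void main(String[] args) throws Exception {
        // Lists the TLS versions this JVM enables by default. If TLSv1.2 is
        // absent (e.g. old Java 7 defaults), HTTPS servers that require it
        // reject the handshake with "Received fatal alert: protocol_version".
        for (String protocol : SSLContext.getDefault().getDefaultSSLParameters().getProtocols()) {
            System.out.println(protocol);
        }
    }
}
```

On affected JVMs the usual workarounds are running Maven with `-Dhttps.protocols=TLSv1.2` (where the runtime supports it) or upgrading the JDK.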
[jira] [Assigned] (HIVE-19988) Precommit jobs erroring out
[ https://issues.apache.org/jira/browse/HIVE-19988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-19988: -- > Precommit jobs erroring out > --- > > Key: HIVE-19988 > URL: https://issues.apache.org/jira/browse/HIVE-19988 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > > {code} > + mvn clean package -B -DskipTests -Drat.numUnapprovedLicenses=1000 > -Dmaven.repo.local=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/.m2/repository > [INFO] Scanning for projects... > [INFO] > [INFO] > > [INFO] Building hive-ptest 3.0 > [INFO] > > [INFO] Downloading from central: > https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.pom > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 0.925 s > [INFO] Finished at: 2018-06-25T20:46:27Z > [INFO] Final Memory: 24M/1447M > [INFO] > > [ERROR] Plugin org.apache.maven.plugins:maven-clean-plugin:2.5 or one of its > dependencies could not be resolved: Failed to read artifact descriptor for > org.apache.maven.plugins:maven-clean-plugin:jar:2.5: Could not transfer > artifact org.apache.maven.plugins:maven-clean-plugin:pom:2.5 from/to central > (https://repo.maven.apache.org/maven2): Received fatal alert: > protocol_version -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. 
> [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException > + return 1 > + ret=1 > + unpack_test_results > + '[' -z > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build ']' > + cd > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target > jenkins-execute-build.sh: line 61: cd: > /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target: > No such file or directory > + [[ -f test-results.tar.gz ]] > + exit 1 > + rm -f /tmp/tmp.LFKzzyYwIt > Build step 'Execute shell' marked build as failure > Recording test results > ERROR: Step 'Publish JUnit test result report' failed: No test report files > were found. Configuration error? > [description-setter] Description set: HIVE-19980 / master-mr2 > Finished: FAILURE > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19986) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by MR
[ https://issues.apache.org/jira/browse/HIVE-19986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-19986: -- Issue Type: Sub-task (was: Task) Parent: HIVE-18116 > Add logging of runtime statistics indicating when Hdfs Erasure Coding is used > by MR > --- > > Key: HIVE-19986 > URL: https://issues.apache.org/jira/browse/HIVE-19986 > Project: Hive > Issue Type: Sub-task >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19987) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by Spark
[ https://issues.apache.org/jira/browse/HIVE-19987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-19987: -- Issue Type: Sub-task (was: Task) Parent: HIVE-18116 > Add logging of runtime statistics indicating when Hdfs Erasure Coding is used > by Spark > -- > > Key: HIVE-19987 > URL: https://issues.apache.org/jira/browse/HIVE-19987 > Project: Hive > Issue Type: Sub-task >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19986) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by MR
[ https://issues.apache.org/jira/browse/HIVE-19986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman reassigned HIVE-19986: - > Add logging of runtime statistics indicating when Hdfs Erasure Coding is used > by MR > --- > > Key: HIVE-19986 > URL: https://issues.apache.org/jira/browse/HIVE-19986 > Project: Hive > Issue Type: Task >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19987) Add logging of runtime statistics indicating when Hdfs Erasure Coding is used by Spark
[ https://issues.apache.org/jira/browse/HIVE-19987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman reassigned HIVE-19987: - > Add logging of runtime statistics indicating when Hdfs Erasure Coding is used > by Spark > -- > > Key: HIVE-19987 > URL: https://issues.apache.org/jira/browse/HIVE-19987 > Project: Hive > Issue Type: Task >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522823#comment-16522823 ] Prasanth Jayachandran commented on HIVE-19980: -- not sure why precommit failed the .1 patch. Another try.. > GenericUDTFGetSplits fails when order by query returns 0 rows > - > > Key: HIVE-19980 > URL: https://issues.apache.org/jira/browse/HIVE-19980 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Kshitij Badani >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19980.1.patch, HIVE-19980.2.patch > > > When an ORDER BY query returns 0 rows, there will not be any files in the temporary > table location for GenericUDTFGetSplits, > which results in the following exception: > {code:java} > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:217) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:420) > ... 52 more{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
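The `ArrayIndexOutOfBoundsException: 0` above is the classic empty-input case: with zero result rows the temporary table directory contains no files, and code that unconditionally reads element 0 fails. A hypothetical, self-contained sketch of the guard (names invented; the actual fix lives in HiveSplitGenerator/GenericUDTFGetSplits):

```java
import java.util.Collections;
import java.util.List;

public class EmptySplitGuard {
    // Hypothetical guard: an ORDER BY query that returns 0 rows leaves no
    // files in the temporary table location, so files[0] must never be
    // dereferenced before checking the length.
    static List<String> generateSplits(String[] files) {
        if (files == null || files.length == 0) {
            return Collections.emptyList(); // zero rows -> zero splits, no exception
        }
        return Collections.singletonList("split-for-" + files[0]);
    }

    public static void main(String[] args) {
        System.out.println(generateSplits(new String[0]).size());           // 0
        System.out.println(generateSplits(new String[] {"000000_0"}).size()); // 1
    }
}
```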
[jira] [Updated] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19980: - Attachment: HIVE-19980.2.patch > GenericUDTFGetSplits fails when order by query returns 0 rows > - > > Key: HIVE-19980 > URL: https://issues.apache.org/jira/browse/HIVE-19980 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Kshitij Badani >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19980.1.patch, HIVE-19980.2.patch > > > When an ORDER BY query returns 0 rows, there will not be any files in the temporary > table location for GenericUDTFGetSplits, > which results in the following exception: > {code:java} > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:217) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:420) > ... 52 more{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522801#comment-16522801 ] Piotr Findeisen commented on HIVE-12192: hi [~jcamachorodriguez], I guess [~haozhun]'s comment got lost among the comments by [~hiveqa], so let me bump the question -- https://issues.apache.org/jira/browse/HIVE-12192?focusedCommentId=16520636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16520636 Does [~haozhun]'s table represent the semantics of TIMESTAMP, TIMESTAMP WITH LOCAL TIME ZONE? Note, since other products cannot be made dependent on a single Hive version, it's critical to understand semantic differences introduced by this issue, if any. Were there any semantic differences in the DATE type? Looking forward to clarification. > Hive should carry out timestamp computations in UTC > --- > > Key: HIVE-12192 > URL: https://issues.apache.org/jira/browse/HIVE-12192 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Ryan Blue >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Labels: timestamp > Fix For: 3.1.0 > > Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, > HIVE-12192.03.patch, HIVE-12192.04.patch, HIVE-12192.05.patch, > HIVE-12192.06.patch, HIVE-12192.07.patch, HIVE-12192.08.patch, > HIVE-12192.09.patch, HIVE-12192.10.patch, HIVE-12192.11.patch, > HIVE-12192.12.patch, HIVE-12192.13.patch, HIVE-12192.14.patch, > HIVE-12192.15.patch, HIVE-12192.16.patch, HIVE-12192.17.patch, > HIVE-12192.18.patch, HIVE-12192.19.patch, HIVE-12192.20.patch, > HIVE-12192.21.patch, HIVE-12192.22.patch, HIVE-12192.23.patch, > HIVE-12192.24.patch, HIVE-12192.25.patch, HIVE-12192.patch > > > Hive currently uses the "local" time of a java.sql.Timestamp to represent the > SQL data type TIMESTAMP WITHOUT TIME ZONE. The purpose is to be able to use > {{Timestamp#getYear()}} and similar methods to implement SQL functions like > {{year}}. 
> When the SQL session's time zone is a DST zone, such as America/Los_Angeles > that alternates between PST and PDT, there are times that cannot be > represented because the effective zone skips them. > {code} > hive> select TIMESTAMP '2015-03-08 02:10:00.101'; > 2015-03-08 03:10:00.101 > {code} > Using UTC instead of the SQL session time zone as the underlying zone for a > java.sql.Timestamp avoids this bug, while still returning correct values for > {{getYear}} etc. Using UTC as the convenience representation (timestamp > without time zone has no real zone) would make timestamp calculations more > consistent and avoid similar problems in the future. > Notably, this would break the {{unix_timestamp}} UDF that specifies the > result is with respect to ["the default timezone and default > locale"|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]. > That function would need to be updated to use the > {{System.getProperty("user.timezone")}} zone. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19964) Apply resource plan fails if trigger expression has quotes
[ https://issues.apache.org/jira/browse/HIVE-19964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19964: - Resolution: Fixed Fix Version/s: 4.0.0 3.1.0 Status: Resolved (was: Patch Available) Committed to branch-3 and master > Apply resource plan fails if trigger expression has quotes > -- > > Key: HIVE-19964 > URL: https://issues.apache.org/jira/browse/HIVE-19964 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19964.1.patch, HIVE-19964.2.patch, > HIVE-19964.3.patch > > > {code:java} > 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.big_hdfs_read WHEN > HDFS_BYTES_READ > '300kb' DO KILL; > INFO : Compiling > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890): > CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > '300kb' DO KILL > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890); > Time taken: 0.015 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890): > CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > '300kb' DO KILL > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890); > Time taken: 0.025 seconds > INFO : OK > No rows affected (0.054 seconds) > 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.big_hdfs_read ADD TO > UNMANAGED; > INFO : Compiling > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5): > ALTER TRIGGER global.big_hdfs_read ADD TO UNMANAGED > INFO : Semantic Analysis Completed (retrial = false) > INFO : 
Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5); > Time taken: 0.014 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5): > ALTER TRIGGER global.big_hdfs_read ADD TO UNMANAGED > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5); > Time taken: 0.029 seconds > INFO : OK > No rows affected (0.054 seconds) > 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global ENABLE; > INFO : Compiling > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e): > ALTER RESOURCE PLAN global ENABLE > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e); > Time taken: 0.012 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e): > ALTER RESOURCE PLAN global ENABLE > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e); > Time taken: 0.021 seconds > INFO : OK > No rows affected (0.045 seconds) > 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global ACTIVATE; > INFO : Compiling > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b): > ALTER RESOURCE PLAN global ACTIVATE > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b); > Time taken: 0.017 seconds > INFO : Executing > 
command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b): > ALTER RESOURCE PLAN global ACTIVATE > INFO : Starting task [Stage-0:DDL] in serial mode > ERROR : FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Invalid expression: HDFS_BYTES_READ > > 300kb > INFO : Completed executing > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b); > Time taken: 0.037 seconds > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Invalid expression: > HDFS_BYTES_READ > 300kb (state=08S01,code=1){code} -- This message was sent by Atlassian JIRA
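The `Invalid expression: HDFS_BYTES_READ > 300kb` failure above shows the stored trigger expression being re-validated without the quotes it was created with: activation re-parses the expression, and the unquoted size literal no longer parses. A toy illustration of that round-trip problem (the helper and its regex are invented for this sketch, not Hive's actual trigger grammar):

```java
public class TriggerExpressionQuoting {
    // Hypothetical validator illustrating the failure mode described above:
    // accepts COUNTER > '<number><unit>' and rejects the unquoted form that
    // results when the literal's quotes are dropped during storage.
    static boolean isParseable(String expr) {
        return expr.matches("\\w+ > '\\d+\\w*'");
    }

    public static void main(String[] args) {
        System.out.println(isParseable("HDFS_BYTES_READ > '300kb'")); // true
        System.out.println(isParseable("HDFS_BYTES_READ > 300kb"));   // false: quotes lost
    }
}
```

Preserving the quotes on serialization (or quoting the literal again before re-validation) keeps the create/activate round trip stable.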
[jira] [Assigned] (HIVE-19964) Apply resource plan fails if trigger expression has quotes
[ https://issues.apache.org/jira/browse/HIVE-19964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-19964: Assignee: Sergey Shelukhin (was: Prasanth Jayachandran) > Apply resource plan fails if trigger expression has quotes > -- > > Key: HIVE-19964 > URL: https://issues.apache.org/jira/browse/HIVE-19964 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19964.1.patch, HIVE-19964.2.patch, > HIVE-19964.3.patch > > > {code:java} > 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.big_hdfs_read WHEN > HDFS_BYTES_READ > '300kb' DO KILL; > INFO : Compiling > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890): > CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > '300kb' DO KILL > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890); > Time taken: 0.015 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890): > CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > '300kb' DO KILL > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890); > Time taken: 0.025 seconds > INFO : OK > No rows affected (0.054 seconds) > 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.big_hdfs_read ADD TO > UNMANAGED; > INFO : Compiling > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5): > ALTER TRIGGER global.big_hdfs_read ADD TO UNMANAGED > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling 
> command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5); > Time taken: 0.014 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5): > ALTER TRIGGER global.big_hdfs_read ADD TO UNMANAGED > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5); > Time taken: 0.029 seconds > INFO : OK > No rows affected (0.054 seconds) > 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global ENABLE; > INFO : Compiling > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e): > ALTER RESOURCE PLAN global ENABLE > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e); > Time taken: 0.012 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e): > ALTER RESOURCE PLAN global ENABLE > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e); > Time taken: 0.021 seconds > INFO : OK > No rows affected (0.045 seconds) > 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global ACTIVATE; > INFO : Compiling > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b): > ALTER RESOURCE PLAN global ACTIVATE > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b); > Time taken: 0.017 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b): > ALTER RESOURCE PLAN global 
ACTIVATE > INFO : Starting task [Stage-0:DDL] in serial mode > ERROR : FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Invalid expression: HDFS_BYTES_READ > > 300kb > INFO : Completed executing > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b); > Time taken: 0.037 seconds > Error: Error while processing statement: FAILED: Execution Error, return code > 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Invalid expression: > HDFS_BYTES_READ > 300kb (state=08S01,code=1){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19964) Apply resource plan fails if trigger expression has quotes
[ https://issues.apache.org/jira/browse/HIVE-19964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522786#comment-16522786 ] Hive QA commented on HIVE-19964: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929058/HIVE-19964.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14605 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12113/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12113/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12113/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. 
ATTACHMENT ID: 12929058 - PreCommit-HIVE-Build > Apply resource plan fails if trigger expression has quotes > -- > > Key: HIVE-19964 > URL: https://issues.apache.org/jira/browse/HIVE-19964 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19964.1.patch, HIVE-19964.2.patch, > HIVE-19964.3.patch > > > {code:java} > 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.big_hdfs_read WHEN > HDFS_BYTES_READ > '300kb' DO KILL; > INFO : Compiling > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890): > CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > '300kb' DO KILL > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890); > Time taken: 0.015 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890): > CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > '300kb' DO KILL > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890); > Time taken: 0.025 seconds > INFO : OK > No rows affected (0.054 seconds) > 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.big_hdfs_read ADD TO > UNMANAGED; > INFO : Compiling > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5): > ALTER TRIGGER global.big_hdfs_read ADD TO UNMANAGED > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5); > Time taken: 0.014 seconds > INFO : Executing > 
command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5): > ALTER TRIGGER global.big_hdfs_read ADD TO UNMANAGED > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131031_dd489324-db23-412f-9409-32ba697a10e5); > Time taken: 0.029 seconds > INFO : OK > No rows affected (0.054 seconds) > 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global ENABLE; > INFO : Compiling > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e): > ALTER RESOURCE PLAN global ENABLE > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e); > Time taken: 0.012 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e): > ALTER RESOURCE PLAN global ENABLE > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131036_26a5f4f3-91e3-4bec-ab42-800adb90104e); > Time taken: 0.021 seconds > INFO : OK > No rows affected (0.045 seconds) > 0: jdbc:hive2://localhost:1> ALTER RESOURCE PLAN global ACTIVATE; > INFO : Compiling > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b): > ALTER RESOURCE PLAN global ACTIVATE > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131037_551b2af0-321b-4638-8ac0-76771a159f4b); > Time taken: 0.017 seconds > INFO : Executing >
[jira] [Updated] (HIVE-19581) view do not support unicode characters well
[ https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-19581: -- Attachment: HIVE-19581.3.patch > view do not support unicode characters well > --- > > Key: HIVE-19581 > URL: https://issues.apache.org/jira/browse/HIVE-19581 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0 >Reporter: kai >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, > HIVE-19581.3.patch, explain.png, metastore.png > > > create table t_test (name ,string) ; > insert into table t_test VALUES ('李四'); > create view t_view_test as select * from t_test where name='李四'; > when select * from t_view_test no records return -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19985) ACID: Skip decoding the ROW__ID sections for read-only queries
[ https://issues.apache.org/jira/browse/HIVE-19985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V reassigned HIVE-19985: -- Assignee: Gopal V > ACID: Skip decoding the ROW__ID sections for read-only queries > --- > > Key: HIVE-19985 > URL: https://issues.apache.org/jira/browse/HIVE-19985 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Gopal V >Priority: Major > > For a base_n file there are no aborted transactions within the file and if > there are no pending delete deltas, the entire ACID ROW__ID can be skipped > for all read-only queries (i.e SELECT), though it still needs to be projected > out for MERGE, UPDATE and DELETE queries. > This patch tries to entirely ignore the ACID ROW__ID fields for all tables > where there are no possible deletes or aborted transactions for an ACID split. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
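[Editor's note] The condition described in HIVE-19985 — skip decoding ROW__ID only for a base file with no aborted transactions and no pending delete deltas, and only for read-only statements — can be sketched as follows. This is a hypothetical illustration of the decision logic, not Hive's actual reader API; all names here are invented.

```python
from dataclasses import dataclass

@dataclass
class AcidSplit:
    is_base: bool                # reading a base_n file (no aborted txns inside it)
    pending_delete_deltas: int   # delete deltas still to be applied to this split
    has_aborted_txns: bool       # aborted transactions possible for this split

def can_skip_row_id(split: AcidSplit, operation: str) -> bool:
    """ROW__ID must be materialized when the statement projects it (MERGE,
    UPDATE, DELETE) or when deletes/aborts could affect row visibility."""
    if operation in ("MERGE", "UPDATE", "DELETE"):
        return False  # these statements need ROW__ID projected out
    return (split.is_base
            and split.pending_delete_deltas == 0
            and not split.has_aborted_txns)

# A SELECT over a clean base file can skip the ROW__ID sections entirely:
clean = AcidSplit(is_base=True, pending_delete_deltas=0, has_aborted_txns=False)
print(can_skip_row_id(clean, "SELECT"))  # True
print(can_skip_row_id(clean, "UPDATE"))  # False
```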
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522762#comment-16522762 ] Alan Gates commented on HIVE-17751: --- The compactor code is in ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/ (thus not in the metastore) but is started by HiveMetaStore. See HiveMetaStore.startMetaStoreThreads() > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, > HIVE-17751.06-standalone-metastore.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We should ideally have a thin client library so that cross version > support for external applications is easier. We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522759#comment-16522759 ] Steve Yeom commented on HIVE-19532: --- OK. let me check. > fix tests for master-txnstats branch > > > Key: HIVE-19532 > URL: https://issues.apache.org/jira/browse/HIVE-19532 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, > HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, > HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, > HIVE-19532.07.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19902) Provide Metastore micro-benchmarks
[ https://issues.apache.org/jira/browse/HIVE-19902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-19902: -- Attachment: HIVE-19902.02.patch > Provide Metastore micro-benchmarks > -- > > Key: HIVE-19902 > URL: https://issues.apache.org/jira/browse/HIVE-19902 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 3.1.0, 4.0.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-19902.01.patch, HIVE-19902.02.patch > > > It would be very useful to have metastore benchmarks to be able to track perf > issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19964) Apply resource plan fails if trigger expression has quotes
[ https://issues.apache.org/jira/browse/HIVE-19964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522748#comment-16522748 ] Hive QA commented on HIVE-19964: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 49s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12113/dev-support/hive-personality.sh | | git revision | master / 1f2419d | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12113/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Apply resource plan fails if trigger expression has quotes > -- > > Key: HIVE-19964 > URL: https://issues.apache.org/jira/browse/HIVE-19964 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19964.1.patch, HIVE-19964.2.patch, > HIVE-19964.3.patch > > > {code:java} > 0: jdbc:hive2://localhost:1> CREATE TRIGGER global.big_hdfs_read WHEN > HDFS_BYTES_READ > '300kb' DO KILL; > INFO : Compiling > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890): > CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > '300kb' DO KILL > INFO : Semantic Analysis Completed (retrial = false) > INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) > INFO : Completed compiling > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890); > Time taken: 0.015 seconds > INFO : Executing > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890): > CREATE TRIGGER global.big_hdfs_read WHEN HDFS_BYTES_READ > '300kb' DO KILL > INFO : Starting task [Stage-0:DDL] in serial mode > INFO : Completed executing > command(queryId=pjayachandran_20180621131017_72b1441b-d790-4db7-83ca-479735843890); > Time taken: 0.025 seconds > INFO : OK > No rows affected (0.054 seconds) > 0: jdbc:hive2://localhost:1> ALTER TRIGGER global.big_hdfs_read ADD TO > UNMANAGED; > INFO : Compiling >
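[Editor's note] The trigger in the repro above uses a quoted counter limit (`'300kb'`), and the failure occurs when the stored expression is re-parsed during plan application. A minimal sketch of quote-tolerant limit parsing is below; this is not Hive's implementation, just an illustration of the handling the quoted literal needs.

```python
import re

# Size suffixes an expression like HDFS_BYTES_READ > '300kb' might carry.
_UNITS = {"": 1, "b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3}

def parse_counter_limit(literal):
    """Parse a counter limit, stripping optional surrounding quotes first."""
    s = literal.strip()
    if len(s) >= 2 and s[0] == s[-1] and s[0] in ("'", '"'):
        s = s[1:-1]  # drop the quotes that would otherwise break re-parsing
    m = re.fullmatch(r"(\d+)\s*([a-zA-Z]*)", s.strip())
    if not m or m.group(2).lower() not in _UNITS:
        raise ValueError(f"bad counter limit: {literal!r}")
    return int(m.group(1)) * _UNITS[m.group(2).lower()]

print(parse_counter_limit("'300kb'"))  # 307200, same as the unquoted form
```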
[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch
[ https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522745#comment-16522745 ] Sergey Shelukhin commented on HIVE-19532: - [~steveyeom2017] what are the test fixes compared to the last batch w/139 failures? Many failures on non-CliDriver tests look like they are probably some NPE or other init issue (since all the test cases failed). > fix tests for master-txnstats branch > > > Key: HIVE-19532 > URL: https://issues.apache.org/jira/browse/HIVE-19532 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, > HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, > HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.06.patch, > HIVE-19532.07.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-15976) Support CURRENT_CATALOG and CURRENT_SCHEMA
[ https://issues.apache.org/jira/browse/HIVE-15976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-15976: -- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Patch committed to master. Thanks Laszlo for the contribution. > Support CURRENT_CATALOG and CURRENT_SCHEMA > -- > > Key: HIVE-15976 > URL: https://issues.apache.org/jira/browse/HIVE-15976 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Carter Shanklin >Assignee: Laszlo Bodor >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-15976.01.patch, HIVE-15976.02.patch, > HIVE-15976.03.patch, HIVE-15976.04.patch, HIVE-15976.05.patch, > HIVE-15976.06.patch, HIVE-15976.07.patch, HIVE-15976.08.patch > > > Support these keywords for querying the current catalog and schema. SQL > reference: section 6.4 > *oracle* > CREATE TABLE CURRENT_SCHEMA (col VARCHAR2(1)); -- ok > SELECT CURRENT_SCHEMA FROM DUAL; -- error, ORA-00904: "CURRENT_SCHEMA": > invalid identifier > SELECT CURRENT_SCHEMA() FROM DUAL; -- error, ORA-00904: "CURRENT_SCHEMA": > invalid identifier > *postgres* > CREATE TABLE CURRENT_SCHEMA (col VARCHAR(1)); -- error: syntax error at or > near "CURRENT_SCHEMA" > SELECT CURRENT_SCHEMA; -- ok, "public" > SELECT CURRENT_SCHEMA(); -- ok, "public" > *mysql* > CREATE TABLE CURRENT_SCHEMA (col VARCHAR(1)); -- ok > SELECT CURRENT_SCHEMA; -- error, Unknown column 'CURRENT_SCHEMA' in 'field > list' > SELECT CURRENT_SCHEMA(); -- error, FUNCTION db_9_e28e6f.CURRENT_SCHEMA does > not exist -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19984) Backport HIVE-15976 to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates reassigned HIVE-19984: - > Backport HIVE-15976 to branch-3 > --- > > Key: HIVE-19984 > URL: https://issues.apache.org/jira/browse/HIVE-19984 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 3.1.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19983) Backport HIVE-19769 to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates reassigned HIVE-19983: - > Backport HIVE-19769 to branch-3 > --- > > Key: HIVE-19983 > URL: https://issues.apache.org/jira/browse/HIVE-19983 > Project: Hive > Issue Type: Bug > Components: storage-api >Affects Versions: 3.1.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > > This patch will be needed for other catalog related work to be backported to > branch-3. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19581) view do not support unicode characters well
[ https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522720#comment-16522720 ] Hive QA commented on HIVE-19581: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12929056/HIVE-19581.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14604 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[unicode_data] (batchId=81) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12112/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12112/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12112/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12929056 - PreCommit-HIVE-Build > view do not support unicode characters well > --- > > Key: HIVE-19581 > URL: https://issues.apache.org/jira/browse/HIVE-19581 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0 >Reporter: kai >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, explain.png, > metastore.png > > > create table t_test (name ,string) ; > insert into table t_test VALUES ('李四'); > create view t_view_test as select * from t_test where name='李四'; > when select * from t_view_test no records return -- This message was sent by Atlassian JIRA (v7.6.3#76005)
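[Editor's note] A round-trip illustration of the symptom in HIVE-19581 — not a claim about its root cause in Hive: if a view definition containing '李四' is written through a single-byte encoding and read back, the stored predicate no longer equals the original string, so the view matches no rows even though the underlying bytes survive.

```python
text = "李四"
# UTF-8 bytes reinterpreted as latin-1, as a latin1-backed store would return them:
mangled = text.encode("utf-8").decode("latin-1")
assert mangled != text  # the predicate comparison now fails

# The bytes themselves are intact; only their interpretation was lost:
roundtrip = mangled.encode("latin-1").decode("utf-8")
assert roundtrip == text
```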
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522715#comment-16522715 ] Vihang Karajgaonkar commented on HIVE-17751: Hi [~alangates] bq. Is this any different from the compactor situation? In both cases we have the metastore kicking off processes that require Hive's execution engine. Can you please point me to the code in metastore which does this? I am not very familiar with Compactor/Stats updater. > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, > HIVE-17751.06-standalone-metastore.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We should ideally have a thin client library so that cross version > support for external applications is easier. We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17852) remove support for list bucketing "stored as directories" in 3.0
[ https://issues.apache.org/jira/browse/HIVE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522704#comment-16522704 ] Hive QA commented on HIVE-17852: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 87m 44s{color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s{color} | {color:red} branch/itests/hive-unit cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} branch/metastore cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 58s{color} | {color:red} branch/ql cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 9s{color} | {color:red} branch/standalone-metastore cannot run setBugDatabaseInfo from findbugs {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 44s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 
7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 50m 41s{color} | {color:red} root: The patch generated 1342 new + 249579 unchanged - 1421 fixed = 250921 total (was 251000) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 31s{color} | {color:red} itests/hive-unit: The patch generated 150 new + 12402 unchanged - 157 fixed = 12552 total (was 12559) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 28m 22s{color} | {color:red} ql: The patch generated 738 new + 128934 unchanged - 807 fixed = 129672 total (was 129741) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 4m 11s{color} | {color:red} standalone-metastore: The patch generated 454 new + 19547 unchanged - 457 fixed = 20001 total (was 20004) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 39s{color} | {color:red} patch/itests/hive-unit cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} patch/metastore cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 52s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 6s{color} | {color:red} patch/standalone-metastore cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 5m 59s{color} | {color:red} root generated 2 new + 367 unchanged - 2 fixed = 369 total (was 369) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 54s{color} | {color:red} ql generated 2 new + 98 unchanged - 2 fixed = 100 total (was 100) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} |
[jira] [Updated] (HIVE-19948) HiveCli is not splitting the command by semicolon properly if quotes are inside the string
[ https://issues.apache.org/jira/browse/HIVE-19948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-19948: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. This test failure seems flaky. > HiveCli is not splitting the command by semicolon properly if quotes are > inside the string > --- > > Key: HIVE-19948 > URL: https://issues.apache.org/jira/browse/HIVE-19948 > Project: Hive > Issue Type: Bug > Components: CLI >Affects Versions: 2.2.0 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19948.1.patch, HIVE-19948.2.patch, > HIVE-19948.3.patch > > > HIVE-15297 tries to split the command by considering semicolon inside string, > but it doesn't consider the case that quotes can also be inside string. > For the following command {{insert into escape1 partition (ds='1', part='3') > values ("abc' ");}}, it will fail with > {noformat} > 18/06/19 16:37:05 ERROR ql.Driver: FAILED: ParseException line 1:64 > extraneous input ';' expecting EOF near '' > org.apache.hadoop.hive.ql.parse.ParseException: line 1:64 extraneous input > ';' expecting EOF near '' > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220) > at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74) > at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:67) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:606) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1686) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1633) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1628) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > at 
org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:239) > at org.apache.hadoop.util.RunJar.main(RunJar.java:153) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
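[Editor's note] The failure above comes from splitting the command line on semicolons without tracking quote state, so the single quote inside `"abc' "` opens a phantom string literal. A quote-aware splitter along these lines fixes it; this is an illustrative sketch, not CliDriver's actual code.

```python
def split_statements(commands):
    """Split a command string on ';', honoring single/double quotes and
    backslash escapes, so quote characters of the other kind inside a
    quoted literal (e.g. "abc' ") cannot derail the split."""
    parts, buf, quote, escaped = [], [], None, False
    for ch in commands:
        if escaped:
            buf.append(ch); escaped = False
        elif ch == "\\":
            buf.append(ch); escaped = True
        elif quote:
            buf.append(ch)
            if ch == quote:
                quote = None       # only the same quote kind closes the literal
        elif ch in ("'", '"'):
            buf.append(ch); quote = ch
        elif ch == ";":
            parts.append("".join(buf)); buf = []
        else:
            buf.append(ch)
    if buf:
        parts.append("".join(buf))
    return parts

# The repro from the report now yields exactly one statement:
stmt = '''insert into escape1 partition (ds='1', part='3') values ("abc' ");'''
print(split_statements(stmt))
```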
[jira] [Resolved] (HIVE-19952) Error
[ https://issues.apache.org/jira/browse/HIVE-19952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates resolved HIVE-19952. --- Resolution: Invalid > Error > - > > Key: HIVE-19952 > URL: https://issues.apache.org/jira/browse/HIVE-19952 > Project: Hive > Issue Type: Bug >Reporter: Sebastian >Priority: Major > > I am joining 2 tables (hive tables in radoop rapidminer) and I am getting > this error, someone knows why? > * Message: HiveQL error. Message: > org.apache.hive.service.cli.HiveSQLException: Error while processing > statement: FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.IllegalArgumentException: > Can not create a Path from an ... > Thanks -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19967) SMB Join : Need Optraits for PTFOperator ala GBY Op
[ https://issues.apache.org/jira/browse/HIVE-19967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522671#comment-16522671 ] Deepak Jaiswal commented on HIVE-19967: --- [~hagleitn] [~jdere] can you please take a look? > SMB Join : Need Optraits for PTFOperator ala GBY Op > --- > > Key: HIVE-19967 > URL: https://issues.apache.org/jira/browse/HIVE-19967 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19967.1.patch, HIVE-19967.2.patch, > HIVE-19967.3.patch, HIVE-19967.4.patch > > > The SMB join on one or more PTF Ops should reset the optraits keys just like > GBY Op does. > Currently there is no implementation of PTFOp optraits. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19957) DOCO - not clear which execution engines support LLAP
[ https://issues.apache.org/jira/browse/HIVE-19957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522668#comment-16522668 ] Alan Gates commented on HIVE-19957: --- Short answer: no, you can't use Spark as the execution engine with LLAP. LLAP is tightly integrated with Tez. > DOCO - not clear which execution engines support LLAP > - > > Key: HIVE-19957 > URL: https://issues.apache.org/jira/browse/HIVE-19957 > Project: Hive > Issue Type: Improvement > Components: Documentation, llap, Spark, Tez >Reporter: t oo >Priority: Major > > couldn't see any info on whether 'hive on spark' supports LLAP -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19957) DOCO - not clear which execution engines support LLAP
[ https://issues.apache.org/jira/browse/HIVE-19957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19957: -- Component/s: Documentation > DOCO - not clear which execution engines support LLAP > - > > Key: HIVE-19957 > URL: https://issues.apache.org/jira/browse/HIVE-19957 > Project: Hive > Issue Type: Improvement > Components: Documentation, llap, Spark, Tez >Reporter: t oo >Priority: Major > > couldn't see any info on whether 'hive on spark' supports LLAP -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522666#comment-16522666 ] Sergey Shelukhin commented on HIVE-17751: - MM compactor and stats updater are both Hive query based, so they still require Hive execution engine to run. I think there are tentative plans to move ACID compactor in the same directions, for various benefits e.g. running on Tez/LLAP, and some others that were discussed that I don't recall... If everything is query based, we could have an interface to run the query that would abstract this, but it would still basically require some Hive... it could be embedded Hive (no HS2) running queries on Tez/MR that also don't require any services being present; developing a non-query-based MM compactor (MM is not tied to ORC so it has to support any formats and all that stuff) and stats updater (same) seems like it's not worth the effort. > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, > HIVE-17751.06-standalone-metastore.patch > > > external applications which are interfacing with HMS should ideally only > include HMSClient library instead of one big library containing server as > well. We should ideally have a thin client library so that cross version > support for external applications is easier. We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19581) view do not support unicode characters well
[ https://issues.apache.org/jira/browse/HIVE-19581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522659#comment-16522659 ] Hive QA commented on HIVE-19581: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 0m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-12112/dev-support/hive-personality.sh | | git revision | master / f2c4f31 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-12112/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> view do not support unicode characters well > --- > > Key: HIVE-19581 > URL: https://issues.apache.org/jira/browse/HIVE-19581 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0 >Reporter: kai >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-19581.1.patch, HIVE-19581.2.patch, explain.png, > metastore.png > > > create table t_test (name ,string) ; > insert into table t_test VALUES ('李四'); > create view t_view_test as select * from t_test where name='李四'; > when select * from t_view_test no records return -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522655#comment-16522655 ] Jason Dere commented on HIVE-19980: --- +1 > GenericUDTFGetSplits fails when order by query returns 0 rows > - > > Key: HIVE-19980 > URL: https://issues.apache.org/jira/browse/HIVE-19980 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Kshitij Badani >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19980.1.patch > > > When an ORDER BY query returns 0 rows, there will not be any files in the temporary > table location for GenericUDTFGetSplits, > which results in the following exception: > {code:java} > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:217) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:420) > ... 52 more{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
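The stack trace quoted above comes from indexing into an empty file listing. As a minimal sketch (not Hive's actual code; `getSplits` here is a hypothetical stand-in for the real split-generation path), the failure mode and the obvious guard look like this:

```java
import java.util.Collections;
import java.util.List;

// Minimal sketch, not Hive's implementation: with zero files under the
// temp table location, indexing files[0] is exactly the
// ArrayIndexOutOfBoundsException: 0 quoted above; the guard returns an
// empty split list for the 0-row ORDER BY case instead.
public class SplitGuard {
    public static List<String> getSplits(String[] files) {
        if (files == null || files.length == 0) {
            return Collections.emptyList(); // 0-row query: no splits
        }
        return Collections.singletonList(files[0]);
    }

    public static void main(String[] args) {
        System.out.println(getSplits(new String[0]));        // []
        System.out.println(getSplits(new String[]{"p-0"}));  // [p-0]
    }
}
```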
[jira] [Updated] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19980: - Reporter: Kshitij Badani (was: Prasanth Jayachandran)
[jira] [Updated] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19980: - Status: Patch Available (was: Open)
[jira] [Commented] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522651#comment-16522651 ] Prasanth Jayachandran commented on HIVE-19980: -- [~jdere] can you please take a look? small patch
[jira] [Updated] (HIVE-19981) Managed tables converted to external tables by the HiveStrictManagedMigration utility should be set to delete data when the table is dropped
[ https://issues.apache.org/jira/browse/HIVE-19981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19981: -- Issue Type: Sub-task (was: Bug) Parent: HIVE-19753 > Managed tables converted to external tables by the HiveStrictManagedMigration > utility should be set to delete data when the table is dropped > > > Key: HIVE-19981 > URL: https://issues.apache.org/jira/browse/HIVE-19981 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > Using the HiveStrictManagedMigration utility, tables can be converted to > conform to the Hive strict managed tables mode. > For managed tables that are converted to external tables by the utility, > these tables should keep the "drop data on delete" semantics they had when > they were managed tables. > One way to do this is to introduce a table property "external.table.purge", > which if true (and if the table is an external table), will let Hive know to > delete the table data when the table is dropped. This property will be set by > the HiveStrictManagedMigration utility when managed tables are converted to > external tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
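The proposed `external.table.purge` property can be made concrete with a usage sketch. The table name and location below are made up for illustration; the `TBLPROPERTIES` syntax is standard Hive DDL, and the drop-time behavior shown is the semantics the issue proposes, not necessarily what any released version does:

```sql
-- Hypothetical illustration of the proposed property.
CREATE EXTERNAL TABLE sales_ext (id INT, amount DOUBLE)
  LOCATION '/warehouse/sales_ext';

-- Mark the external table so Hive deletes its data on drop,
-- preserving the managed-table semantics it had before migration.
ALTER TABLE sales_ext SET TBLPROPERTIES ('external.table.purge' = 'true');

-- With the property set, this would also remove the files under
-- /warehouse/sales_ext, per the proposal above.
DROP TABLE sales_ext;
```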
[jira] [Assigned] (HIVE-19981) Managed tables converted to external tables by the HiveStrictManagedMigration utility should be set to delete data when the table is dropped
[ https://issues.apache.org/jira/browse/HIVE-19981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere reassigned HIVE-19981: -
[jira] [Updated] (HIVE-19980) GenericUDTFGetSplits fails when order by query returns 0 rows
[ https://issues.apache.org/jira/browse/HIVE-19980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19980: - Attachment: HIVE-19980.1.patch
[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules
[ https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522635#comment-16522635 ] Alan Gates commented on HIVE-17751: --- Is this any different from the compactor situation? In both cases we have the metastore kicking off processes that require Hive's execution engine. On the compactor we don't have agreement on how to move forward. [~eugene.koifman] believes we should move the compactor to HS2. I see the appeal there since HS2 has all the necessary tools. My concern is that this locks non-Hive users out of writing ACID files (unless there is also a Hive instance using the metastore). In order to keep the compactor in the metastore we would have to make the engine that executes the jobs pluggable so that others could use the compactor if they were willing to implement the necessary execution logic. It seems to me after a quick glance that the same applies here. Stats being updated in the background benefits all engines. We don't want to tie that functionality only to Hive. But it will require abstracting out execution logic that Hive already has and others could potentially implement. I think we should come to agreement on this before we start moving the current compactor and stats background updater code around. > Separate HMS Client and HMS server into separate sub-modules > > > Key: HIVE-17751 > URL: https://issues.apache.org/jira/browse/HIVE-17751 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-17751.01.patch, > HIVE-17751.06-standalone-metastore.patch > > > External applications interfacing with HMS should ideally include only the > HMS client library instead of one big library containing the server as > well. We should have a thin client library so that cross-version > support for external applications is easier. 
We should sub-divide the > standalone module into possibly 3 modules (one for common classes, one for > client classes and one for server) or 2 sub-modules (one for client and one > for server) so that we can generate separate jars for HMS client and server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
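The "pluggable execution engine" idea from Alan Gates's comment can be sketched as an interface. This is purely hypothetical — no such API exists in Hive or in this thread; the names are invented — just to make the abstraction concrete:

```java
// Purely hypothetical sketch of the pluggable-engine idea discussed
// above: the metastore hands compaction / background-stats jobs to
// whatever engine implements this interface, so non-Hive engines could
// supply their own execution logic instead of depending on Hive's.
public class ExecutorSketch {
    interface MetastoreTaskExecutor {
        // Returns true if the engine accepted and ran the task.
        boolean execute(String taskDescription);
    }

    // A trivial stand-in engine that just records the request.
    static class LoggingExecutor implements MetastoreTaskExecutor {
        String lastTask;

        @Override
        public boolean execute(String taskDescription) {
            lastTask = taskDescription;
            return true;
        }
    }

    public static void main(String[] args) {
        LoggingExecutor engine = new LoggingExecutor();
        engine.execute("compact table acid_t partition(ds='2018-06-25')");
        System.out.println(engine.lastTask);
    }
}
```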