[jira] [Commented] (HIVE-17417) LazySimple Timestamp is very expensive

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248804#comment-16248804
 ] 

Lefty Leverenz commented on HIVE-17417:
---

[~prasanth_j], please add 2.4.0 to the fix version since you also committed the 
patch to branch-2.

Thanks.

> LazySimple Timestamp is very expensive
> --
>
> Key: HIVE-17417
> URL: https://issues.apache.org/jira/browse/HIVE-17417
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17417.1.patch, HIVE-17417.2.patch, 
> HIVE-17417.3.patch, HIVE-17417.4.patch, HIVE-17417.5.patch, 
> HIVE-17417.6.patch, date-serialize.png, timestamp-serialize.png, 
> ts-jmh-perf.png
>
>
> In a specific case where a schema contains an array with timestamp and 
> date fields (array size >1), any access to this column is very 
> expensive in terms of CPU, as most of the time is spent serializing the 
> timestamp and date values. Refer to the attached profiles: >70% of the time is 
> spent in serialization + toString conversions. 
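
To illustrate the cost being described, here is a minimal standalone Java sketch (not Hive code; the class name and iteration count are made up) showing how repeated per-value timestamp parsing and formatting, the kind of work a text serde does for every array element, dominates CPU:

{noformat}
import java.sql.Timestamp;

public class TimestampSerializeCost {
  public static void main(String[] args) {
    long start = System.nanoTime();
    long sink = 0;
    // Re-parsing and re-formatting a timestamp once per row/element, as a
    // LazySimple-style text serde must, costs far more than the surrounding
    // column access itself.
    for (int i = 0; i < 1_000_000; i++) {
      Timestamp ts = Timestamp.valueOf("2017-11-11 07:11:00.123");
      sink += ts.toString().length();
    }
    System.out.println("1M parse+format cycles took "
        + (System.nanoTime() - start) / 1_000_000 + " ms (sink=" + sink + ")");
  }
}
{noformat}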



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-17113) Duplicate bucket files can get written to table by runaway task

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112279#comment-16112279
 ] 

Lefty Leverenz edited comment on HIVE-17113 at 11/12/17 7:11 AM:
-

Doc note:  This adds *hive.exec.move.files.from.source.dir* to HiveConf.java, 
so it needs to be documented in the wiki.

* [Configuration Properties -- Query and DDL Execution | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-QueryandDDLExecution]

Added a TODOC3.0 label.

Update 12/Nov/17:  HIVE-17963 removes *hive.exec.move.files.from.source.dir* 
for the same release, so it doesn't need to be documented after all.  I removed 
the TODOC3.0 label.


was (Author: le...@hortonworks.com):
Doc note:  This adds *hive.exec.move.files.from.source.dir* to HiveConf.java, 
so it needs to be documented in the wiki.

* [Configuration Properties -- Query and DDL Execution | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-QueryandDDLExecution]

Added a TODOC3.0 label.

> Duplicate bucket files can get written to table by runaway task
> ---
>
> Key: HIVE-17113
> URL: https://issues.apache.org/jira/browse/HIVE-17113
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 3.0.0
>
> Attachments: HIVE-17113.1.patch, HIVE-17113.2.patch, 
> HIVE-17113.3.patch
>
>
> Saw a table get a duplicate bucket file from a Hive query. It looks like the 
> following happened:
> 1. Task attempt A_0 starts, but then stops making progress
> 2. The job was running with speculative execution on, and task attempt A_1 is 
> started
> 3. Task attempt A_1 finishes execution and saves its output to the temp 
> directory.
> 5. A task kill is sent to A_0, though this does not appear to actually kill A_0
> 6. The job for the query finishes and Utilities.mvFileToFinalPath() calls 
> Utilities.removeTempOrDuplicateFiles() to check for duplicate bucket files
> 7. A_0 (still running) finally finishes and saves its file to the temp 
> directory. At this point we now have duplicate bucket files - oops!
> 8. Utilities.removeTempOrDuplicateFiles() moves the temp directory to the 
> final location, where it is later moved to the partition directory.
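
For illustration only, a rough Java sketch of the kind of per-bucket de-duplication the check above performs (this is a hypothetical stand-in, not the actual Utilities.removeTempOrDuplicateFiles() code, and the bucket-prefix parsing is deliberately simplified). A task attempt that writes after this check runs can still reintroduce a duplicate, which is exactly the race described in the steps above:

{noformat}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DedupSketch {
  // Keep only the largest file per bucket prefix (e.g. the "000000" part of
  // "000000_0"), deleting the rest.
  static void removeDuplicateBucketFiles(FileSystem fs, Path tmpDir) throws IOException {
    Map<String, FileStatus> best = new HashMap<>();
    for (FileStatus file : fs.listStatus(tmpDir)) {
      String bucket = file.getPath().getName().split("_")[0];
      FileStatus prev = best.get(bucket);
      if (prev == null || file.getLen() > prev.getLen()) {
        if (prev != null) {
          fs.delete(prev.getPath(), false);
        }
        best.put(bucket, file);
      } else {
        fs.delete(file.getPath(), false);
      }
    }
  }
}
{noformat}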



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17113) Duplicate bucket files can get written to table by runaway task

2017-11-11 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-17113:
--
Labels:   (was: TODOC3.0)

> Duplicate bucket files can get written to table by runaway task
> ---
>
> Key: HIVE-17113
> URL: https://issues.apache.org/jira/browse/HIVE-17113
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 3.0.0
>
> Attachments: HIVE-17113.1.patch, HIVE-17113.2.patch, 
> HIVE-17113.3.patch
>
>
> Saw a table get a duplicate bucket file from a Hive query. It looks like the 
> following happened:
> 1. Task attempt A_0 starts, but then stops making progress
> 2. The job was running with speculative execution on, and task attempt A_1 is 
> started
> 3. Task attempt A_1 finishes execution and saves its output to the temp 
> directory.
> 5. A task kill is sent to A_0, though this does not appear to actually kill A_0
> 6. The job for the query finishes and Utilities.mvFileToFinalPath() calls 
> Utilities.removeTempOrDuplicateFiles() to check for duplicate bucket files
> 7. A_0 (still running) finally finishes and saves its file to the temp 
> directory. At this point we now have duplicate bucket files - oops!
> 8. Utilities.removeTempOrDuplicateFiles() moves the temp directory to the 
> final location, where it is later moved to the partition directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17963) Fix for HIVE-17113 can be improved for non-blobstore filesystems

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248800#comment-16248800
 ] 

Lefty Leverenz commented on HIVE-17963:
---

No documentation needed:  This removes *hive.exec.move.files.from.source.dir* 
which was added by HIVE-17113 for the same release.

> Fix for HIVE-17113 can be improved for non-blobstore filesystems
> 
>
> Key: HIVE-17963
> URL: https://issues.apache.org/jira/browse/HIVE-17963
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 3.0.0
>
> Attachments: HIVE-17963.1.patch, HIVE-17963.2.patch
>
>
> HIVE-17113/HIVE-17813 fix the duplicate file issue by performing file moves 
> on a file-by-file basis. For non-blobstore filesystems this results in many 
> more filesystem/namenode operations compared to the previous 
> Utilities.mvFileToFinalPath() behavior (dedup files in src dir, rename src 
> dir to final dir).
> For non-blobstore filesystems, a better solution would be the one described 
> [here|https://issues.apache.org/jira/browse/HIVE-17113?focusedCommentId=16100564=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16100564]:
> 1) Move the temp directory to a new directory name, to prevent additional 
> files from being added by any runaway processes.
> 2) Run removeTempOrDuplicateFiles() on this renamed temp directory
> 3) Run renameOrMoveFiles() to move the renamed temp directory to the final 
> location.
> This results in only one additional file operation in non-blobstore FSes 
> compared to the original Utilities.mvFileToFinalPath() behavior.
> The proposal is to do away with the config setting 
> hive.exec.move.files.from.source.dir and always have behavior that should 
> take care of the duplicate file issue described in HIVE-17113. For 
> non-blobstore filesystems we will do steps 1-3 described above. For blobstore 
> filesystems we will do the solution done in HIVE-17113/HIVE-17813 which does 
> the file-by-file copy - this should have the same number of file operations 
> as doing a rename directory on blobstore, which effectively results in file 
> moves on a file-by-file basis.
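
A minimal sketch of steps 1-3 for a non-blobstore filesystem, assuming HDFS-style atomic directory renames (the class name, method names, and directory suffix are placeholders, not the actual Utilities code):

{noformat}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MoveTempToFinalSketch {
  static void moveTmpToFinal(FileSystem fs, Path tmpDir, Path finalDir) throws IOException {
    // 1) Rename the temp dir so a runaway task attempt can no longer add files to it.
    Path sealedDir = new Path(tmpDir.getParent(), tmpDir.getName() + ".sealed");
    if (!fs.rename(tmpDir, sealedDir)) {
      throw new IOException("Could not seal " + tmpDir);
    }
    // 2) Remove duplicate/temp bucket files inside the sealed directory.
    removeTempOrDuplicateFiles(fs, sealedDir);
    // 3) Move the sealed directory to the final location in one rename,
    //    which costs a single namenode operation on HDFS.
    if (!fs.rename(sealedDir, finalDir)) {
      throw new IOException("Could not move " + sealedDir + " to " + finalDir);
    }
  }

  // Stand-in for the de-duplication step (Utilities.removeTempOrDuplicateFiles in Hive).
  static void removeTempOrDuplicateFiles(FileSystem fs, Path dir) throws IOException {
    // dedup logic omitted in this sketch
  }
}
{noformat}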



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16917) HiveServer2 guard rails - Limit concurrent connections from user

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248797#comment-16248797
 ] 

Lefty Leverenz commented on HIVE-16917:
---

Doc note:  This adds *hive.server2.limit.connections.per.user*, 
*hive.server2.limit.connections.per.ipaddress*, and 
*hive.server2.limit.connections.per.user.ipaddress* to HiveConf.java, so they 
need to be documented in the wiki.

* [Configuration Properties -- HiveServer2 | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-HiveServer2]

Added a TODOC3.0 label.
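
For reference, a minimal sketch of setting these properties programmatically on a HiveConf (the numeric values are arbitrary examples; in a real deployment they would normally be set in hive-site.xml):

{noformat}
import org.apache.hadoop.hive.conf.HiveConf;

public class Hs2ConnectionLimits {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Arbitrary example limits for concurrent HS2 connections.
    conf.setInt("hive.server2.limit.connections.per.user", 25);
    conf.setInt("hive.server2.limit.connections.per.ipaddress", 50);
    conf.setInt("hive.server2.limit.connections.per.user.ipaddress", 10);
    System.out.println(conf.get("hive.server2.limit.connections.per.user"));
  }
}
{noformat}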

> HiveServer2 guard rails - Limit concurrent connections from user
> 
>
> Key: HIVE-16917
> URL: https://issues.apache.org/jira/browse/HIVE-16917
> Project: Hive
>  Issue Type: New Feature
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Prasanth Jayachandran
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-16917.1.patch, HIVE-16917.2.patch, 
> HIVE-16917.3.patch, HIVE-16917.4.patch, HIVE-16917.5.patch
>
>
> Rogue applications can make HS2 unusable for others by making too many 
> connections at a time.
> HS2 should start rejecting new connections from a user after the number of 
> connections has reached a configurable threshold.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16917) HiveServer2 guard rails - Limit concurrent connections from user

2017-11-11 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-16917:
--
Labels: TODOC3.0  (was: )

> HiveServer2 guard rails - Limit concurrent connections from user
> 
>
> Key: HIVE-16917
> URL: https://issues.apache.org/jira/browse/HIVE-16917
> Project: Hive
>  Issue Type: New Feature
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Prasanth Jayachandran
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-16917.1.patch, HIVE-16917.2.patch, 
> HIVE-16917.3.patch, HIVE-16917.4.patch, HIVE-16917.5.patch
>
>
> Rogue applications can make HS2 unusable for others by making too many 
> connections at a time.
> HS2 should start rejecting new connections from a user after the number of 
> connections has reached a configurable threshold.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17907) enable and apply resource plan commands in HS2

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248789#comment-16248789
 ] 

Lefty Leverenz commented on HIVE-17907:
---

If this needs to be documented in the wiki, please add a TODOC3.0 label.  
Thanks.

> enable and apply resource plan commands in HS2
> --
>
> Key: HIVE-17907
> URL: https://issues.apache.org/jira/browse/HIVE-17907
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 3.0.0
>
> Attachments: HIVE-17907.01.patch, HIVE-17907.02.patch, 
> HIVE-17907.02.patch, HIVE-17907.only.nogen.patch, HIVE-17907.patch
>
>
> Enabling and applying the RP should only be runnable in HS2 with active WM. 
> Both should validate the full resource plan (or at least enable should; users 
> cannot modify the RP via normal means once enabled, but it might be worth 
> double checking since we have to fetch it anyway to apply).
> Then, apply should propagate the resource plan to the WM instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17902) add notions of default pool and start adding unmanaged mapping

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248783#comment-16248783
 ] 

Lefty Leverenz commented on HIVE-17902:
---

Doc note:  This adds *hive.metastore.wm.default.pool.size* to HiveConf.java, so 
it needs to be documented in the wiki.  (Perhaps the LLAP section of 
Configuration Properties will have a subsection for workload management.)

* [Configuration Properties -- LLAP | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-LLAP]

Added a TODOC3.0 label.

> add notions of default pool and start adding unmanaged mapping
> --
>
> Key: HIVE-17902
> URL: https://issues.apache.org/jira/browse/HIVE-17902
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-17902.01.patch, HIVE-17902.02.patch, 
> HIVE-17902.03.patch, HIVE-17902.04.patch, HIVE-17902.05.patch, 
> HIVE-17902.06.patch, HIVE-17902.07.patch, HIVE-17902.08.patch, 
> HIVE-17902.09.patch, HIVE-17902.10.patch, HIVE-17902.patch
>
>
> This is needed to map queries between WM and non-WM execution



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17902) add notions of default pool and start adding unmanaged mapping

2017-11-11 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-17902:
--
Labels: TODOC3.0  (was: )

> add notions of default pool and start adding unmanaged mapping
> --
>
> Key: HIVE-17902
> URL: https://issues.apache.org/jira/browse/HIVE-17902
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-17902.01.patch, HIVE-17902.02.patch, 
> HIVE-17902.03.patch, HIVE-17902.04.patch, HIVE-17902.05.patch, 
> HIVE-17902.06.patch, HIVE-17902.07.patch, HIVE-17902.08.patch, 
> HIVE-17902.09.patch, HIVE-17902.10.patch, HIVE-17902.patch
>
>
> This is needed to map queries between WM and non-WM execution



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17926) Support triggers for non-pool sessions

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248781#comment-16248781
 ] 

Lefty Leverenz commented on HIVE-17926:
---

Okay, thanks Prasanth.  I'll add a TODOC3.0 label as a reminder.

> Support triggers for non-pool sessions
> --
>
> Key: HIVE-17926
> URL: https://issues.apache.org/jira/browse/HIVE-17926
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-17926.1.patch, HIVE-17926.1.patch, 
> HIVE-17926.2.patch, HIVE-17926.3.patch, HIVE-17926.3.patch, HIVE-17926.4.patch
>
>
> The current trigger implementation works only with Tez session pools. When 
> Tez session pools are not used, a new session gets created for every 
> query, in which case trigger validation does not happen. It would be good to 
> support such one-off sessions as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17926) Support triggers for non-pool sessions

2017-11-11 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-17926:
--
Labels: TODOC3.0  (was: )

> Support triggers for non-pool sessions
> --
>
> Key: HIVE-17926
> URL: https://issues.apache.org/jira/browse/HIVE-17926
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-17926.1.patch, HIVE-17926.1.patch, 
> HIVE-17926.2.patch, HIVE-17926.3.patch, HIVE-17926.3.patch, HIVE-17926.4.patch
>
>
> The current trigger implementation works only with Tez session pools. When 
> Tez session pools are not used, a new session gets created for every 
> query, in which case trigger validation does not happen. It would be good to 
> support such one-off sessions as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17926) Support triggers for non-pool sessions

2017-11-11 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248780#comment-16248780
 ] 

Prasanth Jayachandran commented on HIVE-17926:
--

[~leftylev] Yes, this full feature has to be documented in the wiki. The 
feature is still in flux; I will update the wiki once it is completed.

> Support triggers for non-pool sessions
> --
>
> Key: HIVE-17926
> URL: https://issues.apache.org/jira/browse/HIVE-17926
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 3.0.0
>
> Attachments: HIVE-17926.1.patch, HIVE-17926.1.patch, 
> HIVE-17926.2.patch, HIVE-17926.3.patch, HIVE-17926.3.patch, HIVE-17926.4.patch
>
>
> The current trigger implementation works only with Tez session pools. When 
> Tez session pools are not used, a new session gets created for every 
> query, in which case trigger validation does not happen. It would be good to 
> support such one-off sessions as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17926) Support triggers for non-pool sessions

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248778#comment-16248778
 ] 

Lefty Leverenz commented on HIVE-17926:
---

Should this be documented in the wiki?

> Support triggers for non-pool sessions
> --
>
> Key: HIVE-17926
> URL: https://issues.apache.org/jira/browse/HIVE-17926
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 3.0.0
>
> Attachments: HIVE-17926.1.patch, HIVE-17926.1.patch, 
> HIVE-17926.2.patch, HIVE-17926.3.patch, HIVE-17926.3.patch, HIVE-17926.4.patch
>
>
> The current trigger implementation works only with Tez session pools. When 
> Tez session pools are not used, a new session gets created for every 
> query, in which case trigger validation does not happen. It would be good to 
> support such one-off sessions as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17965) Remove HIVELIMITTABLESCANPARTITION support

2017-11-11 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248777#comment-16248777
 ] 

Lefty Leverenz commented on HIVE-17965:
---

Doc note:  This removes the configuration parameter 
*hive.limit.query.max.table.partition* so the wiki needs to be updated in two 
places.

* [hive.limit.query.max.table.partition | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.limit.query.max.table.partition]
* [hive.metastore.limit.partition.request | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.metastore.limit.partition.request]
 (description)

Added a TODOC3.0 label.

> Remove HIVELIMITTABLESCANPARTITION support
> --
>
> Key: HIVE-17965
> URL: https://issues.apache.org/jira/browse/HIVE-17965
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Trivial
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-17965.01.patch
>
>
> HIVE-13884 marked it as deprecated



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17965) Remove HIVELIMITTABLESCANPARTITION support

2017-11-11 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-17965:
--
Labels: TODOC3.0  (was: )

> Remove HIVELIMITTABLESCANPARTITION support
> --
>
> Key: HIVE-17965
> URL: https://issues.apache.org/jira/browse/HIVE-17965
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Trivial
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-17965.01.patch
>
>
> HIVE-13884 marked it as deprecated



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17361) Support LOAD DATA for transactional tables

2017-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248652#comment-16248652
 ] 

Hive QA commented on HIVE-17361:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12897211/HIVE-17361.08.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 11379 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_partitioned_native] 
(batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=81)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_conversions]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=156)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[load_data_into_acid]
 (batchId=91)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=111)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=206)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadData (batchId=254)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversion 
(batchId=254)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataPartitioned (batchId=254)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanQpChanges 
(batchId=281)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testACIDReaderFooterSerializeWithDeltas
 (batchId=267)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testACIDReaderNoFooterSerializeWithDeltas
 (batchId=267)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testEtlCombinedStrategy 
(batchId=267)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7781/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7781/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7781/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12897211 - PreCommit-HIVE-Build

> Support LOAD DATA for transactional tables
> --
>
> Key: HIVE-17361
> URL: https://issues.apache.org/jira/browse/HIVE-17361
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Wei Zheng
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17361.07.patch, HIVE-17361.08.patch, 
> HIVE-17361.1.patch, HIVE-17361.2.patch, HIVE-17361.3.patch, HIVE-17361.4.patch
>
>
> LOAD DATA was not supported since ACID was introduced. Need to fill this gap 
> between ACID table and regular hive table.
> Current Documentation is under [DML 
> Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations]
>  and [Loading files into 
> tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]:
> \\
> * Load Data performs very limited validations of the data, in particular it 
> uses the input file name which may not be in 0_0 which can break some 
> read logic.  (Certainly will for Acid).
> * It does not check the schema of the file.  This may be a non issue for Acid 
> which requires ORC which is self describing so Schema Evolution may handle 
> this seamlessly.  (Assuming Schema is not too different).
> * It does check that _InputFormat_S are compatible. 
> * Bucketed (and thus sorted) tables don't support Load Data (but only if 
> hive.strict.checks.bucketing=true (default)).  Will keep this restriction for 
> Acid.
> * Load Data supports OVERWRITE clause
> * What happens to file permissions/ownership: rename vs copy differences
> \\
> The implementation will follow the same idea as 

[jira] [Updated] (HIVE-16406) Remove unwanted interning when creating PartitionDesc

2017-11-11 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-16406:

Status: Open  (was: Patch Available)

> Remove unwanted interning when creating PartitionDesc
> -
>
> Key: HIVE-16406
> URL: https://issues.apache.org/jira/browse/HIVE-16406
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-16406.1.patch, HIVE-16406.2.patch, 
> HIVE-16406.3.patch, HIVE-16406.profiler.png
>
>
> {{PartitionDesc::getTableDesc}} interns all table description properties by 
> default. But the table description properties are already interned and need 
> not be interned again. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17361) Support LOAD DATA for transactional tables

2017-11-11 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17361:
--
Attachment: HIVE-17361.08.patch

> Support LOAD DATA for transactional tables
> --
>
> Key: HIVE-17361
> URL: https://issues.apache.org/jira/browse/HIVE-17361
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Wei Zheng
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17361.07.patch, HIVE-17361.08.patch, 
> HIVE-17361.1.patch, HIVE-17361.2.patch, HIVE-17361.3.patch, HIVE-17361.4.patch
>
>
> LOAD DATA was not supported since ACID was introduced. Need to fill this gap 
> between ACID table and regular hive table.
> Current Documentation is under [DML 
> Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations]
>  and [Loading files into 
> tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]:
> \\
> * Load Data performs very limited validations of the data, in particular it 
> uses the input file name which may not be in 0_0 which can break some 
> read logic.  (Certainly will for Acid).
> * It does not check the schema of the file.  This may be a non issue for Acid 
> which requires ORC which is self describing so Schema Evolution may handle 
> this seamlessly.  (Assuming Schema is not too different).
> * It does check that _InputFormat_S are compatible. 
> * Bucketed (and thus sorted) tables don't support Load Data (but only if 
> hive.strict.checks.bucketing=true (default)).  Will keep this restriction for 
> Acid.
> * Load Data supports OVERWRITE clause
> * What happens to file permissions/ownership: rename vs copy differences
> \\
> The implementation will follow the same idea as in HIVE-14988 and use a 
> base_N/ dir for OVERWRITE clause.
> \\
> How is minor compaction going to handle delta/base with original files?
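
For illustration, a minimal JDBC sketch of the target usage: loading a file into a transactional table with the OVERWRITE clause. The connection URL, credentials, paths, and table name are placeholders, and HS2 is assumed to be already configured for ACID (DbTxnManager, concurrency enabled):

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoadDataIntoAcidTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = conn.createStatement()) {
      // The target is a full ACID (ORC) table; LOAD DATA on such a table is
      // what this issue enables.
      stmt.execute("CREATE TABLE IF NOT EXISTS acid_target (key INT) STORED AS ORC "
          + "TBLPROPERTIES ('transactional'='true')");
      // With OVERWRITE, the loaded file should end up under a new base_N/
      // directory, per the approach described above.
      stmt.execute("LOAD DATA INPATH '/tmp/staging/000000_0' "
          + "OVERWRITE INTO TABLE acid_target");
    }
  }
}
{noformat}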



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17221) Error: Error while compiling statement: FAILED: IndexOutOfBoundsException Index: 4, Size: 2 (state=42000,code=40000)

2017-11-11 Thread PRASHANT GOLASH (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248623#comment-16248623
 ] 

PRASHANT GOLASH commented on HIVE-17221:


I built Hive from the 2.1 branch.

pgolash-C02SH6ERG8WP:hive pgolash$ hive --version
Hive 2.1.2-SNAPSHOT

When running the commands you mentioned above, I don't see any error.

Are you still facing the issue?

> Error: Error while compiling statement: FAILED: IndexOutOfBoundsException 
> Index: 4, Size: 2 (state=42000,code=40000)
> 
>
> Key: HIVE-17221
> URL: https://issues.apache.org/jira/browse/HIVE-17221
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.1
> Environment: Amazon EMR 5.4 or any version where Hive 2.1.1 is used.
>Reporter: Matan Vardi
>Assignee: PRASHANT GOLASH
>
> Run the following queries in beeline:
> Observed that this is a regression; it used to work in Hive 1.x.
>  
> !connect jdbc:hive2://localhost:1/default (Login as hive/hive)
>  
> SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> SET hive.support.concurrency=true;
> SET hive.enforce.bucketing=true;
> SET hive.exec.dynamic.partition.mode=nonstrict;
> create table orders_bkt1 (
>  O_ORDERKEY DOUBLE,
>  O_CUSTKEY DOUBLE,
>  O_TOTALPRICE DOUBLE,
>  O_ORDERDATE STRING, 
>  O_ORDERPRIORITY STRING,
>  O_CLERK STRING,
>  O_SHIPPRIORITY DOUBLE,
>  O_COMMENT STRING)
> PARTITIONED BY (
> O_ORDERSTATUS STRING)
> CLUSTERED BY (O_ORDERPRIORITY) INTO 6 BUCKETS
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '|' STORED AS ORC
> TBLPROPERTIES ("transactional"="true");
> create table orders_src (
> O_ORDERKEY DOUBLE,
> O_CUSTKEY DOUBLE,
> O_ORDERSTATUS STRING,
> O_TOTALPRICE DOUBLE,
> O_ORDERDATE STRING,
> O_ORDERPRIORITY STRING,
> O_CLERK STRING,
> O_SHIPPRIORITY DOUBLE,
> O_COMMENT STRING)
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE;
> Insert into orders_src values 
> (1.5,2.5,"PENDING",15.5,"10/25/2017","low","clerk", 1.0,"comment");
> CREATE TABLE IF NOT EXISTS 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent (a0 DOUBLE, a1 
> DOUBLE, a2 STRING, a3 DOUBLE, a4 STRING, a5 STRING, a6 STRING, a7 DOUBLE, a8 
> STRING) CLUSTERED BY (a0, a1, a2, a3, a4, a5, a6, a7, a8) INTO 32 BUCKETS 
> STORED AS ORC TBLPROPERTIES ('transactional'='true');
> INSERT INTO TABLE 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent SELECT 
> alias.o_orderkey as a0, alias.o_custkey as a1, alias.o_orderstatus as a2, 10 
> + alias.o_totalprice as a3, alias.o_orderdate as a4, alias.o_orderpriority as 
> a5, alias.o_clerk as a6, alias.o_shippriority as a7, alias.o_comment as a8 
> FROM orders_src alias;
> CREATE TABLE IF NOT EXISTS 
> w2834719472743385761_write_orders_bkt_src_tmp_m_orders_updtx_50percent (a0 
> DOUBLE, a1 DOUBLE, a2 DOUBLE, a3 STRING, a4 STRING, a5 STRING, a6 DOUBLE, a7 
> STRING, a8 STRING) CLUSTERED BY (a0) INTO 32 BUCKETS STORED AS ORC 
> TBLPROPERTIES ('transactional'='true');
> INSERT INTO TABLE 
> w2834719472743385761_write_orders_bkt_src_tmp_m_orders_updtx_50percent SELECT 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a0 as a0, 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a1 as a1, 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a3 as a2, 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a4 as a3, 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a5 as a4, 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a6 as a5, 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a7 as a6, 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a8 as a7, 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent.a2 as a8 FROM 
> w2834719472743385761_update_strategy_m_orders_updtx_50percent WHERE (CASE 
> WHEN w2834719472743385761_update_strategy_m_orders_updtx_50percent.a2 = 'P' 
> THEN 1 ELSE 0 END) = 1;
> CREATE TABLE IF NOT EXISTS 
> w2834719472743385761_write_orders_bkt_tgt_tmp_m_orders_updtx_50percent (a0 
> DOUBLE, a1 DOUBLE, a2 DOUBLE, a3 STRING, a4 STRING, a5 STRING, a6 DOUBLE, a7 
> STRING, a8 STRING) CLUSTERED BY (a0) INTO 32 BUCKETS STORED AS ORC 
> TBLPROPERTIES ('transactional'='true');
> INSERT INTO TABLE 
> w2834719472743385761_write_orders_bkt_tgt_tmp_m_orders_updtx_50percent SELECT 
> orders_bkt1.o_orderkey as a0, orders_bkt1.o_custkey as a1, 
> orders_bkt1.o_totalprice as a2, orders_bkt1.o_orderdate as a3, 
> orders_bkt1.o_orderpriority as a4, orders_bkt1.o_clerk as a5, 
> orders_bkt1.o_shippriority as a6, orders_bkt1.o_comment as a7, 
> orders_bkt1.o_orderstatus as a8 FROM 
> w2834719472743385761_write_orders_bkt_src_tmp_m_orders_updtx_50percent JOIN 
> orders_bkt1 ON 
> 

[jira] [Updated] (HIVE-17361) Support LOAD DATA for transactional tables

2017-11-11 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17361:
--
Description: 
LOAD DATA was not supported since ACID was introduced. Need to fill this gap 
between ACID table and regular hive table.

Current Documentation is under [DML 
Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations]
 and [Loading files into 
tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]:

\\
* Load Data performs very limited validations of the data, in particular it 
uses the input file name which may not be in 0_0 which can break some read 
logic.  (Certainly will for Acid).
* It does not check the schema of the file.  This may be a non issue for Acid 
which requires ORC which is self describing so Schema Evolution may handle this 
seamlessly.  (Assuming Schema is not too different).
* It does check that _InputFormat_S are compatible. 
* Bucketed (and thus sorted) tables don't support Load Data (but only if 
hive.strict.checks.bucketing=true (default)).  Will keep this restriction for 
Acid.
* Load Data supports OVERWRITE clause
* What happens to file permissions/ownership: rename vs copy differences

\\
The implementation will follow the same idea as in HIVE-14988 and use a base_N/ 
dir for OVERWRITE clause.

\\
How is minor compaction going to handle delta/base with original files?


  was:
LOAD DATA was not supported since ACID was introduced. Need to fill this gap 
between ACID table and regular hive table.

Current Documentation is under [DML 
Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations]
 and [Loading files into 
tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]:

\\
* Load Data performs very limited validations of the data, in particular it 
uses the input file name which may not be in 0_0 which can break some read 
logic.  (Certainly will for Acid).
* It does not check the schema of the file.  This may be a non issue for Acid 
which requires ORC which is self describing so Schema Evolution may handle this 
seamlessly.  (Assuming Schema is not too different).
* It does check that _InputFormat_S are compatible. 
* Bucketed (and thus sorted) tables don't support Load Data (but only if 
hive.strict.checks.bucketing=true (default)).  Will keep this restriction for 
Acid.
* Load Data supports OVERWRITE clause
* What happens to file permissions/ownership: rename vs copy differences

\\
The implementation will follow the same idea as in HIVE-14988 and use a base_N/ 
dir for OVERWRITE clause.
\\
How is minor compaction going to handle delta/base with original files?



> Support LOAD DATA for transactional tables
> --
>
> Key: HIVE-17361
> URL: https://issues.apache.org/jira/browse/HIVE-17361
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Wei Zheng
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17361.07.patch, HIVE-17361.1.patch, 
> HIVE-17361.2.patch, HIVE-17361.3.patch, HIVE-17361.4.patch
>
>
> LOAD DATA was not supported since ACID was introduced. Need to fill this gap 
> between ACID table and regular hive table.
> Current Documentation is under [DML 
> Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations]
>  and [Loading files into 
> tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]:
> \\
> * Load Data performs very limited validations of the data, in particular it 
> uses the input file name which may not be in 0_0 which can break some 
> read logic.  (Certainly will for Acid).
> * It does not check the schema of the file.  This may be a non issue for Acid 
> which requires ORC which is self describing so Schema Evolution may handle 
> this seamlessly.  (Assuming Schema is not too different).
> * It does check that _InputFormat_S are compatible. 
> * Bucketed (and thus sorted) tables don't support Load Data (but only if 
> hive.strict.checks.bucketing=true (default)).  Will keep this restriction for 
> Acid.
> * Load Data supports OVERWRITE clause
> * What happens to file permissions/ownership: rename vs copy differences
> \\
> The implementation will follow the same idea as in HIVE-14988 and use a 
> base_N/ dir for OVERWRITE clause.
> \\
> How is minor compaction going to handle delta/base with original files?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17361) Support LOAD DATA for transactional tables

2017-11-11 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17361:
--
Description: 
LOAD DATA was not supported since ACID was introduced. Need to fill this gap 
between ACID table and regular hive table.

Current Documentation is under [DML 
Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations]
 and [Loading files into 
tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]:

\\
* Load Data performs very limited validations of the data, in particular it 
uses the input file name which may not be in 0_0 which can break some read 
logic.  (Certainly will for Acid).
* It does not check the schema of the file.  This may be a non issue for Acid 
which requires ORC which is self describing so Schema Evolution may handle this 
seamlessly.  (Assuming Schema is not too different).
* It does check that _InputFormat_S are compatible. 
* Bucketed (and thus sorted) tables don't support Load Data (but only if 
hive.strict.checks.bucketing=true (default)).  Will keep this restriction for 
Acid.
* Load Data supports OVERWRITE clause
* What happens to file permissions/ownership: rename vs copy differences

\\
The implementation will follow the same idea as in HIVE-14988 and use a base_N/ 
dir for OVERWRITE clause.
\\
How is minor compaction going to handle delta/base with original files?


  was:
LOAD DATA was not supported since ACID was introduced. Need to fill this gap 
between ACID table and regular hive table.

Current Documentation is under [DML 
Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations]
 and [Loading files into 
tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]:

\\
* Load Data performs very limited validations of the data, in particular it 
uses the input file name which may not be in 0_0 which can break some read 
logic.  (Certainly will for Acid).
* It does not check the schema of the file.  This may be a non issue for Acid 
which requires ORC which is self describing so Schema Evolution may handle this 
seamlessly.  (Assuming Schema is not too different).
* It does check that _InputFormat_S are compatible. 
* Bucketed (and thus sorted) tables don't support Load Data (but only if 
hive.strict.checks.bucketing=true (default)).  Will keep this restriction for 
Acid.
* Load Data supports OVERWRITE clause
* What happens to file permissions/ownership: rename vs copy differences

\\
The implementation will follow the same idea as in HIVE-14988 and use a base_N/ 
dir for OVERWRITE clause.



> Support LOAD DATA for transactional tables
> --
>
> Key: HIVE-17361
> URL: https://issues.apache.org/jira/browse/HIVE-17361
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Wei Zheng
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17361.07.patch, HIVE-17361.1.patch, 
> HIVE-17361.2.patch, HIVE-17361.3.patch, HIVE-17361.4.patch
>
>
> LOAD DATA was not supported since ACID was introduced. Need to fill this gap 
> between ACID table and regular hive table.
> Current Documentation is under [DML 
> Operations|https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations]
>  and [Loading files into 
> tables|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables]:
> \\
> * Load Data performs very limited validations of the data, in particular it 
> uses the input file name which may not be in 0_0 which can break some 
> read logic.  (Certainly will for Acid).
> * It does not check the schema of the file.  This may be a non issue for Acid 
> which requires ORC which is self describing so Schema Evolution may handle 
> this seamlessly.  (Assuming Schema is not too different).
> * It does check that _InputFormat_S are compatible. 
> * Bucketed (and thus sorted) tables don't support Load Data (but only if 
> hive.strict.checks.bucketing=true (default)).  Will keep this restriction for 
> Acid.
> * Load Data supports OVERWRITE clause
> * What happens to file permissions/ownership: rename vs copy differences
> \\
> The implementation will follow the same idea as in HIVE-14988 and use a 
> base_N/ dir for OVERWRITE clause.
> \\
> How is minor compaction going to handle delta/base with original files?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17856) MM tables - IOW is not ACID compliant

2017-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248538#comment-16248538
 ] 

Hive QA commented on HIVE-17856:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12897174/HIVE-17856.8.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11384 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_null] (batchId=80)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dp_counter_mm]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=156)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=111)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=206)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testInsertOverwriteWithDynamicPartition
 (batchId=254)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testInsertOverwriteWithDynamicPartition
 (batchId=272)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7780/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7780/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7780/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12897174 - PreCommit-HIVE-Build

> MM tables - IOW is not ACID compliant
> -
>
> Key: HIVE-17856
> URL: https://issues.apache.org/jira/browse/HIVE-17856
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Steve Yeom
>  Labels: mm-gap-1
> Attachments: HIVE-17856.1.patch, HIVE-17856.2.patch, 
> HIVE-17856.3.patch, HIVE-17856.4.patch, HIVE-17856.5.patch, 
> HIVE-17856.6.patch, HIVE-17856.7.patch, HIVE-17856.8.patch
>
>
> The following tests were removed from mm_all during "integration"... I should 
> never have allowed such a manner of integration.
> MM logic should have been kept intact until ACID logic could catch up. Alas, 
> here we are.
> {noformat}
> drop table iow0_mm;
> create table iow0_mm(key int) tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> insert overwrite table iow0_mm select key from intermediate;
> insert into table iow0_mm select key + 1 from intermediate;
> select * from iow0_mm order by key;
> insert overwrite table iow0_mm select key + 2 from intermediate;
> select * from iow0_mm order by key;
> drop table iow0_mm;
> drop table iow1_mm; 
> create table iow1_mm(key int) partitioned by (key2 int)  
> tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> insert overwrite table iow1_mm partition (key2)
> select key as k1, key from intermediate union all select key as k1, key from 
> intermediate;
> insert into table iow1_mm partition (key2)
> select key + 1 as k1, key from intermediate union all select key as k1, key 
> from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key from intermediate union all select key + 4 as k1, 
> key from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key + 3 from intermediate union all select key + 2 as 
> k1, key + 2 from intermediate;
> select * from iow1_mm order by key, key2;
> drop table iow1_mm;
> {noformat}
> {noformat}
> drop table simple_mm;
> create table simple_mm(key int) stored as orc tblproperties 
> ("transactional"="true", 

[jira] [Commented] (HIVE-17856) MM tables - IOW is not ACID compliant

2017-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248513#comment-16248513
 ] 

Hive QA commented on HIVE-17856:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12897174/HIVE-17856.8.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11384 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=45)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dp_counter_mm]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=156)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=102)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=111)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=206)
org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testInsertOverwriteWithDynamicPartition
 (batchId=254)
org.apache.hadoop.hive.ql.TestTxnCommandsForOrcMmTable.testInsertOverwriteWithDynamicPartition
 (batchId=272)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7779/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7779/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7779/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12897174 - PreCommit-HIVE-Build

> MM tables - IOW is not ACID compliant
> -
>
> Key: HIVE-17856
> URL: https://issues.apache.org/jira/browse/HIVE-17856
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Steve Yeom
>  Labels: mm-gap-1
> Attachments: HIVE-17856.1.patch, HIVE-17856.2.patch, 
> HIVE-17856.3.patch, HIVE-17856.4.patch, HIVE-17856.5.patch, 
> HIVE-17856.6.patch, HIVE-17856.7.patch, HIVE-17856.8.patch
>
>
> The following tests were removed from mm_all during "integration"... I should 
> never have allowed such a manner of integration.
> MM logic should have been kept intact until ACID logic could catch up. Alas, 
> here we are.
> {noformat}
> drop table iow0_mm;
> create table iow0_mm(key int) tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> insert overwrite table iow0_mm select key from intermediate;
> insert into table iow0_mm select key + 1 from intermediate;
> select * from iow0_mm order by key;
> insert overwrite table iow0_mm select key + 2 from intermediate;
> select * from iow0_mm order by key;
> drop table iow0_mm;
> drop table iow1_mm; 
> create table iow1_mm(key int) partitioned by (key2 int)  
> tblproperties("transactional"="true", 
> "transactional_properties"="insert_only");
> insert overwrite table iow1_mm partition (key2)
> select key as k1, key from intermediate union all select key as k1, key from 
> intermediate;
> insert into table iow1_mm partition (key2)
> select key + 1 as k1, key from intermediate union all select key as k1, key 
> from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key from intermediate union all select key + 4 as k1, 
> key from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key + 3 from intermediate union all select key + 2 as 
> k1, key + 2 from intermediate;
> select * from iow1_mm order by key, key2;
> drop table iow1_mm;
> {noformat}
> {noformat}
> drop table simple_mm;
> create table simple_mm(key int) stored as orc tblproperties 
> 

[jira] [Commented] (HIVE-17361) Support LOAD DATA for transactional tables

2017-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248481#comment-16248481
 ] 

Hive QA commented on HIVE-17361:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12897168/HIVE-17361.07.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 11379 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=81)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_conversions]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=156)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=102)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[load_data_into_acid]
 (batchId=91)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=111)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] 
(batchId=245)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=206)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadData (batchId=254)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversion 
(batchId=254)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataPartitioned (batchId=254)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanQpChanges 
(batchId=281)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testACIDReaderFooterSerializeWithDeltas
 (batchId=267)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testACIDReaderNoFooterSerializeWithDeltas
 (batchId=267)
org.apache.hadoop.hive.ql.io.orc.TestInputOutputFormat.testEtlCombinedStrategy 
(batchId=267)
org.apache.hadoop.hive.ql.io.orc.TestOrcRawRecordMerger.testRecordReaderIncompleteDelta
 (batchId=267)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
org.apache.hive.hcatalog.streaming.TestStreaming.testInterleavedTransactionBatchCommits
 (batchId=196)
org.apache.hive.hcatalog.streaming.TestStreaming.testMultipleTransactionBatchCommits
 (batchId=196)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchAbortAndCommit
 (batchId=196)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Delimited
 (batchId=196)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_DelimitedUGI
 (batchId=196)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Json
 (batchId=196)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Regex
 (batchId=196)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_RegexUGI
 (batchId=196)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpRetryOnServerIdleTimeout 
(batchId=233)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7778/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7778/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7778/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 33 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12897168 - PreCommit-HIVE-Build

> Support LOAD DATA for transactional tables
> --
>
> Key: HIVE-17361
> URL: https://issues.apache.org/jira/browse/HIVE-17361
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Wei Zheng
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-17361.07.patch, HIVE-17361.1.patch, 
> HIVE-17361.2.patch, HIVE-17361.3.patch, HIVE-17361.4.patch
>
>
> LOAD DATA has not been supported since ACID was introduced. This gap between 
> ACID tables and regular Hive tables needs to be closed.
> Current Documentation is under [DML 
> 
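
As a point of reference, here is a hedged HiveQL sketch of the usage this 
feature targets. The table name, staging path, and ORC storage shown are 
illustrative assumptions and are not taken from the attached patches.
{code:sql}
-- Illustrative only: a full ACID (transactional) table as a LOAD DATA target.
CREATE TABLE acid_target (id INT, name STRING)
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- With this feature, pre-written data files under the staging path are
-- expected to be adopted into the transactional table rather than rejected;
-- the path below is a placeholder.
LOAD DATA INPATH '/tmp/staging/acid_target_files' INTO TABLE acid_target;
{code}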

[jira] [Commented] (HIVE-17906) use kill query mechanics to kill queries in WM

2017-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248457#comment-16248457
 ] 

Hive QA commented on HIVE-17906:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12897164/HIVE-17906.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 11374 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=77)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=156)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=102)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=111)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=206)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build//testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build//console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12897164 - PreCommit-HIVE-Build

> use kill query mechanics to kill queries in WM
> --
>
> Key: HIVE-17906
> URL: https://issues.apache.org/jira/browse/HIVE-17906
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-17906.01.patch, HIVE-17906.02.patch, 
> HIVE-17906.03.patch, HIVE-17906.03.patch, HIVE-17906.04.patch, 
> HIVE-17906.patch
>
>
> Right now it just closes the session (see HIVE-17841). The sessions would 
> need to be reused after the kill, or closed after the kill if the total 
> query parallelism (QP) has decreased.
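
For reference, a hedged sketch of the user-facing kill-query statement whose 
mechanics this patch reuses inside workload management; the query id below is 
a made-up placeholder, and this is not asserted to be the exact call path WM 
takes internally.
{code:sql}
-- Illustrative only: manually killing a running query by its query id.
-- Real ids can be looked up in the HiveServer2 Web UI; this one is a placeholder.
KILL QUERY 'hive_20171111093012_placeholder';
{code}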



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18002) add group support for pool mappings

2017-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248416#comment-16248416
 ] 

Hive QA commented on HIVE-18002:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12897163/HIVE-18002.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 43 failed/errored test(s), 10738 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=77)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=144)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=145)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
 (batchId=101)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
 (batchId=102)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=111)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query39] 
(batchId=243)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=206)
org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion 
(batchId=220)
org.apache.hadoop.hive.ql.TestAcidOnTez.testBucketedAcidInsertWithRemoveUnion 
(batchId=220)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=220)
org.apache.hadoop.hive.ql.TestAcidOnTez.testInsertWithRemoveUnion (batchId=220)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=220)
org.apache.hadoop.hive.ql.TestAcidOnTez.testMergeJoinOnTez (batchId=220)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=220)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
org.apache.hive.service.cli.operation.TestOperationLoggingAPIWithTez.testFetchResultsOfLogWithExecutionMode
 (batchId=228)
org.apache.hive.service.cli.operation.TestOperationLoggingAPIWithTez.testFetchResultsOfLogWithNoneMode
 

[jira] [Commented] (HIVE-17904) handle internal Tez AM restart in registry and WM

2017-11-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248392#comment-16248392
 ] 

Hive QA commented on HIVE-17904:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12897162/HIVE-17904.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 11377 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=77)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=144)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=145)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a]
 (batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] 
(batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=156)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
 (batchId=101)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
 (batchId=102)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=102)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=111)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=206)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=223)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7775/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7775/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7775/

Messages:
{noformat}
Executing