[jira] [Updated] (HIVE-24001) Don't cache MapWork in tez/ObjectCache during query-based compaction

2020-08-13 Thread Karen Coppage (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-24001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karen Coppage updated HIVE-24001:
-
Fix Version/s: 4.0.0

> Don't cache MapWork in tez/ObjectCache during query-based compaction
> 
>
> Key: HIVE-24001
> URL: https://issues.apache.org/jira/browse/HIVE-24001
> Project: Hive
>  Issue Type: Bug
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Query-based major compaction can fail intermittently with the following error:
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: One writer is 
> supposed to handle only one bucket. We saw these 2 different buckets: 1 and 6
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFValidateAcidSortOrder.evaluate(GenericUDFValidateAcidSortOrder.java:77)
> {code}
> This is consistently preceded in the application log with:
> {code:java}
>  [INFO] [TezChild] |tez.ObjectCache|: Found 
> hive_20200804185133_f04cca69-fa30-4f1b-a5fe-80fc2d749f48_Map 1__MAP_PLAN__ in 
> cache with value: org.apache.hadoop.hive.ql.plan.MapWork@74652101
> {code}
> Conversely, when MapRecordProcessor doesn't find the MapWork in 
> tez/ObjectCache (and instead caches it itself), major compaction succeeds.
> The failure happens because, when the cached MapWork is reused, the 
> GenericUDFValidateAcidSortOrder instance it contains (which is invoked during 
> compaction) is also reused across splits belonging to two different buckets, 
> which triggers the error above.
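> A minimal sketch of the kind of stateful per-writer bucket check involved 
> (illustrative only, not the actual GenericUDFValidateAcidSortOrder source; the 
> class and field names here are assumptions):
> {code:java}
> // Illustrative stand-in for the bucket check: the evaluator assumes one
> // instance only ever sees rows for a single bucket. If a cached plan causes
> // the same instance to be reused for a split from another bucket, it throws.
> public class BucketSortOrderValidator {   // hypothetical name
>   private int seenBucket = -1;            // bucket this "writer" has handled so far
> 
>   public void validate(int bucketId) {
>     if (seenBucket == -1) {
>       seenBucket = bucketId;              // first row: remember the bucket
>     } else if (seenBucket != bucketId) {
>       // Mirrors the error in the stack trace above.
>       throw new IllegalStateException("One writer is supposed to handle only one "
>           + "bucket. We saw these 2 different buckets: " + seenBucket + " and " + bucketId);
>     }
>   }
> }
> {code}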
> The solution is to avoid storing MapWork in the ObjectCache during query-based 
> compaction.
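> A rough, self-contained sketch of the shape of that change (a ConcurrentHashMap 
> stands in for tez/ObjectCache and a Plan class for MapWork; the names and the 
> isQueryBasedCompaction flag are assumptions, not the exact Hive APIs):
> {code:java}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.function.Supplier;
> 
> // Skip the per-container plan cache for query-based compaction so every task
> // rebuilds its plan (and therefore gets fresh UDF instances); keep the cached
> // path for everything else.
> class PlanLookup {
>   static class Plan {}                                       // stand-in for MapWork
> 
>   private final ConcurrentHashMap<String, Plan> cache = new ConcurrentHashMap<>();
> 
>   Plan getPlan(String key, boolean isQueryBasedCompaction, Supplier<Plan> rebuild) {
>     if (isQueryBasedCompaction) {
>       return rebuild.get();                                  // never cache during compaction
>     }
>     return cache.computeIfAbsent(key, k -> rebuild.get());   // normal cached path
>   }
> }
> {code}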



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-24001) Don't cache MapWork in tez/ObjectCache during query-based compaction

2020-08-06 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-24001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HIVE-24001:
--
Labels: pull-request-available  (was: )

> Don't cache MapWork in tez/ObjectCache during query-based compaction
> 
>
> Key: HIVE-24001
> URL: https://issues.apache.org/jira/browse/HIVE-24001
> Project: Hive
>  Issue Type: Bug
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Query-based major compaction can fail intermittently with the following error:
> {code:java}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: One writer is 
> supposed to handle only one bucket. We saw these 2 different buckets: 1 and 6
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFValidateAcidSortOrder.evaluate(GenericUDFValidateAcidSortOrder.java:77)
> {code}
> This is consistently preceded in the application log with:
> {code:java}
>  [INFO] [TezChild] |tez.ObjectCache|: Found 
> hive_20200804185133_f04cca69-fa30-4f1b-a5fe-80fc2d749f48_Map 1__MAP_PLAN__ in 
> cache with value: org.apache.hadoop.hive.ql.plan.MapWork@74652101
> {code}
> Conversely, when MapRecordProcessor doesn't find the MapWork in 
> tez/ObjectCache (and instead caches it itself), major compaction succeeds.
> The failure happens because, when the cached MapWork is reused, the 
> GenericUDFValidateAcidSortOrder instance it contains (which is invoked during 
> compaction) is also reused across splits belonging to two different buckets, 
> which triggers the error above.
> The solution is to avoid storing MapWork in the ObjectCache during query-based 
> compaction.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)