[ https://issues.apache.org/jira/browse/HIVE-3997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gopal V resolved HIVE-3997.
---------------------------

    Resolution: Not A Problem

Broadcast task + map joins obsolete this entirely

> Use distributed cache to cache/localize dimension table & filter it in map 
> task setup
> -------------------------------------------------------------------------------------
>
>                 Key: HIVE-3997
>                 URL: https://issues.apache.org/jira/browse/HIVE-3997
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Gopal V
>            Assignee: Gopal V
>
> Hive clients are not always co-located with the Hadoop/HDFS cluster.
> This means that dimension-table filtering, when done on the client side,
> becomes very slow. On top of that, the small tables have to be converted into
> hashtables every single time a query is run with different filters on the big
> table.
> That entire hashtable then has to be shipped as part of the job, which means
> even more HDFS writes from the far-away client.
> Using the distributed cache also has the advantage that the localized files
> can be kept between jobs instead of triggering an HDFS read for every query.
> Moving the operator pipeline for the hash generation into the map task itself
> does have a few downsides.
> The map task might OOM as a result of this change, and recovery would take
> longer because the failure only surfaces after all map attempts fail, instead
> of being caught up front on the client.
> The client has no idea how much memory the hashtable needs and has to rely on
> the on-disk sizes (compressed sizes, perhaps) to decide whether to fall back
> to a reduce-side join instead.
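
For illustration only, here is a minimal plain-MapReduce sketch of the pattern the
description proposes: the small dimension table is shipped through the distributed
cache and turned into a hashtable in Mapper.setup(), so the client never builds or
uploads the hashtable itself. This is not Hive's actual implementation; the class
name CachedDimensionJoin, the tab-separated record layout, and the command-line
argument order are assumptions made for the sketch.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.net.URI;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CachedDimensionJoin {

      public static class JoinMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Map<String, String> dim = new HashMap<>();

        @Override
        protected void setup(Context context) throws IOException {
          // The dimension file is added with a "#dim" fragment below, so the
          // framework symlinks the localized copy into the task working
          // directory as "dim"; the hashtable is built here, not on the client.
          try (BufferedReader in = new BufferedReader(new FileReader("dim"))) {
            String line;
            while ((line = in.readLine()) != null) {
              String[] kv = line.split("\t", 2);
              if (kv.length == 2) {
                dim.put(kv[0], kv[1]); // dimension-side filtering could also happen here
              }
            }
          }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          // Hash-join each big-table row against the in-memory dimension table.
          String[] fields = value.toString().split("\t", 2);
          String match = (fields.length == 2) ? dim.get(fields[0]) : null;
          if (match != null) {
            context.write(new Text(fields[0]), new Text(fields[1] + "\t" + match));
          }
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cached dimension map join");
        job.setJarByClass(CachedDimensionJoin.class);
        job.setMapperClass(JoinMapper.class);
        job.setNumReduceTasks(0);          // pure map-side join, no shuffle
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // Localize the small table once via the distributed cache; subsequent
        // jobs on the same node reuse the local copy instead of re-reading HDFS
        // or uploading a client-built hashtable.
        job.addCacheFile(new URI(args[0] + "#dim"));              // dimension table on HDFS
        FileInputFormat.addInputPath(job, new Path(args[1]));     // big (fact) table
        FileOutputFormat.setOutputPath(job, new Path(args[2]));   // join output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The trade-off called out in the description shows up in this sketch as well: the
HashMap lives in task memory, so a dimension table that turns out to be too large
makes every map attempt fail with an OOM instead of failing fast on the client.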



--
This message was sent by Atlassian JIRA
(v6.1#6144)
