Hey Folks,
We are using Hive 0.11 and are hitting a java.lang.OutOfMemoryError. The
problem seems to be in CommonJoinResolver.java (processCurrentTask()):
this function tries to convert a map-reduce join to a map join if n-1 of
the tables involved in an n-way join have a size below a certain
threshold.
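To illustrate, the check has roughly this shape (a simplified sketch,
not the actual CommonJoinResolver code; the class and method names here
are made up):

    import java.util.List;

    class MapJoinCheckSketch {
        // A table is "small" if its size is below the threshold.
        // The join can be converted to a map join only if at most
        // one table fails that test, i.e. n-1 tables are small.
        static boolean canConvertToMapJoin(List<Long> tableSizes,
                                           long smallTableThreshold) {
            int bigTables = 0;
            for (long size : tableSizes) {
                if (size >= smallTableThreshold) {
                    bigTables++;
                }
            }
            return bigTables <= 1;
        }
    }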
If the tables are managed by Hive, we have accurate sizes for each
table and can apply this optimization. But if a table is created
through a storage handler (HBaseStorageHandler in our case), its size
is reported as zero. Because of this, Hive assumes the optimization
applies and converts the map-reduce join to a map join. It then builds
an in-memory hash table for all the keys; since our
storage-handler-backed table is large, it does not fit in memory and we
hit the error.
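The failure mode is that a reported size of zero always passes the
small-table test, so the huge HBase-backed table is picked as a
hash-table candidate. A minimal sketch of the effect (the threshold
value and names are illustrative):

    class ZeroSizeSketch {
        public static void main(String[] args) {
            long threshold = 25000000L;  // e.g. hive.mapjoin.smalltable.filesize
            long reportedHBaseSize = 0L; // storage handler supplies no statistics
            // 0 is below the threshold, so the table looks "small" and
            // its keys are loaded into an in-memory hash table -> OOM.
            System.out.println(reportedHBaseSize < threshold); // prints true
        }
    }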
Should I open a JIRA for this? One way to fix it would be to set the
size of a table created through a storage handler equal to the map join
threshold. That table would then be selected as the big table, and the
optimization could still proceed if the other tables in the join have
sizes below the threshold. If the join involves more than one such big
table, the optimization would simply be turned off.
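Sketched out, the fix would look something like this (illustrative
only; the real change would go wherever the table sizes are computed
for the join):

    class ProposedFixSketch {
        // Pin an unknown (storage-handler) table size to the threshold:
        // it is then never "below" the threshold, so it can only be
        // chosen as the single big table. Two or more such tables
        // disable the conversion.
        static long effectiveSize(boolean hasStorageHandler,
                                  long reportedSize,
                                  long smallTableThreshold) {
            return hasStorageHandler ? smallTableThreshold : reportedSize;
        }
    }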
Thanks
Mehant