-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34586/#review85853
-----------------------------------------------------------


ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HashTableLoader.java
<https://reviews.apache.org/r/34586/#comment137688>

    The comment above says "if any table has a bad size estimate...", but then why do you check "totalSize <= 0"? Shouldn't you iterate over all the small tables and check that every one of them has a good size estimate? For example, if the table sizes are (100, -4, 0), totalSize is 96, yet table #2's size is -4, which is a bad estimate. To make the code clearer, I recommend adding a new boolean variable, e.g. isAnyTableHasBadSize, and setting it in the same place where you compute totalSize, biggest, and maxSize. (A minimal sketch of this idea appears after the quoted request below.)


- Alexander Pivovarov


On May 27, 2015, 6:33 a.m., Mostafa Mokhtar wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34586/
> -----------------------------------------------------------
> 
> (Updated May 27, 2015, 6:33 a.m.)
> 
> 
> Review request for hive.
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> Fix biggest small table selection when table sizes are 0.
> Fall back to dividing memory equally if any table has an invalid size.
> 
> 
> Diffs
> -----
> 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HashTableLoader.java 536b92c
> 
> Diff: https://reviews.apache.org/r/34586/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Mostafa Mokhtar
> 
>
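
The following is a minimal, standalone Java sketch of the flag-based check suggested above. It is not the actual HashTableLoader code from the patch; the method name divideMemory and the array-based inputs are illustrative assumptions used only to show how a bad-size flag set during the totalSize/biggest scan would force the equal-split fallback even when the sum of sizes happens to be positive.

    import java.util.Arrays;

    public class MemoryDivisionSketch {

        /**
         * Returns the fraction of available memory to give each small table.
         * Falls back to an equal split when any table reports a non-positive
         * (i.e. unreliable) size estimate, per the review suggestion.
         */
        static double[] divideMemory(long[] tableSizes) {
            long totalSize = 0;
            long biggest = 0;
            boolean isAnyTableHasBadSize = false;  // set while scanning, as suggested

            for (long size : tableSizes) {
                if (size <= 0) {
                    isAnyTableHasBadSize = true;   // catches -4 or 0 even if totalSize > 0
                }
                totalSize += size;
                biggest = Math.max(biggest, size);
            }

            double[] fractions = new double[tableSizes.length];
            if (isAnyTableHasBadSize || totalSize <= 0) {
                // Fall back: divide memory equally among the small tables.
                Arrays.fill(fractions, 1.0 / tableSizes.length);
            } else {
                // Otherwise divide memory proportionally to the size estimates.
                for (int i = 0; i < tableSizes.length; i++) {
                    fractions[i] = (double) tableSizes[i] / totalSize;
                }
            }
            return fractions;
        }

        public static void main(String[] args) {
            // The reviewer's example: sizes (100, -4, 0) sum to 96, yet two tables
            // have bad estimates, so the flag forces the equal-split fallback.
            double[] f = divideMemory(new long[] {100, -4, 0});
            System.out.println(Arrays.toString(f));
        }
    }

With only a totalSize <= 0 check, the (100, -4, 0) case would incorrectly take the proportional path; the flag set inside the same loop avoids a second pass over the small tables.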
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HashTableLoader.java <https://reviews.apache.org/r/34586/#comment137688> The comment above says - "if any table has bad size estimate...". But why you check "totalSize <= 0" then? Should you iterate over all small tables and check that they all have good size estimate. What if you have table sizes (100, -4, 0) totalSize is 96. But table #2 size is -4, which is bad size. To make code clear I recommend to add new boolean variable isAnyTableHasBadSize and set its value it in the place where you calc totalSize, biggest and maxSize - Alexander Pivovarov On May 27, 2015, 6:33 a.m., Mostafa Mokhtar wrote: > > ----------------------------------------------------------- > This is an automatically generated e-mail. To reply, visit: > https://reviews.apache.org/r/34586/ > ----------------------------------------------------------- > > (Updated May 27, 2015, 6:33 a.m.) > > > Review request for hive. > > > Repository: hive-git > > > Description > ------- > > fix biggest small table selection when table sizes are 0 > fallback to dividing memory equally if any tables have invalid size > > > Diffs > ----- > > ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HashTableLoader.java 536b92c > > Diff: https://reviews.apache.org/r/34586/diff/ > > > Testing > ------- > > > Thanks, > > Mostafa Mokhtar > >