[ 
https://issues.apache.org/jira/browse/HIVE-23880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163111#comment-17163111
 ] 

Stamatis Zampetakis commented on HIVE-23880:
--------------------------------------------

I also did a small [JMH 
benchmark|https://github.com/zabetak/jmh-micro-benchmarks/blob/master/core-java/src/main/java/com/github/zabetak/benchmark/BitwiseOrBenchmark.java]
 on my side comparing various ways to perform bitwise OR operations. 
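For concreteness, here is a minimal sketch of the kind of JMH benchmark I mean. It is not the actual code from the repository linked above (which covers more variants); the class name, number of sets, and data generation below are purely illustrative:

{code}
import java.util.BitSet;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
public class BitwiseOrSketch {

  // 436 465 696 bits per filter, as in the experiments described here.
  private static final int NUM_BITS = 436_465_696;
  // Kept small in this sketch to limit memory; the real runs used 100 bitsets.
  private static final int NUM_SETS = 10;

  private byte[][] byteSets;
  private long[][] longSets;

  @Setup
  public void setup() {
    Random r = new Random(42);
    byteSets = new byte[NUM_SETS][NUM_BITS / 8];
    longSets = new long[NUM_SETS][NUM_BITS / 64 + 1];
    for (int i = 0; i < NUM_SETS; i++) {
      r.nextBytes(byteSets[i]);
      for (int j = 0; j < longSets[i].length; j++) {
        longSets[i][j] = r.nextLong();
      }
    }
  }

  // Byte-by-byte OR over byte[] filters.
  @Benchmark
  public byte[] byteArrayOr() {
    byte[] acc = new byte[NUM_BITS / 8];
    for (byte[] set : byteSets) {
      for (int i = 0; i < acc.length; i++) {
        acc[i] |= set[i];
      }
    }
    return acc;
  }

  // Word-by-word OR through java.util.BitSet (long[] under the hood).
  @Benchmark
  public BitSet bitSetOr() {
    BitSet acc = new BitSet(NUM_BITS);
    for (long[] words : longSets) {
      acc.or(BitSet.valueOf(words));
    }
    return acc;
  }
}
{code}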

Since this kind of benchmark is very sensitive to JVM optimizations, I 
performed the experiments both with 
[jdk1.8.0_261|https://github.com/zabetak/jmh-micro-benchmarks/blob/master/core-java/doc/jmh_bitwise_or_jdk1.8.0_261_n100.txt]
 and 
[jdk-11.0.8|https://github.com/zabetak/jmh-micro-benchmarks/blob/master/core-java/doc/jmh_bitwise_or_jdk-11.0.8_n100.txt].
 (I wanted to attach the results to this issue but for some reason I cannot, 
so I uploaded them to GitHub.) 

The experiments are performed with 100 bitsets of 436 465 696 bits each. I will 
also run the same experiments with 1000 bitsets, which is the equivalent of 1000 
mapper tasks, but from what I have tested so far the times increase roughly 
linearly, so I am expecting about a 10X increase but not much more.  

The first thing to notice is that most variants take 0.3 to 1 second, so even 
with the 10X increase (for the case of 1000 bitsets) the time is around 10 
seconds. Moreover, my machine is presumably less powerful than the one used in 
the cluster, so the times there might drop even more. 

Looking more carefully, there are some approaches (the columnWise sync and 
async variants) that are clearly best to avoid, since they are worse than the 
rest by a factor of 10X to 100X. I don't think they deserve more discussion 
since, as far as I can tell, they are not used at the moment. 

Another interesting finding is that in jdk11 the fastest variant (namely 
customBatchWithByteArray) is not multi-threaded, while its parallel counterpart 
is slower and more resource hungry, requiring more CPU cycles for the same 
number of instructions. The same findings do not transfer to jdk8, where the 
parallel variant is 2X faster than the serial one.    
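To make the serial vs parallel comparison concrete, the two shapes look roughly like the sketch below. This is illustrative only; the actual benchmark variants batch the work differently:

{code}
import java.util.stream.IntStream;

public class OrVariantsSketch {

  // Serial: a single thread ORs every source array into the accumulator.
  static void serialOr(byte[] acc, byte[][] sources) {
    for (byte[] src : sources) {
      for (int i = 0; i < acc.length; i++) {
        acc[i] |= src[i];
      }
    }
  }

  // Parallel: split the accumulator into chunks and OR each chunk on the
  // common ForkJoinPool. More threads, but also more scheduling overhead,
  // which is roughly what the jdk11 numbers above reflect.
  static void parallelOr(byte[] acc, byte[][] sources, int chunkSize) {
    int chunks = (acc.length + chunkSize - 1) / chunkSize;
    IntStream.range(0, chunks).parallel().forEach(c -> {
      int from = c * chunkSize;
      int to = Math.min(from + chunkSize, acc.length);
      for (byte[] src : sources) {
        for (int i = from; i < to; i++) {
          acc[i] |= src[i];
        }
      }
    });
  }
}
{code}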

Finally, it is worth mentioning that a single-threaded variant that performs 
reasonably well (0.5 sec) in both JDKs relies on java.util.BitSet (so long[] 
instead of byte[]) and its existing or method. 
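In other words, something along these lines (a sketch, not the benchmark code itself):

{code}
import java.util.BitSet;
import java.util.List;

// Sketch of the BitSet-based variant: the filters are held as long[] words
// and merged through java.util.BitSet#or, which ORs 64 bits per step.
public class BitSetMergeSketch {

  static BitSet merge(List<long[]> filterWords) {
    BitSet merged = new BitSet();
    for (long[] words : filterWords) {
      // BitSet.valueOf copies the long[] directly, no bit-by-bit conversion.
      merged.or(BitSet.valueOf(words));
    }
    return merged;
  }
}
{code}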

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-23880
>                 URL: https://issues.apache.org/jira/browse/HIVE-23880
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: László Bodor
>            Assignee: László Bodor
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: lipwig-output3605036885489193068.svg
>
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 70000000)*/ ss_customer_sk
>             ,sum(act_sales) sumsales
>       from (select ss_item_sk
>                   ,ss_ticket_number
>                   ,ss_customer_sk
>                   ,case when sr_return_quantity is not null then (ss_quantity-sr_return_quantity)*ss_sales_price
>                                                             else (ss_quantity*ss_sales_price) end act_sales
>             from store_sales left outer join store_returns on (sr_item_sk = ss_item_sk
>                                                                and sr_ticket_number = ss_ticket_number)
>                 ,reason
>             where sr_reason_sk = r_reason_sk
>               and r_reason_desc = 'reason 66') t
>       group by ss_customer_sk
>       order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> ----------------------------------------------------------------------------------------------
>         VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
> ----------------------------------------------------------------------------------------------
> Map 3 ..........      llap     SUCCEEDED      1          1        0        0       0       0
> Map 1 ..........      llap     SUCCEEDED   1263       1263        0        0       0       0
> Reducer 2             llap       RUNNING      1          0        1        0       0       0
> Map 4                 llap       RUNNING   6154          0      207     5947       0       0
> Reducer 5             llap        INITED     43          0        0       43       0       0
> Reducer 6             llap        INITED      1          0        0        1       0       0
> ----------------------------------------------------------------------------------------------
> VERTICES: 02/06  [====>>----------------------] 16%   ELAPSED TIME: 149.98 s
> ----------------------------------------------------------------------------------------------
> {code}
> For example, 70M entries in the bloom filter lead to 436 465 696 bits, so 
> merging 1263 bloom filters means running ~ 1263 * 436 465 696 bitwise OR 
> operations, which is a very hot codepath, but can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
