[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=474786=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-474786
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 26/Aug/20 12:24
Start Date: 26/Aug/20 12:24
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on pull request #1280:
URL: https://github.com/apache/hive/pull/1280#issuecomment-680846635


   all checks passed, pushing this to master, thanks to all of you for the 
review



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 474786)
Time Spent: 8h 40m  (was: 8.5h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, a bloom filter sized for 70M entries takes 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.
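
The 436,465,696-bit figure is consistent with the usual bloom filter sizing
formula numBits = -n * ln(fpp) / ln(2)^2 for n = 70M entries and fpp ≈ 0.05
(assuming the default false-positive probability). A minimal sketch of the
parallelization idea follows: each worker owns a disjoint byte range of the
shared target filter, so the ORs need no synchronization. Class, method and
constant names below are illustrative only, not the actual Hive patch.

{code}
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: OR many serialized bloom filters into a shared
// base byte[] by giving each thread its own disjoint byte range, so the
// target array needs no locking.
public class ParallelBloomOrSketch {
  // serialized metadata prefix (hash count / bit count); 5 is assumed here,
  // mirroring BloomKFilter.START_OF_SERIALIZED_LONGS referenced in the patch
  static final int START_OF_SERIALIZED_LONGS = 5;

  public static void merge(byte[] base, List<byte[]> others, int numThreads)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(numThreads);
    int payload = base.length - START_OF_SERIALIZED_LONGS;
    int chunk = (int) Math.ceil((double) payload / numThreads);
    for (int t = 0; t < numThreads; t++) {
      final int from = START_OF_SERIALIZED_LONGS + t * chunk;
      final int to = Math.min(base.length, from + chunk);
      pool.submit(() -> {
        for (byte[] other : others) {      // each task touches only [from, to)
          for (int i = from; i < to; i++) {
            base[i] |= other[i];           // assumes equally sized filters
          }
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS); // all ORs done before base is read
  }
}
{code}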



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=474785=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-474785
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 26/Aug/20 12:24
Start Date: 26/Aug/20 12:24
Worklog Time Spent: 10m 
  Work Description: abstractdog closed pull request #1280:
URL: https://github.com/apache/hive/pull/1280


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 474785)
Time Spent: 8.5h  (was: 8h 20m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, a bloom filter sized for 70M entries takes 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=473884=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-473884
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 24/Aug/20 12:49
Start Date: 24/Aug/20 12:49
Worklog Time Spent: 10m 
  Work Description: abstractdog opened a new pull request #1280:
URL: https://github.com/apache/hive/pull/1280


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HIVE-X: Fix a typo in YYY)
   For more details, please see 
https://cwiki.apache.org/confluence/display/Hive/HowToContribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 473884)
Time Spent: 8h 20m  (was: 8h 10m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, a bloom filter sized for 70M entries takes 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=473883=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-473883
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 24/Aug/20 12:49
Start Date: 24/Aug/20 12:49
Worklog Time Spent: 10m 
  Work Description: abstractdog closed pull request #1280:
URL: https://github.com/apache/hive/pull/1280


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 473883)
Time Spent: 8h 10m  (was: 8h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, a bloom filter sized for 70M entries takes 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=470581=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-470581
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 14/Aug/20 05:37
Start Date: 14/Aug/20 05:37
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #1280:
URL: https://github.com/apache/hive/pull/1280#issuecomment-673894751


   @abstractdog  I missed that call. I think that covers it.
   Good work.
   +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 470581)
Time Spent: 8h  (was: 7h 50m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, a bloom filter sized for 70M entries takes 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=470580=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-470580
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 14/Aug/20 05:37
Start Date: 14/Aug/20 05:37
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r470419947



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -1126,6 +1137,7 @@ protected void initializeOp(Configuration hconf) throws 
HiveException {
 VectorAggregateExpression vecAggrExpr = null;
 try {
   vecAggrExpr = ctor.newInstance(vecAggrDesc);
+  vecAggrExpr.withConf(hconf);

Review comment:
   Sadly, I have to agree about the conf abuse in the (Hive) codebase :) Somehow I 
don't really like the instanceof approach here just for a single expression; 
moreover, I wanted to find a general way to provide some configuration to 
expressions, as this patch showed that they might need it (in the future). On 
the other hand, explicitly calling a specific constructor for different types 
could serve as a kind of documentation in one place about "how to instantiate" these 
expressions. I'm about to refactor this logic into a separate method in 
VectorGroupByOperator and let this patch go!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 470580)
Time Spent: 7h 50m  (was: 7h 40m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=470577=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-470577
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 14/Aug/20 05:29
Start Date: 14/Aug/20 05:29
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on pull request #1280:
URL: https://github.com/apache/hive/pull/1280#issuecomment-673892379


   > @abstractdog
   > I am almost OK with this patch. However, I still don't understand how this 
integrates with `ProcessingModeHashAggregate`. Since there are multiple 
VectorAggregationBufferRows in hash mode, I think we should `finish` each of 
them as we process them. Otherwise, we pass to the next operator in the 
pipeline without completing the bloom filter. Also, since hash mode dynamically 
allocates and frees VectorAggregationBufferRows, these `finish`es should happen 
as we deallocate each of them, rather than only at the end of the operator.
   
   Good point. I created this patch with a focus on finishing the buffers 
correctly, and I think this is already taken care of; please take a look:
   
https://github.com/apache/hive/pull/1280/commits/0ada66534a937b8f4492d14f508903fa98402aed#diff-07c28d3f5c72db581b9cd4fa424a0ecbR675
   
   As you can see, I'm calling finish before every instance of writeSingleRow. 
I'm assuming that writeSingleRow is a point where a buffer should be finished 
for writing. In ProcessingModeHashAggregate, the above part is enclosed in an 
iteration over the buffers in the flush method. Are you aware of any other places where I 
should finish a buffer?
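
A toy sketch of the ordering being discussed, with hypothetical types (not the
actual VectorGroupByOperator code): each aggregation buffer is finished, i.e.
any outstanding parallel merge work is awaited, before its single output row is
written, and hash mode does this per flushed buffer rather than only at close.

{code}
import java.util.ArrayList;
import java.util.List;

// Toy illustration with hypothetical types (not VectorGroupByOperator): every
// aggregation buffer is "finished" before its row is emitted.
class FlushOrderingSketch {
  interface AggBuffer {
    void finish(boolean aborted); // blocks until async merge work completes
    Object result();
  }

  static List<Object> flush(List<AggBuffer> buffers) {
    List<Object> rows = new ArrayList<>();
    for (AggBuffer buffer : buffers) {
      buffer.finish(false);       // the equivalent of finishing before writeSingleRow
      rows.add(buffer.result());  // only then write the single output row
    }
    return rows;
  }
}
{code}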



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 470577)
Time Spent: 7h 40m  (was: 7.5h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=470493=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-470493
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 14/Aug/20 00:02
Start Date: 14/Aug/20 00:02
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #1280:
URL: https://github.com/apache/hive/pull/1280#issuecomment-673767435


   @abstractdog 
   I am almost OK with this patch. However, I still don't understand how this 
integrates with `ProcessingModeHashAggregate`. Since there are multiple 
VectorAggregationBufferRows in hash mode, I think we should `finish` each of 
them as we process them. Otherwise, we pass to the next operator in the 
pipeline without completing the bloom filter. Also, since hash mode dynamically 
allocates and frees VectorAggregationBufferRows, these `finish`es should happen 
as we deallocate each of them, rather than only at the end of the operator.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 470493)
Time Spent: 7.5h  (was: 7h 20m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, a bloom filter sized for 70M entries takes 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=470489=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-470489
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 13/Aug/20 23:56
Start Date: 13/Aug/20 23:56
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r470310851



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -1126,6 +1137,7 @@ protected void initializeOp(Configuration hconf) throws 
HiveException {
 VectorAggregateExpression vecAggrExpr = null;
 try {
   vecAggrExpr = ctor.newInstance(vecAggrDesc);
+  vecAggrExpr.withConf(hconf);

Review comment:
   I think making `VectorUDAFBloomFilterMerge` construction a special case 
and supplying the single int to that constructor is much cleaner. While trying 
to avoid that specialization, you are injecting the conf object into all the 
other classes.
   
   I specifically despise passing the conf object around in Hive, as it is abused so 
much in every part of the codebase. I'd prefer the other way, but I won't insist 
on it. It is not a big deal for this patch.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 470489)
Time Spent: 7h 20m  (was: 7h 10m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to a 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=470126=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-470126
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 13/Aug/20 07:55
Start Date: 13/Aug/20 07:55
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469765404



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature
+return; // this will leave isParallel=false
+  }
+  isParallel = true;
+  executor = Executors.newFixedThreadPool(numThreads);
+
+  workers = new BloomFilterMergeWorker[numThreads];
+  for (int f = 0; f < numThreads; f++) {
+workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+  }
+
+  for (int f = 0; f < numThreads; f++) {
+executor.submit(workers[f]);
+  }
+}
+
+public int getNumberOfWaitingMergeTasks(){
+  int size = 0;
+  for (BloomFilterMergeWorker w : workers){
+size += w.queue.size();
+  }
+  return size;
+}
+
+public int getNumberOfMergingWorkers() {
+  int working = 0;
+  for (BloomFilterMergeWorker w : workers) {
+if (w.isMerging.get()) {
+  working += 1;
+}
+  }
+  return working;
+}
+
+private static void splitVectorAcrossWorkers(BloomFilterMergeWorker[] 
workers, byte[] bytes,
+int start, int length) {
+  if (bytes == null || length == 0) {
+return;
+  }
+  /*
+   * This will split a byte[] across workers as below:
+   * let's say there are 10 workers for 7813 bytes, in this case
+   * length: 7813, elementPerBatch: 781
+   * bytes assigned to workers: inclusive lower bound, exclusive upper 
bound
+   * 1. worker: 5 -> 786
+   * 2. worker: 786 -> 1567
+   * 3. worker: 1567 -> 2348
+   * 4. worker: 2348 -> 3129
+   * 5. worker: 3129 -> 3910
+   * 6. worker: 3910 -> 4691
+   * 7. worker: 4691 -> 5472
+   * 8. worker: 5472 -> 6253
+   * 9. worker: 6253 -> 7034
+   * 10. worker: 7034 -> 7813 (last element per batch is: 779)
+   *
+   * This way, a particular worker will be given with the same part
+   * of all bloom filters along with the shared base bloom filter,
+   * so the bitwise OR function will not be a subject of threading/sync 
issues.
+   */
+  int elementPerBatch =
+  (int) Math.ceil((double) (length - 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=470123=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-470123
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 13/Aug/20 07:50
Start Date: 13/Aug/20 07:50
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469762209



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature
+return; // this will leave isParallel=false
+  }
+  isParallel = true;
+  executor = Executors.newFixedThreadPool(numThreads);
+
+  workers = new BloomFilterMergeWorker[numThreads];
+  for (int f = 0; f < numThreads; f++) {
+workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+  }
+
+  for (int f = 0; f < numThreads; f++) {
+executor.submit(workers[f]);
+  }
+}
+
+public int getNumberOfWaitingMergeTasks(){
+  int size = 0;
+  for (BloomFilterMergeWorker w : workers){
+size += w.queue.size();
+  }
+  return size;
+}
+
+public int getNumberOfMergingWorkers() {
+  int working = 0;
+  for (BloomFilterMergeWorker w : workers) {
+if (w.isMerging.get()) {
+  working += 1;
+}
+  }
+  return working;
+}
+
+private static void splitVectorAcrossWorkers(BloomFilterMergeWorker[] 
workers, byte[] bytes,
+int start, int length) {
+  if (bytes == null || length == 0) {
+return;
+  }
+  /*
+   * This will split a byte[] across workers as below:
+   * let's say there are 10 workers for 7813 bytes, in this case
+   * length: 7813, elementPerBatch: 781
+   * bytes assigned to workers: inclusive lower bound, exclusive upper 
bound
+   * 1. worker: 5 -> 786
+   * 2. worker: 786 -> 1567
+   * 3. worker: 1567 -> 2348
+   * 4. worker: 2348 -> 3129
+   * 5. worker: 3129 -> 3910
+   * 6. worker: 3910 -> 4691
+   * 7. worker: 4691 -> 5472
+   * 8. worker: 5472 -> 6253
+   * 9. worker: 6253 -> 7034
+   * 10. worker: 7034 -> 7813 (last element per batch is: 779)
+   *
+   * This way, a particular worker will be given with the same part
+   * of all bloom filters along with the shared base bloom filter,
+   * so the bitwise OR function will not be a subject of threading/sync 
issues.
+   */
+  int elementPerBatch =
+  (int) Math.ceil((double) (length - 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=470122=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-470122
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 13/Aug/20 07:49
Start Date: 13/Aug/20 07:49
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469761824



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -252,6 +258,13 @@ protected VectorAggregationBufferRow 
allocateAggregationBuffer() throws HiveExce
   return bufferSet;
 }
 
+protected void finishAggregators(boolean aborted) {

Review comment:
   I'll take care of this in the next patch





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 470122)
Time Spent: 6h 50m  (was: 6h 40m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, a bloom filter sized for 70M entries takes 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469734=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469734
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 14:26
Start Date: 12/Aug/20 14:26
Worklog Time Spent: 10m 
  Work Description: pgaref commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469300187



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature
+return; // this will leave isParallel=false
+  }
+  isParallel = true;
+  executor = Executors.newFixedThreadPool(numThreads);
+
+  workers = new BloomFilterMergeWorker[numThreads];
+  for (int f = 0; f < numThreads; f++) {
+workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+  }
+
+  for (int f = 0; f < numThreads; f++) {
+executor.submit(workers[f]);
+  }
+}
+
+public int getNumberOfWaitingMergeTasks(){
+  int size = 0;
+  for (BloomFilterMergeWorker w : workers){
+size += w.queue.size();
+  }
+  return size;
+}
+
+public int getNumberOfMergingWorkers() {
+  int working = 0;
+  for (BloomFilterMergeWorker w : workers) {
+if (w.isMerging.get()) {
+  working += 1;
+}
+  }
+  return working;
+}
+
+private static void splitVectorAcrossWorkers(BloomFilterMergeWorker[] 
workers, byte[] bytes,
+int start, int length) {
+  if (bytes == null || length == 0) {
+return;
+  }
+  /*
+   * This will split a byte[] across workers as below:
+   * let's say there are 10 workers for 7813 bytes, in this case
+   * length: 7813, elementPerBatch: 781
+   * bytes assigned to workers: inclusive lower bound, exclusive upper 
bound
+   * 1. worker: 5 -> 786
+   * 2. worker: 786 -> 1567
+   * 3. worker: 1567 -> 2348
+   * 4. worker: 2348 -> 3129
+   * 5. worker: 3129 -> 3910
+   * 6. worker: 3910 -> 4691
+   * 7. worker: 4691 -> 5472
+   * 8. worker: 5472 -> 6253
+   * 9. worker: 6253 -> 7034
+   * 10. worker: 7034 -> 7813 (last element per batch is: 779)
+   *
+   * This way, a particular worker will be given with the same part
+   * of all bloom filters along with the shared base bloom filter,
+   * so the bitwise OR function will not be a subject of threading/sync 
issues.
+   */
+  int elementPerBatch =
+  (int) Math.ceil((double) (length - 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469682=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469682
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 12:41
Start Date: 12/Aug/20 12:41
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469099535



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -1126,6 +1137,7 @@ protected void initializeOp(Configuration hconf) throws 
HiveException {
 VectorAggregateExpression vecAggrExpr = null;
 try {
   vecAggrExpr = ctor.newInstance(vecAggrDesc);
+  vecAggrExpr.withConf(hconf);

Review comment:
   1. constructor: first I tried to pass it to the constructor, but that breaks 
compatibility with every other subclass of VectorAggregateExpression. If I 
use ctor.newInstance(vecAggrDesc, hconf), I need to add that constructor to all 
subclasses, because they don't inherit this ctor from 
VectorAggregateExpression... withConf can solve this; let me know about better 
ways.
   
   2. single int: this config is specific to VectorUDAFBloomFilterMerge, so I 
believe I should not pass it through a constructor to every 
VectorAggregateExpression, and I didn't want to go for an instanceof hack for a 
cast + specific call.
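
A toy illustration of the two options being weighed (hypothetical classes and
property name, not the Hive ones): a Configuration constructor parameter would
have to be replicated in every subclass, whereas a setter on the base class is
inherited and can be called uniformly after reflective instantiation.

{code}
import org.apache.hadoop.conf.Configuration;

// Toy illustration only; these are not the Hive classes.
abstract class AggExpr {
  protected Configuration conf;

  public AggExpr withConf(Configuration conf) { // post-construction injection
    this.conf = conf;
    return this;
  }
}

class BloomFilterMergeExpr extends AggExpr {
  int mergeThreads() {
    // only this expression actually needs a setting; the key is made up here
    return conf == null ? 0 : conf.getInt("example.bloom.filter.merge.threads", 4);
  }
}

class Wiring {
  static AggExpr create(Configuration hconf) {
    return new BloomFilterMergeExpr().withConf(hconf); // works for any subclass
  }
}
{code}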





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469682)
Time Spent: 6.5h  (was: 6h 20m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469680=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469680
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 12:40
Start Date: 12/Aug/20 12:40
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469229142



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -517,6 +532,10 @@ public void close(boolean aborted) throws HiveException {
 
 }
 
+//TODO: implement finishAggregators
+protected void finishAggregators(boolean aborted) {

Review comment:
   valid point, I need to recheck and learn how aggregators and aggregation 
buffers are paired together for a specific mode; it seems complicated at 
first sight





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469680)
Time Spent: 6h 10m  (was: 6h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> the case of a large number of source mapper tasks (~1000, Map 1 in the example 
> below) and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 minutes of query runtime, 
> 1-2 minutes are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, a bloom filter sized for 70M entries takes 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469681=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469681
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 12:40
Start Date: 12/Aug/20 12:40
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469229142



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -517,6 +532,10 @@ public void close(boolean aborted) throws HiveException {
 
 }
 
+//TODO: implement finishAggregators
+protected void finishAggregators(boolean aborted) {

Review comment:
   valid point, I need to recheck and learn how aggregators and aggregation 
buffers are paired together for a specific mode; it seems complicated at 
first sight





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469681)
Time Spent: 6h 20m  (was: 6h 10m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to a 436 465 696 bits, so 
> merging 1263 bloom filters means running ~ 1263 * 436 465 696 bitwise OR 
> operation, which is very hot codepath, but can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469678=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469678
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 12:37
Start Date: 12/Aug/20 12:37
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469227267



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature
+return; // this will leave isParallel=false
+  }
+  isParallel = true;
+  executor = Executors.newFixedThreadPool(numThreads);
+
+  workers = new BloomFilterMergeWorker[numThreads];
+  for (int f = 0; f < numThreads; f++) {
+workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+  }
+
+  for (int f = 0; f < numThreads; f++) {
+executor.submit(workers[f]);
+  }
+}
+
+public int getNumberOfWaitingMergeTasks(){
+  int size = 0;
+  for (BloomFilterMergeWorker w : workers){
+size += w.queue.size();
+  }
+  return size;
+}
+
+public int getNumberOfMergingWorkers() {
+  int working = 0;
+  for (BloomFilterMergeWorker w : workers) {
+if (w.isMerging.get()) {
+  working += 1;
+}
+  }
+  return working;
+}
+
+private static void splitVectorAcrossWorkers(BloomFilterMergeWorker[] 
workers, byte[] bytes,
+int start, int length) {
+  if (bytes == null || length == 0) {
+return;
+  }
+  /*
+   * This will split a byte[] across workers as below:
+   * let's say there are 10 workers for 7813 bytes, in this case
+   * length: 7813, elementPerBatch: 781
+   * bytes assigned to workers: inclusive lower bound, exclusive upper 
bound
+   * 1. worker: 5 -> 786
+   * 2. worker: 786 -> 1567
+   * 3. worker: 1567 -> 2348
+   * 4. worker: 2348 -> 3129
+   * 5. worker: 3129 -> 3910
+   * 6. worker: 3910 -> 4691
+   * 7. worker: 4691 -> 5472
+   * 8. worker: 5472 -> 6253
+   * 9. worker: 6253 -> 7034
+   * 10. worker: 7034 -> 7813 (last element per batch is: 779)
+   *
+   * This way, a particular worker will be given with the same part
+   * of all bloom filters along with the shared base bloom filter,
+   * so the bitwise OR function will not be a subject of threading/sync 
issues.
+   */
+  int elementPerBatch =
+  (int) Math.ceil((double) (length - 
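
As a side note, the range arithmetic described in the comment above works out as in this simplified sketch (numWorkers, length and the 5-byte header offset are taken from the example numbers in the comment, not from the PR's exact code):

{code}
int header = 5;  // BloomKFilter.START_OF_SERIALIZED_LONGS in the diff
int numWorkers = 10, length = 7813;
int elementPerBatch = (int) Math.ceil((double) (length - header) / numWorkers); // 781
for (int w = 0; w < numWorkers; w++) {
  int from = header + w * elementPerBatch;           // inclusive: 5, 786, 1567, ...
  int to = Math.min(from + elementPerBatch, length); // exclusive: 786, 1567, ..., 7813
  // worker w always gets bytes [from, to) of every filter, so no two threads
  // ever OR into the same byte of the shared base filter
}
{code}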

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469647=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469647
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 11:21
Start Date: 12/Aug/20 11:21
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469186450



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {

Review comment:
   right, I'll remove
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469647)
Time Spent: 5h 50m  (was: 5h 40m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469646=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469646
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 11:20
Start Date: 12/Aug/20 11:20
Worklog Time Spent: 10m 
  Work Description: pgaref commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469186380



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature
+return; // this will leave isParallel=false
+  }
+  isParallel = true;
+  executor = Executors.newFixedThreadPool(numThreads);
+
+  workers = new BloomFilterMergeWorker[numThreads];
+  for (int f = 0; f < numThreads; f++) {
+workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+  }
+
+  for (int f = 0; f < numThreads; f++) {
+executor.submit(workers[f]);
+  }
+}
+
+public int getNumberOfWaitingMergeTasks(){
+  int size = 0;
+  for (BloomFilterMergeWorker w : workers){
+size += w.queue.size();
+  }
+  return size;
+}
+
+public int getNumberOfMergingWorkers() {
+  int working = 0;
+  for (BloomFilterMergeWorker w : workers) {
+if (w.isMerging.get()) {
+  working += 1;
+}
+  }
+  return working;
+}
+
+private static void splitVectorAcrossWorkers(BloomFilterMergeWorker[] 
workers, byte[] bytes,
+int start, int length) {
+  if (bytes == null || length == 0) {
+return;
+  }
+  /*
+   * This will split a byte[] across workers as below:
+   * let's say there are 10 workers for 7813 bytes, in this case
+   * length: 7813, elementPerBatch: 781
+   * bytes assigned to workers: inclusive lower bound, exclusive upper 
bound
+   * 1. worker: 5 -> 786
+   * 2. worker: 786 -> 1567
+   * 3. worker: 1567 -> 2348
+   * 4. worker: 2348 -> 3129
+   * 5. worker: 3129 -> 3910
+   * 6. worker: 3910 -> 4691
+   * 7. worker: 4691 -> 5472
+   * 8. worker: 5472 -> 6253
+   * 9. worker: 6253 -> 7034
+   * 10. worker: 7034 -> 7813 (last element per batch is: 779)
+   *
+   * This way, a particular worker will be given with the same part
+   * of all bloom filters along with the shared base bloom filter,
+   * so the bitwise OR function will not be a subject of threading/sync 
issues.
+   */
+  int elementPerBatch =
+  (int) Math.ceil((double) (length - 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469645=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469645
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 11:18
Start Date: 12/Aug/20 11:18
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469185446



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature

Review comment:
   good catch, I'm eliminating this check by initializing parallel behavior 
while initializing the aggregation buffer





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469645)
Time Spent: 5.5h  (was: 5h 20m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>  

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469644=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469644
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 11:17
Start Date: 12/Aug/20 11:17
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469184919



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+

Review comment:
   fortunately we won't need this, I've eliminated it with the boolean-return 
hack, per another PR comment





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469644)
Time Spent: 5h 20m  (was: 5h 10m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to a 436 465 696 bits, so 
> merging 1263 bloom filters means running ~ 1263 * 436 465 696 bitwise OR 
> operation, which is very hot codepath, but can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469619=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469619
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 09:13
Start Date: 12/Aug/20 09:13
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469120003



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {

Review comment:
   yeah, I missed this cleanup while mirroring the [original 
logic](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java#L132-L191)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469619)
Time Spent: 5h 10m  (was: 5h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469610=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469610
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 09:00
Start Date: 12/Aug/20 09:00
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469112026



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature
+return; // this will leave isParallel=false
+  }
+  isParallel = true;
+  executor = Executors.newFixedThreadPool(numThreads);
+
+  workers = new BloomFilterMergeWorker[numThreads];
+  for (int f = 0; f < numThreads; f++) {

Review comment:
   good catch, moving them to a single loop

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469608=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469608
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 08:59
Start Date: 12/Aug/20 08:59
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469111595



##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -4330,6 +4330,12 @@ private static void 
populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal
 "Bloom filter should be of at max certain size to be effective"),
 TEZ_BLOOM_FILTER_FACTOR("hive.tez.bloom.filter.factor", (float) 1.0,
 "Bloom filter should be a multiple of this factor with nDV"),
+TEZ_BLOOM_FILTER_MERGE_THREADS("hive.tez.bloom.filter.merge.threads", 1,
+"How many threads are used for merging bloom filters?\n"
++ "-1: sanity check, it will fail if execution hits bloom filter 
merge codepath\n"
++ " 0: feature is disabled\n"

Review comment:
   agree, added
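
For context, the property added in the diff above would presumably be toggled like any other HiveConf knob, e.g. (hypothetical usage; semantics as described in the diff):

{code}
SET hive.tez.bloom.filter.merge.threads=4;   -- merge bloom filters with 4 worker threads
SET hive.tez.bloom.filter.merge.threads=0;   -- disable the parallel merge (serial path)
SET hive.tez.bloom.filter.merge.threads=-1;  -- sanity check: fail if the merge codepath is hit
{code}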





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469608)
Time Spent: 4h 50m  (was: 4h 40m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to a 436 465 696 bits, so 
> merging 1263 bloom filters means running ~ 1263 * 436 465 696 bitwise OR 
> operation, which 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469605=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469605
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 08:58
Start Date: 12/Aug/20 08:58
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469110972



##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -4330,6 +4330,12 @@ private static void 
populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal
 "Bloom filter should be of at max certain size to be effective"),
 TEZ_BLOOM_FILTER_FACTOR("hive.tez.bloom.filter.factor", (float) 1.0,
 "Bloom filter should be a multiple of this factor with nDV"),
+TEZ_BLOOM_FILTER_MERGE_THREADS("hive.tez.bloom.filter.merge.threads", 1,
+"How many threads are used for merging bloom filters?\n"

Review comment:
   agree, adding that





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469605)
Time Spent: 4h 40m  (was: 4.5h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to a 436 465 696 bits, so 
> merging 1263 bloom filters means running ~ 1263 * 436 465 696 bitwise OR 
> operation, which is very hot codepath, but can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469604=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469604
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 08:56
Start Date: 12/Aug/20 08:56
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469109686



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorAggregateExpression.java
##
@@ -20,24 +20,25 @@
 
 import java.io.Serializable;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.common.type.DataTypePhysicalVariation;
 import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;
 import org.apache.hadoop.hive.ql.exec.vector.VectorAggregationBufferRow;
 import org.apache.hadoop.hive.ql.exec.vector.VectorAggregationDesc;
 import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
 import org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
-import org.apache.hadoop.hive.ql.plan.AggregationDesc;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
-import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.Mode;
 
 /**
  * Base class for aggregation expressions.
  */
 public abstract class VectorAggregateExpression  implements Serializable {
-
+  protected final Logger LOG = LoggerFactory.getLogger(getClass().getName());

Review comment:
   personally, I don't really like a protected static Logger, because 
subclasses won't show the actual class name (only the parent's)
   in this case, you're right, this LOG is not used in 
VectorUDAFBloomFilterMerge at all, it's a useless leftover, I'm going to remove it





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469604)
Time Spent: 4.5h  (was: 4h 20m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469603=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469603
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 08:56
Start Date: 12/Aug/20 08:56
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469109686



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorAggregateExpression.java
##
@@ -20,24 +20,25 @@
 
 import java.io.Serializable;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.common.type.DataTypePhysicalVariation;
 import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;
 import org.apache.hadoop.hive.ql.exec.vector.VectorAggregationBufferRow;
 import org.apache.hadoop.hive.ql.exec.vector.VectorAggregationDesc;
 import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
 import org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
-import org.apache.hadoop.hive.ql.plan.AggregationDesc;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
-import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.Mode;
 
 /**
  * Base class for aggregation expressions.
  */
 public abstract class VectorAggregateExpression  implements Serializable {
-
+  protected final Logger LOG = LoggerFactory.getLogger(getClass().getName());

Review comment:
   personally, I don't really like protected static Logger, because 
subclasses won't show the actual class name (only the parent), but in this 
case, this LOG is not used in VectorUDAFBloomFilterMerge at all, it's useless 
leftover, I'm going to remove it





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469603)
Time Spent: 4h 20m  (was: 4h 10m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0  

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469600=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469600
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 08:52
Start Date: 12/Aug/20 08:52
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469107566



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature
+return; // this will leave isParallel=false
+  }
+  isParallel = true;
+  executor = Executors.newFixedThreadPool(numThreads);
+
+  workers = new BloomFilterMergeWorker[numThreads];
+  for (int f = 0; f < numThreads; f++) {
+workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+  }
+
+  for (int f = 0; f < numThreads; f++) {
+executor.submit(workers[f]);
+  }
+}
+
+public int getNumberOfWaitingMergeTasks(){
+  int size = 0;
+  for (BloomFilterMergeWorker w : workers){
+size += w.queue.size();
+  }
+  return size;
+}
+
+public int getNumberOfMergingWorkers() {
+  int working = 0;
+  for (BloomFilterMergeWorker w : workers) {
+if (w.isMerging.get()) {
+  working += 1;
+}
+  }
+  return working;
+}
+
+private static void splitVectorAcrossWorkers(BloomFilterMergeWorker[] 
workers, byte[] bytes,
+int start, int length) {
+  if (bytes == null || length == 0) {
+return;
+  }
+  /*
+   * This will split a byte[] across workers as below:
+   * let's say there are 10 workers for 7813 bytes, in this case
+   * length: 7813, elementPerBatch: 781
+   * bytes assigned to workers: inclusive lower bound, exclusive upper 
bound
+   * 1. worker: 5 -> 786
+   * 2. worker: 786 -> 1567
+   * 3. worker: 1567 -> 2348
+   * 4. worker: 2348 -> 3129
+   * 5. worker: 3129 -> 3910
+   * 6. worker: 3910 -> 4691
+   * 7. worker: 4691 -> 5472
+   * 8. worker: 5472 -> 6253
+   * 9. worker: 6253 -> 7034
+   * 10. worker: 7034 -> 7813 (last element per batch is: 779)
+   *
+   * This way, a particular worker will be given with the same part
+   * of all bloom filters along with the shared base bloom filter,
+   * so the bitwise OR function will not be a subject of threading/sync 
issues.
+   */
+  int elementPerBatch =
+  (int) Math.ceil((double) (length - 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469596=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469596
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 08:44
Start Date: 12/Aug/20 08:44
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469102366



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector 
inputColumn,
+int batchSize, boolean selectedInUse, int[] selected, Configuration 
conf) {
+  // already set in previous iterations, no need to call initExecutor again
+  if (numThreads == 0) {
+return false;
+  }
+  if (executor == null) {
+initExecutor(conf, batchSize);
+if (!isParallel) {
+  return false;
+}
+  }
+
+  // split every bloom filter (represented by a part of a byte[]) across 
workers
+  for (int j = 0; j < batchSize; j++) {
+if (!selectedInUse && inputColumn.noNulls) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+} else if (!selectedInUse) {
+  if (!inputColumn.isNull[j]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  }
+} else if (inputColumn.noNulls) {
+  int i = selected[j];
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+} else {
+  int i = selected[j];
+  if (!inputColumn.isNull[i]) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  }
+}
+  }
+
+  return true;
+}
+
+private void initExecutor(Configuration conf, int batchSize) {
+  numThreads = 
conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+  HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+  LOG.info("Number of threads used for bloom filter merge: {}", 
numThreads);
+
+  if (numThreads < 0) {
+throw new RuntimeException(
+"invalid number of threads for bloom filter merge: " + numThreads);
+  }
+  if (numThreads == 0) { // disable parallel feature
+return; // this will leave isParallel=false
+  }
+  isParallel = true;
+  executor = Executors.newFixedThreadPool(numThreads);
+
+  workers = new BloomFilterMergeWorker[numThreads];
+  for (int f = 0; f < numThreads; f++) {
+workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+  }
+
+  for (int f = 0; f < numThreads; f++) {
+executor.submit(workers[f]);
+  }
+}
+
+public int getNumberOfWaitingMergeTasks(){
+  int size = 0;
+  for (BloomFilterMergeWorker w : workers){
+size += w.queue.size();
+  }
+  return size;
+}
+
+public int getNumberOfMergingWorkers() {

Review comment:
   yeah, it's only for logging; it was there to validate my executor shutdown 
correctness... that can be misleading, I'm removing it 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469596)
Time Spent: 4h  (was: 3h 50m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469594=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469594
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 12/Aug/20 08:39
Start Date: 12/Aug/20 08:39
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r469099535



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -1126,6 +1137,7 @@ protected void initializeOp(Configuration hconf) throws 
HiveException {
 VectorAggregateExpression vecAggrExpr = null;
 try {
   vecAggrExpr = ctor.newInstance(vecAggrDesc);
+  vecAggrExpr.withConf(hconf);

Review comment:
   1. constructor: first I tried to pass it to the constructor, but that breaks compatibility with every other subclass of VectorAggregateExpression; if I use ctor.newInstance(vecAggrDesc, hconf), I need to add that constructor to all subclasses, because they don't inherit this ctor from VectorAggregateExpression...withConf can solve this, let me know about better ways
   
   2. single int: this config is specific to VectorUDAFBloomFilterMerge; to pass a single int I think I would have to pass it through a constructor to every VectorAggregateExpression, and I didn't want to go for an instanceof hack for a cast + specific call
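   
   For illustration, a rough sketch of the withConf approach from point 1 (simplified stand-in classes, not the actual Hive types):
   
{code}
import org.apache.hadoop.conf.Configuration;

// Simplified stand-ins, only to show the setter-style wiring discussed above.
abstract class AggregateExpressionSketch {
  // No-op by default, so existing subclasses keep compiling without a new constructor.
  public void withConf(Configuration conf) {
  }
}

class BloomFilterMergeSketch extends AggregateExpressionSketch {
  private int numThreads = 1;

  @Override
  public void withConf(Configuration conf) {
    // Only this subclass cares about the merge thread count.
    numThreads = conf.getInt("hive.tez.bloom.filter.merge.threads", 1);
  }
}

class CallerSketch {
  static AggregateExpressionSketch create(Configuration conf) {
    AggregateExpressionSketch expr = new BloomFilterMergeSketch();
    expr.withConf(conf); // no per-subclass constructor and no instanceof cast needed
    return expr;
  }
}
{code}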





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469594)
Time Spent: 3h 50m  (was: 3h 40m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469330=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469330
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 11/Aug/20 17:33
Start Date: 11/Aug/20 17:33
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r468748810



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+    public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector inputColumn,
+        int batchSize, boolean selectedInUse, int[] selected, Configuration conf) {
+      // already set in previous iterations, no need to call initExecutor again
+      if (numThreads == 0) {
+        return false;
+      }
+      if (executor == null) {
+        initExecutor(conf, batchSize);
+        if (!isParallel) {
+          return false;
+        }
+      }
+
+      // split every bloom filter (represented by a part of a byte[]) across workers
+      for (int j = 0; j < batchSize; j++) {
+        if (!selectedInUse && inputColumn.noNulls) {
+          splitVectorAcrossWorkers(workers, inputColumn.vector[j], inputColumn.start[j],
+              inputColumn.length[j]);
+        } else if (!selectedInUse) {
+          if (!inputColumn.isNull[j]) {
+            splitVectorAcrossWorkers(workers, inputColumn.vector[j], inputColumn.start[j],
+                inputColumn.length[j]);
+          }
+        } else if (inputColumn.noNulls) {
+          int i = selected[j];
+          splitVectorAcrossWorkers(workers, inputColumn.vector[i], inputColumn.start[i],
+              inputColumn.length[i]);
+        } else {
+          int i = selected[j];
+          if (!inputColumn.isNull[i]) {
+            splitVectorAcrossWorkers(workers, inputColumn.vector[i], inputColumn.start[i],
+                inputColumn.length[i]);
+          }
+        }
+      }
+
+      return true;
+    }
+
+    private void initExecutor(Configuration conf, int batchSize) {
+      numThreads = conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+          HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+      LOG.info("Number of threads used for bloom filter merge: {}", numThreads);
+
+      if (numThreads < 0) {
+        throw new RuntimeException(
+            "invalid number of threads for bloom filter merge: " + numThreads);
+      }
+      if (numThreads == 0) { // disable parallel feature
+        return; // this will leave isParallel=false
+      }
+      isParallel = true;
+      executor = Executors.newFixedThreadPool(numThreads);
+
+      workers = new BloomFilterMergeWorker[numThreads];
+      for (int f = 0; f < numThreads; f++) {
+        workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+      }
+
+      for (int f = 0; f < numThreads; f++) {
+        executor.submit(workers[f]);
+      }
+    }
+
+public int getNumberOfWaitingMergeTasks(){
+  int size = 0;
+  for (BloomFilterMergeWorker w : workers){
+size += w.queue.size();
+  }
+  return size;
+}
+
+public int getNumberOfMergingWorkers() {

Review comment:
   I see this method is used only for logging. What is the benefit of having that log? I am asking because if we get rid of this method, we can get rid of the isMerging atomic variable.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 469330)
Time Spent: 3h 40m  (was: 3.5h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469328=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469328
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 11/Aug/20 17:27
Start Date: 11/Aug/20 17:27
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r468723475



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -252,6 +258,13 @@ protected VectorAggregationBufferRow 
allocateAggregationBuffer() throws HiveExce
   return bufferSet;
 }
 
+protected void finishAggregators(boolean aborted) {

Review comment:
   Instead of `finishAggregators`, can you make this method the default `close` method of `ProcessingModeBase` and call `super.close(boolean)` from the `close` methods of the appropriate subclasses? That way the common finalization code would live in `close` of the common super class and the specific finalization code in the `close` method of each subclass.
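   
   Roughly what that could look like (hypothetical names, just to illustrate the `super.close(boolean)` idea, not the actual PR code):
   
{code}
// Stand-in for ProcessingModeBase: the common finalization lives in the base close().
abstract class ProcessingModeSketch {
  public void close(boolean aborted) {
    finishAggregators(aborted);
  }

  protected void finishAggregators(boolean aborted) {
    // e.g. wait for (or abort) the parallel bloom filter merge workers
  }
}

// Stand-in for one concrete processing mode.
class HashModeSketch extends ProcessingModeSketch {
  @Override
  public void close(boolean aborted) {
    super.close(aborted);    // common finalization first
    flushHashTable(aborted); // then the mode-specific cleanup
  }

  private void flushHashTable(boolean aborted) {
    // mode-specific work would go here
  }
}
{code}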

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -517,6 +532,10 @@ public void close(boolean aborted) throws HiveException {
 
 }
 
+//TODO: implement finishAggregators
+protected void finishAggregators(boolean aborted) {

Review comment:
   What about this mode? Seems not complete.

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java
##
@@ -1126,6 +1137,7 @@ protected void initializeOp(Configuration hconf) throws 
HiveException {
 VectorAggregateExpression vecAggrExpr = null;
 try {
   vecAggrExpr = ctor.newInstance(vecAggrDesc);
+  vecAggrExpr.withConf(hconf);

Review comment:
   Why is `withConf` a separate method? Conf should be a parameter to 
VectorAggregateExpression's constructor.

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+    public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector inputColumn,
+        int batchSize, boolean selectedInUse, int[] selected, Configuration conf) {
+      // already set in previous iterations, no need to call initExecutor again
+      if (numThreads == 0) {
+        return false;
+      }
+      if (executor == null) {
+        initExecutor(conf, batchSize);
+        if (!isParallel) {
+          return false;
+        }
+      }
+
+      // split every bloom filter (represented by a part of a byte[]) across workers
+      for (int j = 0; j < batchSize; j++) {
+        if (!selectedInUse && inputColumn.noNulls) {
+          splitVectorAcrossWorkers(workers, inputColumn.vector[j], inputColumn.start[j],
+              inputColumn.length[j]);
+        } else if (!selectedInUse) {
+          if (!inputColumn.isNull[j]) {
+            splitVectorAcrossWorkers(workers, inputColumn.vector[j], inputColumn.start[j],
+                inputColumn.length[j]);
+          }
+        } else if (inputColumn.noNulls) {
+          int i = selected[j];
+          splitVectorAcrossWorkers(workers, inputColumn.vector[i], inputColumn.start[i],
+              inputColumn.length[i]);
+        } else {
+          int i = selected[j];
+          if (!inputColumn.isNull[i]) {
+            splitVectorAcrossWorkers(workers, inputColumn.vector[i], inputColumn.start[i],
+                inputColumn.length[i]);
+          }
+        }
+      }
+
+      return true;
+    }
+
+    private void initExecutor(Configuration conf, int batchSize) {
+      numThreads = conf.getInt(HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.varname,
+          HiveConf.ConfVars.TEZ_BLOOM_FILTER_MERGE_THREADS.defaultIntVal);
+      LOG.info("Number of threads used for bloom filter merge: {}", numThreads);
+
+      if (numThreads < 0) {
+        throw new RuntimeException(
+            "invalid number of threads for bloom filter merge: " + numThreads);
+      }
+      if (numThreads == 0) { // disable parallel feature
+        return; // this will leave isParallel=false
+      }
+      isParallel = true;
+      executor = Executors.newFixedThreadPool(numThreads);
+
+      workers = new BloomFilterMergeWorker[numThreads];
+      for (int f = 0; f < numThreads; f++) {
+        workers[f] = new BloomFilterMergeWorker(bfBytes, 0, bfBytes.length);
+      }
+
+      for (int f = 0; f < numThreads; f++) {
+        executor.submit(workers[f]);
+      }
+    }
+
+public int getNumberOfWaitingMergeTasks(){
+  int 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=469182=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-469182
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 11/Aug/20 12:12
Start Date: 11/Aug/20 12:12
Worklog Time Spent: 10m 
  Work Description: pgaref commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r468513649



##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -4330,6 +4330,12 @@ private static void 
populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal
 "Bloom filter should be of at max certain size to be effective"),
 TEZ_BLOOM_FILTER_FACTOR("hive.tez.bloom.filter.factor", (float) 1.0,
 "Bloom filter should be a multiple of this factor with nDV"),
+TEZ_BLOOM_FILTER_MERGE_THREADS("hive.tez.bloom.filter.merge.threads", 1,
+"How many threads are used for merging bloom filters?\n"

Review comment:
   The number of threads configured here is actually in **addition to the task's main thread** -- I would make this a bit clearer in the description

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorAggregateExpression.java
##
@@ -20,24 +20,25 @@
 
 import java.io.Serializable;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.common.type.DataTypePhysicalVariation;
 import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;
 import org.apache.hadoop.hive.ql.exec.vector.VectorAggregationBufferRow;
 import org.apache.hadoop.hive.ql.exec.vector.VectorAggregationDesc;
 import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
 import org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
-import org.apache.hadoop.hive.ql.plan.AggregationDesc;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
-import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.Mode;
 
 /**
  * Base class for aggregation expressions.
  */
 public abstract class VectorAggregateExpression  implements Serializable {
-
+  protected final Logger LOG = LoggerFactory.getLogger(getClass().getName());

Review comment:
   Should we make this static? Do we really want an instance per Expr?
   PS: it also seems that we don't use it at all below...
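   
   For reference, the suggested shape would be something like (sketch only):
   
{code}
import java.io.Serializable;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public abstract class VectorAggregateExpression implements Serializable {
  // one shared logger for the class hierarchy instead of one instance per expression
  protected static final Logger LOG = LoggerFactory.getLogger(VectorAggregateExpression.class);
}
{code}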

##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -4330,6 +4330,12 @@ private static void 
populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal
 "Bloom filter should be of at max certain size to be effective"),
 TEZ_BLOOM_FILTER_FACTOR("hive.tez.bloom.filter.factor", (float) 1.0,
 "Bloom filter should be a multiple of this factor with nDV"),
+TEZ_BLOOM_FILTER_MERGE_THREADS("hive.tez.bloom.filter.merge.threads", 1,
+"How many threads are used for merging bloom filters?\n"
++ "-1: sanity check, it will fail if execution hits bloom filter 
merge codepath\n"
++ " 0: feature is disabled\n"

Review comment:
   feature disabled -- use only task main thread for BF merging

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+

Review comment:
   Could add a comment describing the return boolean value 

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/VectorUDAFBloomFilterMerge.java
##
@@ -77,6 +75,211 @@ public void reset() {
   // Do not change the initial bytes which contain 
NumHashFunctions/NumBits!
   Arrays.fill(bfBytes, BloomKFilter.START_OF_SERIALIZED_LONGS, 
bfBytes.length, (byte) 0);
 }
+
+    public boolean mergeBloomFilterBytesFromInputColumn(BytesColumnVector inputColumn,
+        int batchSize, boolean selectedInUse, int[] selected, Configuration conf) {
+      // already set in previous iterations, no need to call initExecutor again
+      if (numThreads == 0) {
+        return false;
+      }
+      if (executor == null) {
+        initExecutor(conf, batchSize);
+        if (!isParallel) {
+          return false;
+        }
+      }
+
+      // split every bloom filter (represented by a part of a byte[]) across workers
+      for (int j = 0; j < batchSize; j++) {
+        if (!selectedInUse && inputColumn.noNulls) {
+  

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-08-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=466643=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-466643
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 05/Aug/20 07:46
Start Date: 05/Aug/20 07:46
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on pull request #1280:
URL: https://github.com/apache/hive/pull/1280#issuecomment-669038358


   @zabetak : let me grab the opportunity to thank you for your [JMH 
benchmarks](https://issues.apache.org/jira/browse/HIVE-23880?focusedCommentId=17163111=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17163111)!
 it helped a lot, some of my findings from the last 2 weeks:
   
   1. on cluster, JDK11 is better, in every scenario, we'll have to switch to 
that in LLAP daemons
   
   2. more threads don't bring any serious improvement <- that's the most important thing I've found in the last two weeks...basically, my implementation was wrong, and the results got distorted by improper usage of the executor service (that's what is fixed in the new, squashed commit), so now, on the cluster, I can see results which are in line with your JMH findings
   
   3. removed automatic thread calculation: performance tests revealed that 1 thread is optimal and can lead to serious improvements; this is something that cannot be measured easily from JMH, because the advantage of 1 thread (which is the main task thread + 1 extra thread) is to decouple the merge from the main thread and let the main thread handle other, probably CPU-heavy stuff (waiting for inputs one by one, building vectorized row batches one by one, etc.); by this I reduced the task runtime by 50-60 seconds (170s -> 110s)
   
   4. as agreed with @ashutoshc, I've left the support of multiple threads in 
the code, because we don't know if we can have the advantage of it later, and 
the split logic doesn't consume significant amount of resources...but I've set 
default 1 thread in HiveConf in order to let the user know that this is the 
recommended, optimal usage
   
   cc: @pgaref , @ashutoshc 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 466643)
Time Spent: 3h 10m  (was: 3h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462557=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462557
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 23/Jul/20 13:12
Start Date: 23/Jul/20 13:12
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r459437391



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -45,6 +48,8 @@
  * This implementation has much lesser L1 data cache misses than {@link 
BloomFilter}.
  */
 public class BloomKFilter {
+  private static final Logger LOG = 
LoggerFactory.getLogger(BloomKFilter.class.getName());

Review comment:
   Nit: Does not require `.getName()`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462557)
Time Spent: 3h  (was: 2h 50m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to a 436 465 696 bits, so 
> merging 1263 bloom filters means running ~ 1263 * 436 465 696 bitwise OR 
> operation, which is very hot codepath, but can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462175=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462175
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 17:38
Start Date: 22/Jul/20 17:38
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458967592



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -362,16 +378,178 @@ public static void mergeBloomFilterBytes(
 
 // Just bitwise-OR the bits together - size/# functions should be the same,
 // rest of the data is serialized long values for the bitset which are 
supposed to be bitwise-ORed.
-for (int idx = START_OF_SERIALIZED_LONGS; idx < bf1Length; ++idx) {
+for (int idx = mergeStart; idx < mergeEnd; ++idx) {
   bf1Bytes[bf1Start + idx] |= bf2Bytes[bf2Start + idx];
 }
   }
 
+  public static void mergeBloomFilterBytesFromInputColumn(
+  byte[] bf1Bytes, int bf1Start, int bf1Length, long bf1ExpectedEntries,
+  BytesColumnVector inputColumn, int batchSize, boolean selectedInUse, 
int[] selected, int numThreads) {

Review comment:
   batchSize is not related to the bloom filter instances themselves; it reflects the number of bloom filters in the vectorized row batch...bfSize would rather suggest the byte size of a single bloom filter, which is bf1Length





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462175)
Time Spent: 2h 50m  (was: 2h 40m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462171=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462171
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 17:30
Start Date: 22/Jul/20 17:30
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458962698



##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -4285,6 +4285,8 @@ private static void populateLlapDaemonVarsSet(Set 
llapDaemonVarsSetLocal
 "Bloom filter should be of at max certain size to be effective"),
 TEZ_BLOOM_FILTER_FACTOR("hive.tez.bloom.filter.factor", (float) 1.0,
 "Bloom filter should be a multiple of this factor with nDV"),
+TEZ_BLOOM_FILTER_MERGE_THREADS("hive.tez.bloom.filter.merge.threads", 8,

Review comment:
   agree, you'll see that the new version of the PR will default to 0, which causes automatic thread number calculation
   
   but I would leave this option as an "expert setting"...e.g.: explicitly 
setting a value for testing purposes, or setting -1 for sanity check (it will 
fail with negative values, so you can double-check if bloom filter merge really 
happens in your query)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462171)
Time Spent: 2h 40m  (was: 2.5h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462169=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462169
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 17:27
Start Date: 22/Jul/20 17:27
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458960692



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -506,18 +505,19 @@ public ElementWrapper(byte[] bytes, int start, int 
length, int modifiedStart, in
   }
 
   private static class BloomFilterMergeWorker implements Runnable {
-Queue queue = new LinkedBlockingDeque<>();
+ArrayBlockingQueue queue;
 private ExecutorService executor;
 
 private byte[] bfAggregation;
 private int bfAggregationStart;
 private int bfAggregationLength;
 
-public BloomFilterMergeWorker(ExecutorService executor, byte[] 
bfAggregation, int bfAggregationStart, int bfAggregationLength) {
+public BloomFilterMergeWorker(ExecutorService executor, int batchSize, 
byte[] bfAggregation, int bfAggregationStart, int bfAggregationLength) {
   this.executor = executor;

Review comment:
   sure, I misunderstood, somehow I thought you meant batchSize...yeah, we won't need the executor reference if I remove the executor.isTerminated() check





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462169)
Time Spent: 2.5h  (was: 2h 20m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462080=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462080
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 13:56
Start Date: 22/Jul/20 13:56
Worklog Time Spent: 10m 
  Work Description: zabetak commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458811039



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -36,7 +44,7 @@
  *
  * This implementation has much lesser L1 data cache misses than {@link 
BloomFilter}.

Review comment:
   Actually, I was having the same thoughts :) FYI: I did some JMH benchmarks (outside Hive) some time ago. I am adapting them slightly and I will share some JMH results later today.
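   
   In case anyone wants to reproduce something similar, a minimal JMH sketch of the bitwise-OR merge hot loop (class name and sizes are illustrative, not the exact benchmark I ran):
   
{code}
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class BloomFilterOrMergeBench {

  private byte[] target;
  private byte[] source;

  @Setup
  public void setup() {
    // roughly the size from the jira: ~436M bits is about 55 MB of serialized longs
    int numBytes = 55 * 1024 * 1024;
    target = new byte[numBytes];
    source = new byte[numBytes];
    new Random(42).nextBytes(source);
  }

  @Benchmark
  public byte[] orMerge() {
    // the hot loop: bitwise-OR one serialized bloom filter into the aggregation buffer
    for (int i = 0; i < target.length; i++) {
      target[i] |= source[i];
    }
    return target; // returning the array keeps the JIT from eliminating the loop
  }
}
{code}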





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462080)
Time Spent: 2h 20m  (was: 2h 10m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to a 436 465 696 bits, so 
> merging 1263 bloom filters means running ~ 1263 * 436 465 696 bitwise OR 
> operation, which is very hot codepath, but can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462064=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462064
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 13:35
Start Date: 22/Jul/20 13:35
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458795827



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -506,18 +505,19 @@ public ElementWrapper(byte[] bytes, int start, int 
length, int modifiedStart, in
   }
 
   private static class BloomFilterMergeWorker implements Runnable {
-Queue queue = new LinkedBlockingDeque<>();
+ArrayBlockingQueue queue;
 private ExecutorService executor;
 
 private byte[] bfAggregation;
 private int bfAggregationStart;
 private int bfAggregationLength;
 
-public BloomFilterMergeWorker(ExecutorService executor, byte[] 
bfAggregation, int bfAggregationStart, int bfAggregationLength) {
+public BloomFilterMergeWorker(ExecutorService executor, int batchSize, 
byte[] bfAggregation, int bfAggregationStart, int bfAggregationLength) {
   this.executor = executor;

Review comment:
   @abstractdog Hey, ya, of course.  This just goes with what I was saying 
about the relationship between Runnable/Thread/Callable and ExecutorService.  
The thread rarely needs a reference to its own ExecutorService.  The only 
reason this was being captured was to support `!executor.isTerminated()`, which 
is not the correct thing to do.  Please remove the `this.executor` instance 
variable and/or change the constructor to not include it.
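   
   One way to drop the reference entirely would be a poison-pill style shutdown, sketched below (illustrative names, not the PR code):
   
{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The worker only knows about its own queue; termination is signalled through the
// queue itself, so no executor.isTerminated() polling is needed.
class PoisonPillWorkerSketch implements Runnable {
  private static final byte[] POISON_PILL = new byte[0];

  private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

  void add(byte[] serializedBloomFilter) throws InterruptedException {
    queue.put(serializedBloomFilter);
  }

  void finish() throws InterruptedException {
    queue.put(POISON_PILL); // wakes the worker up and tells it there is no more work
  }

  @Override
  public void run() {
    try {
      while (true) {
        byte[] work = queue.take(); // blocks instead of busy-waiting
        if (work == POISON_PILL) {
          return;
        }
        // merge 'work' into the aggregation buffer here
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // restore the flag and exit
    }
  }
}
{code}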





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462064)
Time Spent: 2h 10m  (was: 2h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462049=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462049
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 13:00
Start Date: 22/Jul/20 13:00
Worklog Time Spent: 10m 
  Work Description: pgaref commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458771841



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -362,16 +378,178 @@ public static void mergeBloomFilterBytes(
 
 // Just bitwise-OR the bits together - size/# functions should be the same,
 // rest of the data is serialized long values for the bitset which are 
supposed to be bitwise-ORed.
-for (int idx = START_OF_SERIALIZED_LONGS; idx < bf1Length; ++idx) {
+for (int idx = mergeStart; idx < mergeEnd; ++idx) {
   bf1Bytes[bf1Start + idx] |= bf2Bytes[bf2Start + idx];
 }
   }
 
+  public static void mergeBloomFilterBytesFromInputColumn(
+  byte[] bf1Bytes, int bf1Start, int bf1Length, long bf1ExpectedEntries,
+  BytesColumnVector inputColumn, int batchSize, boolean selectedInUse, 
int[] selected, int numThreads) {

Review comment:
   I assume batchSize is bfSize? Maybe rename it to something more explicit





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462049)
Time Spent: 2h  (was: 1h 50m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462047=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462047
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 12:58
Start Date: 22/Jul/20 12:58
Worklog Time Spent: 10m 
  Work Description: pgaref commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458771841



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -362,16 +378,178 @@ public static void mergeBloomFilterBytes(
 
 // Just bitwise-OR the bits together - size/# functions should be the same,
 // rest of the data is serialized long values for the bitset which are 
supposed to be bitwise-ORed.
-for (int idx = START_OF_SERIALIZED_LONGS; idx < bf1Length; ++idx) {
+for (int idx = mergeStart; idx < mergeEnd; ++idx) {
   bf1Bytes[bf1Start + idx] |= bf2Bytes[bf2Start + idx];
 }
   }
 
+  public static void mergeBloomFilterBytesFromInputColumn(
+  byte[] bf1Bytes, int bf1Start, int bf1Length, long bf1ExpectedEntries,
+  BytesColumnVector inputColumn, int batchSize, boolean selectedInUse, 
int[] selected, int numThreads) {

Review comment:
   I assume batchSize is bfSize?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462047)
Time Spent: 1h 50m  (was: 1h 40m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462038=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462038
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 12:40
Start Date: 22/Jul/20 12:40
Worklog Time Spent: 10m 
  Work Description: pgaref commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458761137



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -36,7 +44,7 @@
  *
  * This implementation has much lesser L1 data cache misses than {@link 
BloomFilter}.

Review comment:
   I am afraid changing the logic of the implementation will also affect 
the cache effectiveness/behaviour -- would it make sense to add a JMH test here 
as well?
   We could even save the bench-results in a log to keep track of what's happening





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462038)
Time Spent: 1h 40m  (was: 1.5h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of large number of source mapper tasks (~1000, Map 1 in below example) 
> and a large amount of expected entries (50M) in bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> On 10TB-30TB scale there is a chance that from 3-4 mins of query runtime 1-2 
> mins are spent with merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in bloom filter leads to a 436 465 696 bits, so 
> merging 1263 bloom filters means running ~ 1263 * 436 465 696 bitwise OR 
> operation, which is very hot codepath, but can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=462029=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-462029
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 12:28
Start Date: 22/Jul/20 12:28
Worklog Time Spent: 10m 
  Work Description: pgaref commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458754144



##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -4285,6 +4285,8 @@ private static void populateLlapDaemonVarsSet(Set 
llapDaemonVarsSetLocal
 "Bloom filter should be of at max certain size to be effective"),
 TEZ_BLOOM_FILTER_FACTOR("hive.tez.bloom.filter.factor", (float) 1.0,
 "Bloom filter should be a multiple of this factor with nDV"),
+TEZ_BLOOM_FILTER_MERGE_THREADS("hive.tez.bloom.filter.merge.threads", 8,

Review comment:
   Any chance we can get rid of the extra configuration?
   I really hope one day we will start removing configurations instead of adding new ones :)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 462029)
Time Spent: 1.5h  (was: 1h 20m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of a large number of source mapper tasks (~1000, Map 1 in the example below) 
> and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 mins of query runtime, 1-2 
> mins are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in a bloom filter leads to 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=461959=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-461959
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 10:16
Start Date: 22/Jul/20 10:16
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458688040



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -527,20 +527,19 @@ public void add(ElementWrapper wrapper) {
 @Override
 public void run() {
   while (!executor.isTerminated() && !queue.isEmpty()) {

Review comment:
   thanks, your comment regarding the isTerminated check seems valid, let me 
think that over
   
   the queue.isEmpty() check is needed, I think; please take a look at the 
implementation above: every thread uses its own queue (which is filled in 
advance), so there is no concurrency between them. queue.isEmpty() is a simple 
check for every thread of "whether I have further things to do, or I can 
quit"; this way every thread quits once it is done, so 
executor.awaitTermination will return and we're finished
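   As a minimal sketch of that design (illustrative names, not the Hive classes): the queue belongs to exactly one worker and is fully populated before the worker is submitted, so draining it until empty is race-free:
   
```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class OwnQueueWorkerSketch implements Runnable {
  // owned by exactly one worker; fully populated before the worker is submitted
  private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

  void add(byte[] slice) {
    queue.add(slice);
  }

  @Override
  public void run() {
    byte[] slice;
    // no other consumer exists, so a null from poll() really means "nothing left to do"
    while ((slice = queue.poll()) != null) {
      // OR 'slice' into this worker's region of the shared aggregation buffer
    }
  }
}
```
   Each worker is also assigned its own disjoint byte range of the aggregation buffer, which is why no synchronization is needed between them.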





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 461959)
Time Spent: 1h 20m  (was: 1h 10m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of a large number of source mapper tasks (~1000, Map 1 in the example below) 
> and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 mins of query runtime, 1-2 
> mins are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in a bloom filter leads to 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=461954=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-461954
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 10:07
Start Date: 22/Jul/20 10:07
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458683069



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -506,18 +505,19 @@ public ElementWrapper(byte[] bytes, int start, int 
length, int modifiedStart, in
   }
 
   private static class BloomFilterMergeWorker implements Runnable {
-Queue<ElementWrapper> queue = new LinkedBlockingDeque<>();
+ArrayBlockingQueue<ElementWrapper> queue;

Review comment:
   agreed, I'll change it
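   For reference, a minimal sketch of the suggested change (the element type and the batchSize capacity come from the surrounding diff; the exact wording here is illustrative, not the committed code):
   
```java
// declare the field against the interface; pick the bounded implementation at construction time
private final BlockingQueue<ElementWrapper> queue = new ArrayBlockingQueue<>(batchSize);
```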





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 461954)
Time Spent: 1h 10m  (was: 1h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of a large number of source mapper tasks (~1000, Map 1 in the example below) 
> and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 mins of query runtime, 1-2 
> mins are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in a bloom filter leads to 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=461947=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-461947
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 10:00
Start Date: 22/Jul/20 10:00
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458678926



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -506,18 +505,19 @@ public ElementWrapper(byte[] bytes, int start, int 
length, int modifiedStart, in
   }
 
   private static class BloomFilterMergeWorker implements Runnable {
-Queue<ElementWrapper> queue = new LinkedBlockingDeque<>();
+ArrayBlockingQueue<ElementWrapper> queue;
 private ExecutorService executor;
 
 private byte[] bfAggregation;
 private int bfAggregationStart;
 private int bfAggregationLength;
 
-public BloomFilterMergeWorker(ExecutorService executor, byte[] 
bfAggregation, int bfAggregationStart, int bfAggregationLength) {
+public BloomFilterMergeWorker(ExecutorService executor, int batchSize, 
byte[] bfAggregation, int bfAggregationStart, int bfAggregationLength) {
   this.executor = executor;

Review comment:
   @belugabehr : sorry, I don't quite get your point, what do you mean?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 461947)
Time Spent: 1h  (was: 50m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of a large number of source mapper tasks (~1000, Map 1 in the example below) 
> and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 mins of query runtime, 1-2 
> mins are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=461943=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-461943
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 22/Jul/20 09:59
Start Date: 22/Jul/20 09:59
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r458678445



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -527,20 +527,19 @@ public void add(ElementWrapper wrapper) {
 @Override
 public void run() {
   while (!executor.isTerminated() && !queue.isEmpty()) {
-ElementWrapper currentBf = queue.poll();
+ElementWrapper currentBf = null;
+try {
+  currentBf = queue.take();
+} catch (InterruptedException e) {

Review comment:
   right, I'll do a simple method return with a LOG.warn to leave some 
trace in the logs
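   Roughly along these lines (a sketch only; LOG stands for whatever logger the class uses, and restoring the interrupt flag is an extra precaution, not something stated above):
   
```java
try {
  currentBf = queue.take();
} catch (InterruptedException e) {
  LOG.warn("BloomFilterMergeWorker interrupted while waiting for input, exiting", e);
  Thread.currentThread().interrupt(); // restore the flag so the pool sees the interrupt
  return;
}
```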





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 461943)
Time Spent: 50m  (was: 40m)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of a large number of source mapper tasks (~1000, Map 1 in the example below) 
> and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 mins of query runtime, 1-2 
> mins are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in a bloom filter leads to 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=461130=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-461130
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 20/Jul/20 15:31
Start Date: 20/Jul/20 15:31
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r457496282



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -527,20 +527,19 @@ public void add(ElementWrapper wrapper) {
 @Override
 public void run() {
   while (!executor.isTerminated() && !queue.isEmpty()) {

Review comment:
   A bit unrelated, but since you're touching this code: this check is 
completely useless:
   
   ```
   while (!executor.isTerminated() && !queue.isEmpty()) {
 ...
   }
   ```
   
   I cannot think of many scenarios where a thread needs to check the state 
of its own `ExecutorService`.  If the `ExecutorService` is terminated, it will 
interrupt every thread in the pool, and that should cause them to stop running.  
Also, checking whether the `Queue` is empty is improper.  Two threads can 
check the state of the Queue (size = 1), see the same non-empty queue, and 
both try to read, even though only one item is left.  Both should just try to 
`take`: one will succeed and the other will fail.
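   A compact sketch of the two patterns being contrasted, assuming a queue shared by multiple consumer threads (merge() stands in for the actual OR-ing and is not a real method in the patch):
   
```java
// racy: both consumers can pass the isEmpty() check while only one element remains,
// so the slower one gets null back from poll()
while (!queue.isEmpty()) {
  ElementWrapper currentBf = queue.poll();
  merge(currentBf);              // NullPointerException for the loser of the race
}

// suggested: just take(), block until an element is available, and exit on interrupt
while (true) {
  try {
    merge(queue.take());
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    return;
  }
}
```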

##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -527,20 +527,19 @@ public void add(ElementWrapper wrapper) {
 @Override
 public void run() {
   while (!executor.isTerminated() && !queue.isEmpty()) {
-ElementWrapper currentBf = queue.poll();
+ElementWrapper currentBf = null;
+try {
+  currentBf = queue.take();
+} catch (InterruptedException e) {

Review comment:
   Do not ignore.  An Interrupt means that it's time to exit.

##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -506,18 +505,19 @@ public ElementWrapper(byte[] bytes, int start, int 
length, int modifiedStart, in
   }
 
   private static class BloomFilterMergeWorker implements Runnable {
-Queue<ElementWrapper> queue = new LinkedBlockingDeque<>();
+ArrayBlockingQueue<ElementWrapper> queue;

Review comment:
   Use the generic `BlockingQueue` here.

##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -506,18 +505,19 @@ public ElementWrapper(byte[] bytes, int start, int 
length, int modifiedStart, in
   }
 
   private static class BloomFilterMergeWorker implements Runnable {
-Queue<ElementWrapper> queue = new LinkedBlockingDeque<>();
+ArrayBlockingQueue<ElementWrapper> queue;
 private ExecutorService executor;
 
 private byte[] bfAggregation;
 private int bfAggregationStart;
 private int bfAggregationLength;
 
-public BloomFilterMergeWorker(ExecutorService executor, byte[] 
bfAggregation, int bfAggregationStart, int bfAggregationLength) {
+public BloomFilterMergeWorker(ExecutorService executor, int batchSize, 
byte[] bfAggregation, int bfAggregationStart, int bfAggregationLength) {
   this.executor = executor;

Review comment:
   Do not capture this value.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 461130)
Time Spent: 40m  (was: 0.5h)

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
>  Labels: pull-request-available
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of a large number of source mapper tasks (~1000, Map 1 in the example below) 
> and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=461077=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-461077
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 20/Jul/20 13:51
Start Date: 20/Jul/20 13:51
Worklog Time Spent: 10m 
  Work Description: abstractdog commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r457402888



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -362,16 +379,178 @@ public static void mergeBloomFilterBytes(
 
 // Just bitwise-OR the bits together - size/# functions should be the same,
 // rest of the data is serialized long values for the bitset which are 
supposed to be bitwise-ORed.
-for (int idx = START_OF_SERIALIZED_LONGS; idx < bf1Length; ++idx) {
+for (int idx = mergeStart; idx < mergeEnd; ++idx) {
   bf1Bytes[bf1Start + idx] |= bf2Bytes[bf2Start + idx];
 }
   }
 
+  public static void mergeBloomFilterBytesFromInputColumn(
+  byte[] bf1Bytes, int bf1Start, int bf1Length, long bf1ExpectedEntries,
+  BytesColumnVector inputColumn, int batchSize, boolean selectedInUse, 
int[] selected, int numThreads) {
+if (numThreads == 0) {
+  numThreads = Runtime.getRuntime().availableProcessors();
+}
+if (numThreads < 0) {
+  throw new RuntimeException("invalid number of threads: " + numThreads);
+}
+
+ExecutorService executor = Executors.newFixedThreadPool(numThreads);
+
+BloomFilterMergeWorker[] workers = new BloomFilterMergeWorker[numThreads];
+for (int f = 0; f < numThreads; f++) {
+  workers[f] = new BloomFilterMergeWorker(executor, bf1Bytes, bf1Start, 
bf1Length);
+}
+
+// split every bloom filter (represented by a part of a byte[]) across 
workers
+for (int j = 0; j < batchSize; j++) {
+  if (!selectedInUse && inputColumn.noNulls) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  } else if (!selectedInUse) {
+if (!inputColumn.isNull[j]) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+}
+  } else if (inputColumn.noNulls) {
+int i = selected[j];
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  } else {
+int i = selected[j];
+if (!inputColumn.isNull[i]) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+}
+  }
+}
+
+for (int f = 0; f < numThreads; f++) {
+  executor.submit(workers[f]);
+}
+
+executor.shutdown();
+try {
+  executor.awaitTermination(3600, TimeUnit.SECONDS);
+} catch (InterruptedException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  private static void splitVectorAcrossWorkers(BloomFilterMergeWorker[] 
workers, byte[] bytes,
+  int start, int length) {
+if (bytes == null || length == 0) {
+  return;
+}
+/*
+ * This will split a byte[] across workers as below:
+ * let's say there are 10 workers for 7813 bytes, in this case
+ * length: 7813, elementPerBatch: 781
+ * bytes assigned to workers: inclusive lower bound, exclusive upper bound
+ * 1. worker: 5 -> 786
+ * 2. worker: 786 -> 1567
+ * 3. worker: 1567 -> 2348
+ * 4. worker: 2348 -> 3129
+ * 5. worker: 3129 -> 3910
+ * 6. worker: 3910 -> 4691
+ * 7. worker: 4691 -> 5472
+ * 8. worker: 5472 -> 6253
+ * 9. worker: 6253 -> 7034
+ * 10. worker: 7034 -> 7813 (last element per batch is: 779)
+ *
+ * This way, a particular worker will be given with the same part
+ * of all bloom filters along with the shared base bloom filter,
+ * so the bitwise OR function will not be a subject of threading/sync 
issues.
+ */
+int elementPerBatch =
+(int) Math.ceil((double) (length - START_OF_SERIALIZED_LONGS) / 
workers.length);
+
+for (int w = 0; w < workers.length; w++) {
+  int modifiedStart = START_OF_SERIALIZED_LONGS + w * elementPerBatch;
+  int modifiedLength = (w == workers.length - 1)
+? length - (START_OF_SERIALIZED_LONGS + w * elementPerBatch) : 
elementPerBatch;
+
+  ElementWrapper wrapper =
+  new ElementWrapper(bytes, start, length, modifiedStart, 
modifiedLength);
+  workers[w].add(wrapper);
+}
+  }
+
+  public static byte[] getInitialBytes(long expectedEntries) {
+ByteArrayOutputStream bytesOut = null;
+try {
+  bytesOut = new ByteArrayOutputStream();
+  BloomKFilter bf = new BloomKFilter(expectedEntries);
+  BloomKFilter.serialize(bytesOut, bf);
+  return 
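For reference, the splitting arithmetic in the block comment above can be reproduced with a small standalone sketch (START_OF_SERIALIZED_LONGS is the 5-byte serialized header that is never split, as in the worked example; the class name is illustrative):

```java
public class SplitBoundsSketch {
  static final int START_OF_SERIALIZED_LONGS = 5;

  public static void main(String[] args) {
    int length = 7813;
    int workers = 10;
    int elementPerBatch =
        (int) Math.ceil((double) (length - START_OF_SERIALIZED_LONGS) / workers);

    for (int w = 0; w < workers; w++) {
      int start = START_OF_SERIALIZED_LONGS + w * elementPerBatch;
      int len = (w == workers - 1) ? length - start : elementPerBatch;
      // prints "1. worker: 5 -> 786" ... "10. worker: 7034 -> 7813", matching the comment
      System.out.println((w + 1) + ". worker: " + start + " -> " + (start + len));
    }
  }
}
```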

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=460968=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-460968
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 20/Jul/20 08:43
Start Date: 20/Jul/20 08:43
Worklog Time Spent: 10m 
  Work Description: rbalamohan commented on a change in pull request #1280:
URL: https://github.com/apache/hive/pull/1280#discussion_r457184037



##
File path: storage-api/src/java/org/apache/hive/common/util/BloomKFilter.java
##
@@ -362,16 +379,178 @@ public static void mergeBloomFilterBytes(
 
 // Just bitwise-OR the bits together - size/# functions should be the same,
 // rest of the data is serialized long values for the bitset which are 
supposed to be bitwise-ORed.
-for (int idx = START_OF_SERIALIZED_LONGS; idx < bf1Length; ++idx) {
+for (int idx = mergeStart; idx < mergeEnd; ++idx) {
   bf1Bytes[bf1Start + idx] |= bf2Bytes[bf2Start + idx];
 }
   }
 
+  public static void mergeBloomFilterBytesFromInputColumn(
+  byte[] bf1Bytes, int bf1Start, int bf1Length, long bf1ExpectedEntries,
+  BytesColumnVector inputColumn, int batchSize, boolean selectedInUse, 
int[] selected, int numThreads) {
+if (numThreads == 0) {
+  numThreads = Runtime.getRuntime().availableProcessors();
+}
+if (numThreads < 0) {
+  throw new RuntimeException("invalid number of threads: " + numThreads);
+}
+
+ExecutorService executor = Executors.newFixedThreadPool(numThreads);
+
+BloomFilterMergeWorker[] workers = new BloomFilterMergeWorker[numThreads];
+for (int f = 0; f < numThreads; f++) {
+  workers[f] = new BloomFilterMergeWorker(executor, bf1Bytes, bf1Start, 
bf1Length);
+}
+
+// split every bloom filter (represented by a part of a byte[]) across 
workers
+for (int j = 0; j < batchSize; j++) {
+  if (!selectedInUse && inputColumn.noNulls) {
+splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+inputColumn.length[j]);
+  } else if (!selectedInUse) {
+if (!inputColumn.isNull[j]) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[j], 
inputColumn.start[j],
+  inputColumn.length[j]);
+}
+  } else if (inputColumn.noNulls) {
+int i = selected[j];
+splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+inputColumn.length[i]);
+  } else {
+int i = selected[j];
+if (!inputColumn.isNull[i]) {
+  splitVectorAcrossWorkers(workers, inputColumn.vector[i], 
inputColumn.start[i],
+  inputColumn.length[i]);
+}
+  }
+}
+
+for (int f = 0; f < numThreads; f++) {
+  executor.submit(workers[f]);
+}
+
+executor.shutdown();
+try {
+  executor.awaitTermination(3600, TimeUnit.SECONDS);
+} catch (InterruptedException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  private static void splitVectorAcrossWorkers(BloomFilterMergeWorker[] 
workers, byte[] bytes,
+  int start, int length) {
+if (bytes == null || length == 0) {
+  return;
+}
+/*
+ * This will split a byte[] across workers as below:
+ * let's say there are 10 workers for 7813 bytes, in this case
+ * length: 7813, elementPerBatch: 781
+ * bytes assigned to workers: inclusive lower bound, exclusive upper bound
+ * 1. worker: 5 -> 786
+ * 2. worker: 786 -> 1567
+ * 3. worker: 1567 -> 2348
+ * 4. worker: 2348 -> 3129
+ * 5. worker: 3129 -> 3910
+ * 6. worker: 3910 -> 4691
+ * 7. worker: 4691 -> 5472
+ * 8. worker: 5472 -> 6253
+ * 9. worker: 6253 -> 7034
+ * 10. worker: 7034 -> 7813 (last element per batch is: 779)
+ *
+ * This way, a particular worker will be given with the same part
+ * of all bloom filters along with the shared base bloom filter,
+ * so the bitwise OR function will not be a subject of threading/sync 
issues.
+ */
+int elementPerBatch =
+(int) Math.ceil((double) (length - START_OF_SERIALIZED_LONGS) / 
workers.length);
+
+for (int w = 0; w < workers.length; w++) {
+  int modifiedStart = START_OF_SERIALIZED_LONGS + w * elementPerBatch;
+  int modifiedLength = (w == workers.length - 1)
+? length - (START_OF_SERIALIZED_LONGS + w * elementPerBatch) : 
elementPerBatch;
+
+  ElementWrapper wrapper =
+  new ElementWrapper(bytes, start, length, modifiedStart, 
modifiedLength);
+  workers[w].add(wrapper);
+}
+  }
+
+  public static byte[] getInitialBytes(long expectedEntries) {
+ByteArrayOutputStream bytesOut = null;
+try {
+  bytesOut = new ByteArrayOutputStream();
+  BloomKFilter bf = new BloomKFilter(expectedEntries);
+  BloomKFilter.serialize(bytesOut, bf);
+  return 

[jira] [Work logged] (HIVE-23880) Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge

2020-07-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23880?focusedWorklogId=460950=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-460950
 ]

ASF GitHub Bot logged work on HIVE-23880:
-

Author: ASF GitHub Bot
Created on: 20/Jul/20 07:53
Start Date: 20/Jul/20 07:53
Worklog Time Spent: 10m 
  Work Description: abstractdog opened a new pull request #1280:
URL: https://github.com/apache/hive/pull/1280


   …AFBloomFilterMerge
   
   Change-Id: I235248ad327b0cea91e637e74a0c67720710737e
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HIVE-X: Fix a typo in YYY)
   For more details, please see 
https://cwiki.apache.org/confluence/display/Hive/HowToContribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 460950)
Remaining Estimate: 0h
Time Spent: 10m

> Bloom filters can be merged in a parallel way in VectorUDAFBloomFilterMerge
> ---
>
> Key: HIVE-23880
> URL: https://issues.apache.org/jira/browse/HIVE-23880
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Attachments: lipwig-output3605036885489193068.svg
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Merging bloom filters in semijoin reduction can become the main bottleneck in 
> case of a large number of source mapper tasks (~1000, Map 1 in the example below) 
> and a large number of expected entries (50M) in the bloom filters.
> For example in TPCDS Q93:
> {code}
> select /*+ semi(store_returns, sr_item_sk, store_sales, 7000)*/ 
> ss_customer_sk
> ,sum(act_sales) sumsales
>   from (select ss_item_sk
>   ,ss_ticket_number
>   ,ss_customer_sk
>   ,case when sr_return_quantity is not null then 
> (ss_quantity-sr_return_quantity)*ss_sales_price
> else 
> (ss_quantity*ss_sales_price) end act_sales
> from store_sales left outer join store_returns on (sr_item_sk = 
> ss_item_sk
>and 
> sr_ticket_number = ss_ticket_number)
> ,reason
> where sr_reason_sk = r_reason_sk
>   and r_reason_desc = 'reason 66') t
>   group by ss_customer_sk
>   order by sumsales, ss_customer_sk
> limit 100;
> {code}
> At 10TB-30TB scale there is a chance that, out of 3-4 mins of query runtime, 1-2 
> mins are spent merging bloom filters (Reducer 2), as in:  
> [^lipwig-output3605036885489193068.svg] 
> {code}
> --
> VERTICES  MODESTATUS  TOTAL  COMPLETED  RUNNING  PENDING  
> FAILED  KILLED
> --
> Map 3 ..  llap SUCCEEDED  1  100  
>  0   0
> Map 1 ..  llap SUCCEEDED   1263   126300  
>  0   0
> Reducer 2 llap   RUNNING  1  010  
>  0   0
> Map 4 llap   RUNNING   6154  0  207 5947  
>  0   0
> Reducer 5 llapINITED 43  00   43  
>  0   0
> Reducer 6 llapINITED  1  001  
>  0   0
> --
> VERTICES: 02/06  [>>--] 16%   ELAPSED TIME: 149.98 s
> --
> {code}
> For example, 70M entries in a bloom filter leads to 436,465,696 bits, so 
> merging 1263 bloom filters means running ~1263 * 436,465,696 bitwise OR 
> operations, which is a very hot code path, but it can be parallelized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)