[jira] [Commented] (PARQUET-2202) Redundant String allocation on the hot path in CapacityByteArrayOutputStream.setByte

2023-03-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17699225#comment-17699225
 ] 

ASF GitHub Bot commented on PARQUET-2202:
-----------------------------------------

wgtmac merged PR #1035:
URL: https://github.com/apache/parquet-mr/pull/1035




> Redundant String allocation on the hot path in 
> CapacityByteArrayOutputStream.setByte
> -------------------------------------------------------------------------------
>
> Key: PARQUET-2202
> URL: https://issues.apache.org/jira/browse/PARQUET-2202
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Andrei Pangin
>Priority: Major
>  Labels: performance
> Attachments: profile-alloc.png, profile-cpu.png
>
>
> Profiling of a Spark application revealed a performance issue in production:
> {{CapacityByteArrayOutputStream.setByte}} consumed 2.2% of total CPU time and 
> made up 4.6% of total allocations. However, in the normal case, this method 
> should allocate nothing at all.
> Here is an excerpt from the async-profiler report.
> CPU profile:
> !profile-cpu.png|width=560!
> Allocation profile:
> !profile-alloc.png|width=560!
> The reason is a {{checkArgument()}} call with an unconditionally constructed 
> dynamic String:
> [https://github.com/apache/parquet-mr/blob/62b774cd0f0c60cfbe540bbfa60bee15929af5d4/parquet-common/src/main/java/org/apache/parquet/bytes/CapacityByteArrayOutputStream.java#L303]
> The suggested fix is to move String construction under the condition:
> {code:java}
> if (index >= bytesUsed) {
>   throw new IllegalArgumentException("Index: " + index +
>   " is >= the current size of: " + bytesUsed);
> }{code}
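
An equivalent fix, as discussed in the linked pull request, keeps {{checkArgument()}} but passes a format String plus arguments so the message is only rendered on the failure path. A minimal sketch, assuming a {{checkArgument(boolean, String, Object...)}}-style overload that formats with {{%s}} placeholders:

{code:java}
// Sketch only: the message String is formatted only if the check fails.
Preconditions.checkArgument(index < bytesUsed,
    "Index: %s is >= the current size of: %s", index, bytesUsed);
{code}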



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2202) Redundant String allocation on the hot path in CapacityByteArrayOutputStream.setByte

2023-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697862#comment-17697862
 ] 

ASF GitHub Bot commented on PARQUET-2202:
-----------------------------------------

wgtmac commented on code in PR #1035:
URL: https://github.com/apache/parquet-mr/pull/1035#discussion_r1129335329


##########
parquet-column/src/main/java/org/apache/parquet/column/ParquetProperties.java:
##########
@@ -477,15 +477,15 @@ public Builder withMaxBloomFilterBytes(int maxBloomFilterBytes) {
  * @return this builder for method chaining
  */
 public Builder withBloomFilterNDV(String columnPath, long ndv) {
-  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %d", columnPath, ndv);
+  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %s", columnPath, ndv);
   this.bloomFilterNDVs.withValue(columnPath, ndv);
   // Setting an NDV for a column implies writing a bloom filter
   this.bloomFilterEnabled.withValue(columnPath, true);
   return this;
 }
 
 public Builder withBloomFilterFPP(String columnPath, double fpp) {
-  Preconditions.checkArgument(fpp > 0.0 && fpp < 1.0, "Invalid FPP for column \"%s\": %d", columnPath, fpp);
+  Preconditions.checkArgument(fpp > 0.0 && fpp < 1.0, "Invalid FPP for column \"%s\": %s", columnPath, fpp);

Review Comment:
   Thanks for the explanation. SGTM.





> Redundant String allocation on the hot path in 
> CapacityByteArrayOutputStream.setByte
> -------------------------------------------------------------------------------
>
> Key: PARQUET-2202
> URL: https://issues.apache.org/jira/browse/PARQUET-2202
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Andrei Pangin
>Priority: Major
>  Labels: performance
> Attachments: profile-alloc.png, profile-cpu.png
>
>
> Profiling of a Spark application revealed a performance issue in production:
> {{CapacityByteArrayOutputStream.setByte}} consumed 2.2% of total CPU time and 
> made up 4.6% of total allocations. However, in the normal case, this method 
> should allocate nothing at all.
> Here is an excerpt from the async-profiler report.
> CPU profile:
> !profile-cpu.png|width=560!
> Allocation profile:
> !profile-alloc.png|width=560!
> The reason is a {{checkArgument()}} call with an unconditionally constructed 
> dynamic String:
> [https://github.com/apache/parquet-mr/blob/62b774cd0f0c60cfbe540bbfa60bee15929af5d4/parquet-common/src/main/java/org/apache/parquet/bytes/CapacityByteArrayOutputStream.java#L303]
> The suggested fix is to move String construction under the condition:
> {code:java}
> if (index >= bytesUsed) {
>   throw new IllegalArgumentException("Index: " + index +
>   " is >= the current size of: " + bytesUsed);
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2202) Redundant String allocation on the hot path in CapacityByteArrayOutputStream.setByte

2023-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697843#comment-17697843
 ] 

ASF GitHub Bot commented on PARQUET-2202:
-----------------------------------------

jerolba commented on PR #1035:
URL: https://github.com/apache/parquet-mr/pull/1035#issuecomment-1460008854

   > Is the change exhaustive?
   
   Yes, I reviewed all calls to `checkArgument` to fix incorrect usage of `%f` and `%d`.




> Redundant String allocation on the hot path in 
> CapacityByteArrayOutputStream.setByte
> -------------------------------------------------------------------------------
>
> Key: PARQUET-2202
> URL: https://issues.apache.org/jira/browse/PARQUET-2202
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Andrei Pangin
>Priority: Major
>  Labels: performance
> Attachments: profile-alloc.png, profile-cpu.png
>
>
> Profiling of a Spark application revealed a performance issue in production:
> {{CapacityByteArrayOutputStream.setByte}} consumed 2.2% of total CPU time and 
> made up 4.6% of total allocations. However, in the normal case, this method 
> should allocate nothing at all.
> Here is an excerpt from the async-profiler report.
> CPU profile:
> !profile-cpu.png|width=560!
> Allocation profile:
> !profile-alloc.png|width=560!
> The reason is a {{checkArgument()}} call with an unconditionally constructed 
> dynamic String:
> [https://github.com/apache/parquet-mr/blob/62b774cd0f0c60cfbe540bbfa60bee15929af5d4/parquet-common/src/main/java/org/apache/parquet/bytes/CapacityByteArrayOutputStream.java#L303]
> The suggested fix is to move String construction under the condition:
> {code:java}
> if (index >= bytesUsed) {
>   throw new IllegalArgumentException("Index: " + index +
>   " is >= the current size of: " + bytesUsed);
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2202) Redundant String allocation on the hot path in CapacityByteArrayOutputStream.setByte

2023-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697841#comment-17697841
 ] 

ASF GitHub Bot commented on PARQUET-2202:
-----------------------------------------

jerolba commented on code in PR #1035:
URL: https://github.com/apache/parquet-mr/pull/1035#discussion_r1129277490


##########
parquet-column/src/main/java/org/apache/parquet/column/ParquetProperties.java:
##########
@@ -477,15 +477,15 @@ public Builder withMaxBloomFilterBytes(int maxBloomFilterBytes) {
  * @return this builder for method chaining
  */
 public Builder withBloomFilterNDV(String columnPath, long ndv) {
-  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %d", columnPath, ndv);
+  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %s", columnPath, ndv);

Review Comment:
   The same logic as in the previous comment: the code fails when the check fails.





> Redundant String allocation on the hot path in 
> CapacityByteArrayOutputStream.setByte
> -------------------------------------------------------------------------------
>
> Key: PARQUET-2202
> URL: https://issues.apache.org/jira/browse/PARQUET-2202
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Andrei Pangin
>Priority: Major
>  Labels: performance
> Attachments: profile-alloc.png, profile-cpu.png
>
>
> Profiling of a Spark application revealed a performance issue in production:
> {{CapacityByteArrayOutputStream.setByte}} consumed 2.2% of total CPU time and 
> made up 4.6% of total allocations. However, in the normal case, this method 
> should allocate nothing at all.
> Here is an excerpt from the async-profiler report.
> CPU profile:
> !profile-cpu.png|width=560!
> Allocation profile:
> !profile-alloc.png|width=560!
> The reason is a {{checkArgument()}} call with an unconditionally constructed 
> dynamic String:
> [https://github.com/apache/parquet-mr/blob/62b774cd0f0c60cfbe540bbfa60bee15929af5d4/parquet-common/src/main/java/org/apache/parquet/bytes/CapacityByteArrayOutputStream.java#L303]
> The suggested fix is to move String construction under the condition:
> {code:java}
> if (index >= bytesUsed) {
>   throw new IllegalArgumentException("Index: " + index +
>   " is >= the current size of: " + bytesUsed);
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2202) Redundant String allocation on the hot path in CapacityByteArrayOutputStream.setByte

2023-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697840#comment-17697840
 ] 

ASF GitHub Bot commented on PARQUET-2202:
-----------------------------------------

jerolba commented on code in PR #1035:
URL: https://github.com/apache/parquet-mr/pull/1035#discussion_r1129276294


##########
parquet-column/src/main/java/org/apache/parquet/column/ParquetProperties.java:
##########
@@ -477,15 +477,15 @@ public Builder withMaxBloomFilterBytes(int maxBloomFilterBytes) {
  * @return this builder for method chaining
  */
 public Builder withBloomFilterNDV(String columnPath, long ndv) {
-  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %d", columnPath, ndv);
+  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %s", columnPath, ndv);
   this.bloomFilterNDVs.withValue(columnPath, ndv);
   // Setting an NDV for a column implies writing a bloom filter
   this.bloomFilterEnabled.withValue(columnPath, true);
   return this;
 }
 
 public Builder withBloomFilterFPP(String columnPath, double fpp) {
-  Preconditions.checkArgument(fpp > 0.0 && fpp < 1.0, "Invalid FPP for column \"%s\": %d", columnPath, fpp);
+  Preconditions.checkArgument(fpp > 0.0 && fpp < 1.0, "Invalid FPP for column \"%s\": %s", columnPath, fpp);

Review Comment:
   The changed code was incorrect: if the check fails, it throws a `java.util.IllegalFormatConversionException` instead of the intended error message.
   
   The existing [code behind the checkArgument method](https://github.com/jerolba/parquet-mr/blob/5078cc629074046b5699544c33256ca980ab2761/parquet-common/src/main/java/org/apache/parquet/Preconditions.java#L243) converts all parameters to String before applying `String.format`.
   
   If `String.format` receives an argument of an unexpected type, it throws `IllegalFormatConversionException`.
   
   For example, try this:
   ```java
   String output = String.format("Integer value: %d", "10");
   ```
   
   Therefore, the existing Preconditions implementation forces messages to use `%s`.
   
   I reviewed all calls to `checkArgument` to fix the incorrect format specifiers.
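   
   A slightly larger, self-contained illustration of the same failure mode (hypothetical class name, not parquet-mr code; it assumes the arguments are stringified before formatting, as the linked Preconditions code does):
   ```java
   import java.util.IllegalFormatConversionException;
   
   public class FormatMismatchDemo {
     public static void main(String[] args) {
       long ndv = -1; // invalid on purpose, so the error message actually gets formatted
       try {
         // The arguments have already been converted to String, so "%d" sees a
         // String where it expects an integer type and String.format fails.
         String msg = String.format("Invalid NDV for column \"%s\": %d",
             String.valueOf("col_a"), String.valueOf(ndv));
         System.out.println(msg);
       } catch (IllegalFormatConversionException e) {
         // The real validation message is masked by the formatting error.
         System.out.println("Masked by: " + e);
       }
     }
   }
   ```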





> Redundant String allocation on the hot path in 
> CapacityByteArrayOutputStream.setByte
> -------------------------------------------------------------------------------
>
> Key: PARQUET-2202
> URL: https://issues.apache.org/jira/browse/PARQUET-2202
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Andrei Pangin
>Priority: Major
>  Labels: performance
> Attachments: profile-alloc.png, profile-cpu.png
>
>
> Profiling of a Spark application revealed a performance issue in production:
> {{CapacityByteArrayOutputStream.setByte}} consumed 2.2% of total CPU time and 
> made up 4.6% of total allocations. However, in the normal case, this method 
> should allocate nothing at all.
> Here is an excerpt from the async-profiler report.
> CPU profile:
> !profile-cpu.png|width=560!
> Allocation profile:
> !profile-alloc.png|width=560!
> The reason is a {{checkArgument()}} call with an unconditionally constructed 
> dynamic String:
> [https://github.com/apache/parquet-mr/blob/62b774cd0f0c60cfbe540bbfa60bee15929af5d4/parquet-common/src/main/java/org/apache/parquet/bytes/CapacityByteArrayOutputStream.java#L303]
> The suggested fix is to move String construction under the condition:
> {code:java}
> if (index >= bytesUsed) {
>   throw new IllegalArgumentException("Index: " + index +
>   " is >= the current size of: " + bytesUsed);
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2202) Redundant String allocation on the hot path in CapacityByteArrayOutputStream.setByte

2023-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697670#comment-17697670
 ] 

ASF GitHub Bot commented on PARQUET-2202:
-----------------------------------------

wgtmac commented on code in PR #1035:
URL: https://github.com/apache/parquet-mr/pull/1035#discussion_r1128836441


##########
parquet-column/src/main/java/org/apache/parquet/column/ParquetProperties.java:
##########
@@ -477,15 +477,15 @@ public Builder withMaxBloomFilterBytes(int maxBloomFilterBytes) {
  * @return this builder for method chaining
  */
 public Builder withBloomFilterNDV(String columnPath, long ndv) {
-  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %d", columnPath, ndv);
+  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %s", columnPath, ndv);
   this.bloomFilterNDVs.withValue(columnPath, ndv);
   // Setting an NDV for a column implies writing a bloom filter
   this.bloomFilterEnabled.withValue(columnPath, true);
   return this;
 }
 
 public Builder withBloomFilterFPP(String columnPath, double fpp) {
-  Preconditions.checkArgument(fpp > 0.0 && fpp < 1.0, "Invalid FPP for column \"%s\": %d", columnPath, fpp);
+  Preconditions.checkArgument(fpp > 0.0 && fpp < 1.0, "Invalid FPP for column \"%s\": %s", columnPath, fpp);

Review Comment:
   Is `%f` better for `fpp`?



##########
parquet-column/src/main/java/org/apache/parquet/column/ParquetProperties.java:
##########
@@ -477,15 +477,15 @@ public Builder withMaxBloomFilterBytes(int maxBloomFilterBytes) {
  * @return this builder for method chaining
  */
 public Builder withBloomFilterNDV(String columnPath, long ndv) {
-  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %d", columnPath, ndv);
+  Preconditions.checkArgument(ndv > 0, "Invalid NDV for column \"%s\": %s", columnPath, ndv);

Review Comment:
   Why use `%s` for a `long`?





> Redundant String allocation on the hot path in 
> CapacityByteArrayOutputStream.setByte
> -------------------------------------------------------------------------------
>
> Key: PARQUET-2202
> URL: https://issues.apache.org/jira/browse/PARQUET-2202
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Andrei Pangin
>Priority: Major
>  Labels: performance
> Attachments: profile-alloc.png, profile-cpu.png
>
>
> Profiling of a Spark application revealed a performance issue in production:
> {{CapacityByteArrayOutputStream.setByte}} consumed 2.2% of total CPU time and 
> made up 4.6% of total allocations. However, in the normal case, this method 
> should allocate nothing at all.
> Here is an excerpt from the async-profiler report.
> CPU profile:
> !profile-cpu.png|width=560!
> Allocation profile:
> !profile-alloc.png|width=560!
> The reason is a {{checkArgument()}} call with an unconditionally constructed 
> dynamic String:
> [https://github.com/apache/parquet-mr/blob/62b774cd0f0c60cfbe540bbfa60bee15929af5d4/parquet-common/src/main/java/org/apache/parquet/bytes/CapacityByteArrayOutputStream.java#L303]
> The suggested fix is to move String construction under the condition:
> {code:java}
> if (index >= bytesUsed) {
>   throw new IllegalArgumentException("Index: " + index +
>   " is >= the current size of: " + bytesUsed);
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2202) Redundant String allocation on the hot path in CapacityByteArrayOutputStream.setByte

2023-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693672#comment-17693672
 ] 

ASF GitHub Bot commented on PARQUET-2202:
-----------------------------------------

jerolba opened a new pull request, #1035:
URL: https://github.com/apache/parquet-mr/pull/1035

   The [original Jira ticket](https://issues.apache.org/jira/browse/PARQUET-2202) references a concrete misuse of the `Preconditions.checkArgument` method, where a String is built before the check is evaluated, creating overhead even though in practically 100% of cases the composed String is never used.
   
   The proposed solution inlines the call to `Preconditions.checkArgument`, but the correct approach is to call `checkArgument` with a format String that is only rendered if the argument check fails.
   
   A similar issue occurs in the 
[constructor](https://github.com/apache/parquet-mr/blob/62b774cd0f0c60cfbe540bbfa60bee15929af5d4/parquet-common/src/main/java/org/apache/parquet/bytes/CapacityByteArrayOutputStream.java#L153)
 of the `CapacityByteArrayOutputStream` class, where the error message is always created with `String.format` instead of passing the template String and its parameters to the `checkArgument` method. This performance issue is also visible in profiling.
   
   The PR fixes both cases and reviews the usage of the `Preconditions` class to ensure that error messages are not computed before the boolean expression is checked.
   
   I've also reviewed the `Preconditions` methods to improve performance. When you call a method that takes a varargs parameter, Java internally allocates an array containing all the values. To avoid this allocation when the number of parameters is small, it is recommended to overload the method with fixed-arity versions for different argument counts. This approach is heavily used in logging frameworks and in the [Guava Preconditions 
implementation](https://github.com/google/guava/blob/4312d949967f3fb245636f66437a00dd8c346d38/guava/src/com/google/common/base/Preconditions.java#L118).
   Because in nearly 100% of cases the condition holds, the array associated with the varargs call to the `Preconditions.strings` method will never be created.
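   
   A minimal sketch of that overloading pattern (illustrative only; the class name and signatures are assumptions, not the actual parquet-mr `Preconditions` API):
   ```java
   // Fixed-arity overloads avoid allocating the varargs Object[] on every call;
   // the varargs fallback is only needed for call sites with many parameters.
   public final class ChecksSketch {
   
     public static void checkArgument(boolean isValid, String message, Object arg0) {
       if (!isValid) {
         throw new IllegalArgumentException(String.format(message, String.valueOf(arg0)));
       }
     }
   
     public static void checkArgument(boolean isValid, String message, Object arg0, Object arg1) {
       if (!isValid) {
         throw new IllegalArgumentException(
             String.format(message, String.valueOf(arg0), String.valueOf(arg1)));
       }
     }
   
     public static void checkArgument(boolean isValid, String message, Object... args) {
       if (!isValid) {
         Object[] strings = new Object[args.length];
         for (int i = 0; i < args.length; i++) {
           strings[i] = String.valueOf(args[i]); // stringify so callers can always use %s
         }
         throw new IllegalArgumentException(String.format(message, strings));
       }
     }
   
     private ChecksSketch() {}
   }
   ```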
   
   Because all changes are related to `Preconditions.checkArgument`, I've created a single PR; I can split it into multiple PRs if needed. Each commit of the PR makes one type of change, but I can squash them into a single commit if needed.
   
   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [x] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references 
them in the PR title. For example, "PARQUET-2202: My Parquet PR"
 - https://issues.apache.org/jira/browse/PARQUET-2202
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [x] My PR adds test cases to the following unit tests:
 1. 
https://github.com/jerolba/parquet-mr/blob/review_usage_of_preconditions_checkargument/parquet-common/src/test/java/org/apache/parquet/TestPreconditions.java
   
   ### Commits
   
   - [x] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [x] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   




> Redundant String allocation on the hot path in 
> CapacityByteArrayOutputStream.setByte
> -------------------------------------------------------------------------------
>
> Key: PARQUET-2202
> URL: https://issues.apache.org/jira/browse/PARQUET-2202
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Andrei Pangin
>Priority: Major
>  Labels: performance
> Attachments: profile-alloc.png, profile-cpu.png
>
>
> Profiling of a Spark application revealed a performance issue in production:
> {{CapacityByteArrayOutputStream.setByte}} consumed 2.2% of total CPU time and 
> made up 4.6% of total allocations. However, in the normal case, this method 
> should allocate nothing at all.
> Here is an excerpt from the async-profiler report.
> CPU profile:
> !profile-cpu.png|width=560!
>