[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365975#comment-16365975
 ] 

Steve Loughran commented on HADOOP-15204:
-

bq. I plan to post an initial patch where I will convert the Ozone usage to 
getStorageUnits and I will follow up with trunk later.

that'd be great! 
I suspect hadoop-* is the primary user of the getLongBytes call, so patching it 
in our code will fix things.

Maybe consider tagging the method as Deprecated in the same patch, to warn 
other people.
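The deprecation suggestion might look roughly like this; a minimal sketch in which the legacy accessor forwards to a new parsing path. Class name, method bodies, and the supported suffixes here are illustrative assumptions, not the committed Hadoop code.

```java
import java.util.Locale;

// Hypothetical sketch: mark the legacy accessor @Deprecated and forward it
// to the new parser so javadoc and IDE warnings steer callers toward the
// getStorageSize family. Not the actual Hadoop implementation.
class ConfSketch {
  /** @deprecated prefer the getStorageSize family of getters. */
  @Deprecated
  static long getLongBytes(String value) {
    return (long) parseToBytes(value);
  }

  // Tiny stand-in for the real StorageSize parser; binary (1024) multipliers.
  static double parseToBytes(String value) {
    String s = value.trim().toLowerCase(Locale.ROOT);
    long mult = 1L;
    if (s.endsWith("kb")) { mult = 1L << 10; s = s.substring(0, s.length() - 2); }
    else if (s.endsWith("mb")) { mult = 1L << 20; s = s.substring(0, s.length() - 2); }
    else if (s.endsWith("gb")) { mult = 1L << 30; s = s.substring(0, s.length() - 2); }
    return Double.parseDouble(s.trim()) * mult;
  }
}
```

Forwarding the old method keeps both accessors accepting the same unit grammar while the annotation nudges callers to migrate.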

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HADOOP-15204.001.patch, HADOOP-15204.002.patch, 
> HADOOP-15204.003.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users to specify units like KB, MB, GB, etc. This JIRA is inspired by 
> HADOOP-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for the Ozone code base; this JIRA hopes to do the same thing for 
> configs that deal with disk and memory usage.
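A self-contained sketch of the idea in the description: parse a "<number> <unit>" config string and convert it into a caller-chosen target unit. The unit set, the binary (1024-based) multipliers, and the method name are assumptions for illustration; the patch's StorageUnit/StorageSize classes are the authority.

```java
import java.util.Locale;

// Rough sketch of the proposed getStorageSize behavior: accept values such
// as "128 MB" and return them scaled to a requested unit.
class StorageSizeDemo {
  enum Unit {
    B(1L), KB(1L << 10), MB(1L << 20), GB(1L << 30), TB(1L << 40);
    final long bytes;
    Unit(long bytes) { this.bytes = bytes; }
  }

  static double getStorageSize(String raw, Unit target) {
    String s = raw.trim().toLowerCase(Locale.ROOT);
    // Try the longest suffixes first so "mb" is not mistaken for a bare "b".
    Unit[] units = {Unit.TB, Unit.GB, Unit.MB, Unit.KB, Unit.B};
    for (Unit u : units) {
      String suffix = u.name().toLowerCase(Locale.ROOT);
      if (s.endsWith(suffix)) {
        double n = Double.parseDouble(
            s.substring(0, s.length() - suffix.length()).trim());
        return n * u.bytes / target.bytes;
      }
    }
    return Double.parseDouble(s) / target.bytes; // bare number: assume bytes
  }
}
```

This mirrors the {{getTimeDuration}} shape the thread compares against: the string carries the unit, and the caller names the unit it wants back.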



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-15 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365921#comment-16365921
 ] 

Anu Engineer commented on HADOOP-15204:
---

{quote}
Given its in, I'd like to see a followup "move existing uses of getLongBytes to 
getStorageUnits"
{quote}

Will do, I plan to post an initial patch where I will convert the Ozone usage 
to getStorageUnits and I will follow up with trunk later.




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16365528#comment-16365528
 ] 

Steve Loughran commented on HADOOP-15204:
-

Catching up on this.

I'm disappointed that getLongBytes wasn't switched to use this behind the 
scenes. As it stands, unless every use of that method across the Hadoop 
libraries is changed to the new one, you will be able to specify sizes in some 
configs which aren't valid in others, which means we are exposing historical 
implementation details to people writing configuration files.

Given it's in, I'd like to see a follow-up: "move existing uses of 
getLongBytes to getStorageUnits".




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364824#comment-16364824
 ] 

Hudson commented on HADOOP-15204:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13659 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13659/])
HADOOP-15204. Add Configuration API for parsing storage sizes. (aengineer: rev 
8f66affd6265c9e4231e18d7ca352fb3035dae9a)
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestStorageUnit.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/StorageUnit.java
* (add) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/StorageSize.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java





[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-14 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364772#comment-16364772
 ] 

Chris Douglas commented on HADOOP-15204:


+1 lgtm




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-14 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364761#comment-16364761
 ] 

Anu Engineer commented on HADOOP-15204:
---

[~chris.douglas] / [~ste...@apache.org] Please let me know if you have any more 
comments. If this looks good, I will make corresponding changes in Ozone branch 
to use this feature. Thank you for the time and comments.




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-05 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16353064#comment-16353064
 ] 

genericqa commented on HADOOP-15204:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 241 unchanged - 0 fixed = 242 total (was 241) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 44s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15204 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909288/HADOOP-15204.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6d31ca2ae4b2 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 33e6cdb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14073/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14073/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14073/testReport/ |
| Max. process+thread count | 1396 (vs. ulimit of 

[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352956#comment-16352956
 ] 

Anu Engineer commented on HADOOP-15204:
---

[~ste...@apache.org], [~chris.douglas] Thanks for the comments. Patch v3 
addresses all the comments.
Details below:
bq. IDE shuffled imports; please revert
Thanks for catching this; fixed.
bq. parseFromString() can just use Precondition.checkArgument for validation
Fixed.
bq. validation/parse errors to include the value at error and, ideally, the 
config option too. Compare a stack trace saying "Value not in expected format" 
with one saying "value of option 'buffer.size' not in expected format '54exa'"
Fixed.
bq. sanitizedValue.toLowerCase() should specify the locale for case 
conversion; same everywhere else it is used.
Fixed.
bq. What if a caller doesn't want to provide a string default value to the new 
getters, but just a number? That would let me return something like -1 to mean 
"no value set", which I can't do with the current API.
There is an API that takes a default float argument, and a default string 
argument with the storage unit.
bq. getStorageSize(String name, String defaultValue, StorageUnit targetUnit) 
-- does this come up often?
We define the standard defaults as "5 GB", etc., so yes, it is a convenient 
function.
bq. I'd lean toward MB instead of MEGABYTES, and similar.
Fixed. I agree, thanks for this suggestion, that does improve code readability.
bq. Please, no. This is the silliest dependency we have on Guava.
Fixed. I still use it in Configuration, since it is already in the file as an 
import.
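The error-reporting review point can be sketched as follows. The helper name and exact message format are illustrative assumptions, not the patch's actual wording; the point is that the exception names both the config option and the offending raw value.

```java
// Sketch of the review point: a parse failure should report both the config
// key and the raw value, so the stack trace is actionable on its own.
class ParseErrorDemo {
  static long parseBytes(String key, String raw) {
    try {
      return Long.parseLong(raw.trim());
    } catch (NumberFormatException e) {
      throw new IllegalArgumentException(
          "Value of option '" + key + "' not in expected format: \"" + raw + "\"", e);
    }
  }
}
```

A trace reading "value of option 'buffer.size' not in expected format: "54exa"" points straight at the bad setting, where a bare "Value not in expected format" does not.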
 




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-03 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351490#comment-16351490
 ] 

Chris Douglas commented on HADOOP-15204:


Sorry, missed this
bq. Precondition.checkArgument for validation
Please, no. This is the silliest dependency we have on Guava.




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-02 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350889#comment-16350889
 ] 

Chris Douglas commented on HADOOP-15204:


bq. I find getTimeDuration API extremely intuitive, hence imitating that for 
this API
Soright; I only mention it as relevant context.

bq. Rounding causes a significant loss when we convert from x bytes to y 
exabytes. Hence I voted for the least element of surprise and decided to return 
double
Is it sufficient for the caller to provide the precision? If the caller wants 
petabytes with some decimal places, then they can request terabytes. If they 
want to ensure the conversion is within some epsilon, then they can request the 
value with high precision and measure the loss. Even rounding decisions can be 
left to the caller. Instead of passing this context into {{Configuration}} (or 
setting defaults), its role can be limited to converting and scaling the 
stringly-typed value.

Similarly:
bq. That would let me return something like -1 to mean "no value set", which I 
can't do with the current API.
{{Configuration}} supports that with a raw {{get(key)}}. It's only where we 
have the default in hand that it provides typed getters.

bq. This is the curse of writing a unit as a library; we need to be cognizant 
of that single use case which will break us. Hence I have used bigDecimal to be 
safe and correct and return doubles. It yields values that people expect.
Sure, I only mention it because it differs from {{getTimeDuration}}. With 
{{TimeUnit}}, a caller could, with low false positives, check if the result was 
equal to max to detect overflow. Doing that here would have a higher false 
positive rate, so the {{BigDecimal}} approach with explicit precision is 
superior.

Minor:
* In this overload:
{noformat}
+  public double getStorageSize(String name, String defaultValue,
+  StorageUnit targetUnit) {
{noformat}
Does this come up often? {{getTimeDuration}} assumes that the default will be 
in the same unit as the conversion. So in your example, one would write 
{{getStorageSize("key", 5000, MEGABYTES)}}. It's less elegant, but it type 
checks.
* I'd lean toward {{MB}} instead of {{MEGABYTES}}, and similar. Even as a 
static import, those are unlikely to collide and they're equally readable.
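The naming suggestion can be seen with a toy enum (constant names and the binary multipliers are assumptions for illustration, not the committed StorageUnit):

```java
// Toy illustration of the naming point: short constants like MB read as well
// as MEGABYTES under a static import and are unlikely to collide with other
// identifiers. Values assume binary (1024-based) units.
enum StorageUnitSketch {
  KB(1L << 10), MB(1L << 20), GB(1L << 30), TB(1L << 40);
  final long bytes;
  StorageUnitSketch(long bytes) { this.bytes = bytes; }
}
// With a static import, a call reads naturally and the unit type-checks:
//   conf.getStorageSize("dfs.blocksize", 128, MB);
```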




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349835#comment-16349835
 ] 

Steve Loughran commented on HADOOP-15204:
-

Ah I see your reasoning.

Yes, it would be good, but I would like it to work outside of Configuration 
itself; look at {{S3ATestUtils.getTestPropertyBytes}} as an example of a place 
which could adopt this.

{{Configuration.getLongBytes()}} must support the exact same set of units, 
returning the result explicitly as bytes. Otherwise you'd have some config 
options which take the new language and some old ones which don't, with no 
valid explanation of why you can't say "32MB" for the "fs.s3a.blocksize" 
option other than "they wrote it before the new API was added". This 
particularly matters when you start sharing properties via ${property} refs. 
It would not make sense for a storage capacity to be valid in some fields but 
not others.


Quick code review:

* IDE shuffled imports; please revert
* parseFromString() can just use Precondition.checkArgument for validation
* validation/parse errors to include the value at error and, ideally, the 
config option too. Compare a stack trace saying "Value not in expected format" 
with one saying "value of option 'buffer.size' not in expected format '54exa'"
* {{sanitizedValue.toLowerCase()}} should specify the locale for case 
conversion; same everywhere else it is used.
* What if a caller doesn't want to provide a string default value to the new 
getters, but just a number? That would let me return something like -1 to mean 
"no value set", which I can't do with the current API.




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-01 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349769#comment-16349769
 ] 

genericqa commented on HADOOP-15204:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 241 unchanged - 0 fixed = 243 total (was 241) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15204 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908911/HADOOP-15204.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c4be2ccb2fe3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / aa45faf |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14063/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14063/testReport/ |
| Max. process+thread count | 1629 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14063/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-01 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349709#comment-16349709
 ] 

Anu Engineer commented on HADOOP-15204:
---

[~chris.douglas] I have attached patch v2 that takes care of the comments and 
checkstyle issues.
{quote}I find the API intuitive, but that is not universal (e.g., HDFS-9847). 
Explaining it has taken more cycles than I expected, and perhaps more than a 
good API should.
{quote}
Thank you for the time and comments. Personally, I find the _getTimeDuration_ 
API extremely intuitive, hence imitating it for this API. As for others, you 
have done the heavy lifting of educating the crowd; I will just ride on your 
coattails.
{quote}TERRABYTES is misspelled.
{quote}
Thanks for catching that; fixed.
{quote}Is long insufficient as a return type for getStorageSize? I appreciate 
future-proofing, but for Configuration values, that's what, ~8 petabytes?
{quote}
I started with long; the real issue was returning rounded numbers for large 
storage units. Rounding causes a significant loss when we convert from _x 
bytes_ to _y exabytes_. Hence I voted for the principle of least surprise and 
decided to return double.
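To make the loss concrete, here is a hedged illustration (the names and values are mine, not from the patch): with integer division, everything below a whole exabyte disappears, while a double keeps the fractional part.

```java
public class RoundingLossDemo {
  static final long EB = 1L << 60; // one exbibyte, in bytes

  // Integer division: anything below a whole exabyte is truncated away.
  static long toExaLong(long bytes) {
    return bytes / EB;
  }

  // Floating point keeps the fractional exabytes.
  static double toExaDouble(long bytes) {
    return (double) bytes / EB;
  }

  public static void main(String[] args) {
    long bytes = EB + EB / 2; // 1.5 EB worth of bytes
    System.out.println(toExaLong(bytes));   // 1
    System.out.println(toExaDouble(bytes)); // 1.5
  }
}
```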
{quote}Why ROUND_UP of the options? Just curious.
{quote}
I was using RoundingMode.HALF_UP in divide and now I do that for multiply too, 
just to be consistent.

The reason for using 
[HALF_UP|https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html#ROUND_HALF_UP]
 is that it is probably the least surprising result for most users. From the 
Doc: {{Note that this is the rounding mode that most of us were taught in grade 
school.}}
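The "grade school" behavior of HALF_UP can be shown with a minimal JDK-only snippet (illustrative values, not from the patch): ties round away from zero.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class HalfUpDemo {
  // Round to a whole number using HALF_UP: nearest neighbor,
  // ties away from zero.
  static BigDecimal round(String v) {
    return new BigDecimal(v).setScale(0, RoundingMode.HALF_UP);
  }

  public static void main(String[] args) {
    System.out.println(round("2.5"));  // 3
    System.out.println(round("2.4"));  // 2
    System.out.println(round("-2.5")); // -3
  }
}
```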
{quote}Storage units are more likely to be exact powers
{quote}
This is the curse of writing a unit conversion as a library; we need to be 
cognizant of the single use case that will break us. Hence I have used BigDecimal 
to be safe and correct, and return doubles; that yields the values people expect.

 

> Add Configuration API for parsing storage sizes
> ---
>
> Key: HADOOP-15204
> URL: https://issues.apache.org/jira/browse/HADOOP-15204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HADOOP-15204.001.patch, HADOOP-15204.002.patch
>
>
> Hadoop has a lot of configurations that specify memory and disk size. This 
> JIRA proposes to add an API like {{Configuration.getStorageSize}} which will 
> allow users
>  to specify units like KB, MB, GB etc. This is JIRA is inspired by 
> HADOOP-8608 and Ozone. Adding {{getTimeDuration}} support was a great 
> improvement for ozone code base, this JIRA hopes to do the same thing for 
> configs that deal with disk and memory usage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349492#comment-16349492
 ] 

Chris Douglas commented on HADOOP-15204:


bq. As the author of HADOOP-8608, I would appreciate any perspectives you have 
on this JIRA.
I find the API intuitive, but that is not universal (e.g., HDFS-9847). 
Explaining it has taken more cycles than I expected, and perhaps more than a 
good API should.
* {{TERRABYTES}} is misspelled.
* Is {{long}} insufficient as a return type for {{getStorageSize}}? I 
appreciate future-proofing, but for {{Configuration}} values, that's what, ~8 
petabytes? I haven't looked carefully at the semantics of {{BigDecimal}}, but 
the comments imply that {{setScale}} is used to guarantee the result will fit. 
An overload of {{setStorageSize}} taking {{double}} might make sense, as would 
using doubles in the {{String}} overload.
* Why 
[ROUND_UP|https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html#ROUND_UP]
 of the options? Just curious.
* {{TimeUnit}} uses min/max values for Long (e.g., 
[TimeUnit::toNanos|https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html#toNanos-long-])
 for overflow/underflow. Storage units are more likely to be exact powers of 
two so that may not be appropriate.
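The {{TimeUnit}} clamping behavior referenced above is easy to verify directly; this snippet only demonstrates that JDK behavior, nothing from the patch:

```java
import java.util.concurrent.TimeUnit;

public class SaturationDemo {
  public static void main(String[] args) {
    // TimeUnit conversions saturate rather than overflow: converting the
    // maximum/minimum long number of days to nanoseconds clamps to the
    // long range instead of wrapping around.
    System.out.println(TimeUnit.DAYS.toNanos(Long.MAX_VALUE) == Long.MAX_VALUE); // true
    System.out.println(TimeUnit.DAYS.toNanos(Long.MIN_VALUE) == Long.MIN_VALUE); // true
  }
}
```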




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-01 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349191#comment-16349191
 ] 

Anu Engineer commented on HADOOP-15204:
---

bq.[~anu] , you mean HADOOP-8608?

[~xyao] Thanks for catching that. Fixed.




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-01 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349121#comment-16349121
 ] 

Anu Engineer commented on HADOOP-15204:
---

[~ste...@apache.org] I did look at getLongBytes. The issue is that it does not 
provide the same code improvement as {{getTimeDuration}} and 
{{setTimeDuration}}. 

Let me support my assertion with some examples.

Suppose I need to read the standard configured container size, which is a 
configurable parameter in Ozone.

Here is how the code would look with get/setStorageSize:
{code:java}
setStorageSize(OZONE_CONTAINER_SIZE, 1, GIGABYTES);
// let us suppose we want to read back this config value as MBs.
long valueInMB = getStorageSize(OZONE_CONTAINER_SIZE, "5 GB", MEGABYTES);{code}
Now let us see how we would write this code using getLongBytes:
{code:java}
// There is no symmetric function called setLongBytes, so I have to
// fall back to setLong.
//
// Here is my first attempt.
setLong(OZONE_CONTAINER_SIZE, 1048576);
// This looks bad, since I am not sure whether OZONE_CONTAINER_SIZE is in
// MB, GB or something else, so many keys get tagged as
// OZONE_CONTAINER_SIZE_GB.

// Now the second attempt.
setLong(OZONE_CONTAINER_SIZE_GB, 1048576);
// But this is bad too: now I cannot set the container size in MB, since
// setLong takes a whole number and not a fraction.

// So now the third attempt -- convert all fields to bytes.
setLong(OZONE_CONTAINER_SIZE_BYTES, 5368709120L); // The default is 5 GB.
{code}
Before you think this is a made-up example: this is part of the changes we 
tried, which triggered this JIRA.

Now let us go back to the get examples:
{code:java}
// getLongBytes forces us to write code in a way that does not match the
// rest of the code base, like getTimeDuration. For example, suppose I want
// to read the value in MB, and ozone-default.xml is configured in GB.
// This is the case that this single line solves:
// getStorageSize(OZONE_CONTAINER_SIZE, "5 GB", MEGABYTES);

long defaultValueInBytes = getDefaultValue(OZONE_CONTAINER_SIZE_DEFAULT); // in bytes
long valueInMB = BYTES.fromBytes(getLongBytes(OZONE_CONTAINER_SIZE,
    defaultValueInBytes)).toMBs();
{code}
 

Now imagine repeating this code many times all over the code base. Besides, 
BYTES.fromBytes(xxx).toMBs() is a function from this patch, so we would need 
some equivalent code anyway.

 

In other words, I submit that the following factors make getLongBytes a less 
desirable function compared to getTimeDuration/getStorageSize:
 * Lack of a symmetric function - without a set function, getLongBytes 
degenerates into a messy set of multiplications and divisions each time we have 
to use a storage unit. With this patch, those issues are cleanly isolated to a 
single place.
 * Lack of a formal storage unit - the lack of a formal type like 
TimeUnit/StorageUnit makes the code less readable (see the setStorageSize 
example, where the context also tells you the unit we are operating with).
 * Does not suit our usage pattern - Ozone code follows the patterns in HDFS, 
and the default value mapping is not handled well by getLongBytes.
 * Units and conversion are needed - in Ozone, there are several places where 
we convert these numbers. Users can specify quotas in easy-to-read storage 
units, like 5 GB or 10 TB, and we have dedicated code for handling that. We also 
use storage numbers to specify how large the off-heap size should be, or, as I 
am showing in this example, container sizes.
 * getLongBytes by itself does not address the lack of storage units; what this 
patch does is introduce a class very similar to {{TimeUnit}}. This makes the 
code more readable and easy to maintain.
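As a rough illustration of the factors above, a TimeUnit-style storage unit might look like the following sketch. The name, unit set, and conversion method here are mine for illustration; the patch's actual StorageUnit class may differ.

```java
// Illustrative sketch only: a TimeUnit-style enum for storage sizes.
public enum StorageUnitSketch {
  BYTES(1L), KB(1L << 10), MB(1L << 20), GB(1L << 30), TB(1L << 40);

  private final long bytes;

  StorageUnitSketch(long bytes) {
    this.bytes = bytes;
  }

  /** Convert a value in this unit to the target unit, as a double. */
  public double convert(double value, StorageUnitSketch target) {
    return value * bytes / target.bytes;
  }
}
```

With such a type, the unit is explicit at the call site, e.g. `GB.convert(5, MB)` yields `5120.0`.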

 

 


[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-02-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349070#comment-16349070
 ] 

Steve Loughran commented on HADOOP-15204:
-

We already have this with getLongBytes(), which does K, M, G, T, P

If there's one change I'd like here, it'd be to pull that parsing (and that of 
time) out of Config into something standalone which can be used elsewhere, e.g. 
parsing JSON and system properties in test runs
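A standalone version of that K/M/G/T/P suffix parsing could be sketched as below. This is my own simplified illustration, not Hadoop's actual implementation; the real getLongBytes differs in details such as error handling.

```java
// Standalone sketch of K/M/G/T/P(/E) binary-suffix parsing,
// decoupled from Configuration. Case-insensitive; no suffix means bytes.
public class SizeParser {
  public static long parseBytes(String s) {
    s = s.trim().toLowerCase();
    long multiplier = 1L;
    char last = s.charAt(s.length() - 1);
    int shift = "kmgtpe".indexOf(last) + 1; // k=1, m=2, ... e=6
    if (shift > 0) {
      multiplier = 1L << (10 * shift);      // 1024^shift
      s = s.substring(0, s.length() - 1).trim();
    }
    return Long.parseLong(s) * multiplier;
  }

  public static void main(String[] args) {
    System.out.println(parseBytes("5g")); // 5368709120
    System.out.println(parseBytes("64")); // 64
  }
}
```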




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16348050#comment-16348050
 ] 

Xiaoyu Yao commented on HADOOP-15204:
-

[~anu], you mean HADOOP-8608?




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347954#comment-16347954
 ] 

genericqa commented on HADOOP-15204:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 10 new + 241 unchanged - 0 fixed = 251 total (was 241) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15204 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12908696/HADOOP-15204.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a8d53d326091 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0bee384 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14054/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14054/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14054/testReport/ |
| Max. process+thread count | 1500 (vs. ulimit of 

[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347890#comment-16347890
 ] 

Anu Engineer commented on HADOOP-15204:
---

[~chris.douglas] As the author of HDFS-8608, I would appreciate any 
perspectives you have on this JIRA. 




[jira] [Commented] (HADOOP-15204) Add Configuration API for parsing storage sizes

2018-01-31 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347884#comment-16347884
 ] 

Anu Engineer commented on HADOOP-15204:
---

[~arpitagarwal] ,[~xyao] ,[~nandakumar131],[~elek],[~msingh] ,[~jnp] Please 
take a look when you get a chance. 
