[jira] [Commented] (HADOOP-13948) Create automated scripts to update LICENSE/NOTICE

2018-04-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438224#comment-16438224
 ] 

Xiao Chen commented on HADOOP-13948:


Unassigned from myself, as I won't have the cycles to get this to a 
commit-ready state. But patch 1 should work if anyone wants to pick it up.

> Create automated scripts to update LICENSE/NOTICE
> -
>
> Key: HADOOP-13948
> URL: https://issues.apache.org/jira/browse/HADOOP-13948
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Xiao Chen
>Priority: Major
> Attachments: HADOOP-13948.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13948) Create automated scripts to update LICENSE/NOTICE

2018-04-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned HADOOP-13948:
--

Assignee: (was: Xiao Chen)

> Create automated scripts to update LICENSE/NOTICE
> -
>
> Key: HADOOP-13948
> URL: https://issues.apache.org/jira/browse/HADOOP-13948
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Xiao Chen
>Priority: Major
> Attachments: HADOOP-13948.01.patch
>
>







[jira] [Work stopped] (HADOOP-13948) Create automated scripts to update LICENSE/NOTICE

2018-04-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13948 stopped by Xiao Chen.
--
> Create automated scripts to update LICENSE/NOTICE
> -
>
> Key: HADOOP-13948
> URL: https://issues.apache.org/jira/browse/HADOOP-13948
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HADOOP-13948.01.patch
>
>







[jira] [Comment Edited] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-04-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438126#comment-16438126
 ] 

Lei (Eddy) Xu edited comment on HADOOP-15205 at 4/14/18 3:39 AM:
-

Hi, [~shv]

If we run "mvn deploy -Psign -DskipTests" as suggested on 
https://wiki.apache.org/hadoop/HowToRelease, no source jars are produced at all.

However, running "mvn deploy -Psign -DskipTests -Dgpg.executable=gpg2 
-Pdist,src,yarn-ui -Dtar" seems to work, as shown in the repository below:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/

"dev-support/bin/create-release --asfrelease --docker --dockercache" seems 
to work too.

Update:

Some packages have jars without sources:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/org/apache/hadoop/hadoop-client-runtime/3.0.2/

But others have sources:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/org/apache/hadoop/hadoop-hdfs-client/3.0.2/


was (Author: eddyxu):
Hi, [~shv]

If we run "mvn deploy -Psign -DskipTests" as suggested on 
https://wiki.apache.org/hadoop/HowToRelease, there is no source jars for all.

However, if run "mvn deploy -Psign -DskipTests -Dgpg.executable=gpg2 
-Pdist,src,yarn-ui -Dtar" seems to work, as the repository located below:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/

"dev-support/bin/create-release --asfrelease --docker --dockercache"  seems 
work too.

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 that 
> artifact has not been present at Maven Central. The last release which had 
> source attachments / javadocs was 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> This seems not to be limited to mapreduce, as the same change is present for 
> yarn-common as well:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/






[jira] [Commented] (HADOOP-15205) maven release: missing source attachments for hadoop-mapreduce-client-core

2018-04-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16438126#comment-16438126
 ] 

Lei (Eddy) Xu commented on HADOOP-15205:


Hi, [~shv]

If we run "mvn deploy -Psign -DskipTests" as suggested on 
https://wiki.apache.org/hadoop/HowToRelease, no source jars are produced at all.

However, running "mvn deploy -Psign -DskipTests -Dgpg.executable=gpg2 
-Pdist,src,yarn-ui -Dtar" seems to work, as shown in the repository below:

https://repository.apache.org/content/repositories/orgapachehadoop-1102/

"dev-support/bin/create-release --asfrelease --docker --dockercache" seems 
to work too.
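
As a side note, when checking a staged repository by hand, a small script can 
flag artifacts that are missing their -sources.jar companions. This is a 
hypothetical helper, not part of the release tooling; the directory tree below 
merely mimics the Maven repository layout of the two examples above:

```shell
# Build a throwaway tree mimicking the staged Maven repository layout,
# then list jars that have no matching -sources.jar next to them.
repo=$(mktemp -d)
mkdir -p "$repo/org/apache/hadoop/hadoop-client-runtime/3.0.2" \
         "$repo/org/apache/hadoop/hadoop-hdfs-client/3.0.2"
touch "$repo/org/apache/hadoop/hadoop-client-runtime/3.0.2/hadoop-client-runtime-3.0.2.jar"
touch "$repo/org/apache/hadoop/hadoop-hdfs-client/3.0.2/hadoop-hdfs-client-3.0.2.jar" \
      "$repo/org/apache/hadoop/hadoop-hdfs-client/3.0.2/hadoop-hdfs-client-3.0.2-sources.jar"

# Main jars only: skip sources/javadoc/tests classifiers, then report any
# jar whose sibling -sources.jar does not exist.
missing=$(find "$repo" -name '*.jar' \
    ! -name '*-sources.jar' ! -name '*-javadoc.jar' ! -name '*-tests.jar' |
  while read -r jar; do
    [ -f "${jar%.jar}-sources.jar" ] || basename "$jar"
  done)
echo "$missing"
```

Run against a real staging checkout, this would print only the artifacts that 
need fixing (here, hadoop-client-runtime).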

> maven release: missing source attachments for hadoop-mapreduce-client-core
> --
>
> Key: HADOOP-15205
> URL: https://issues.apache.org/jira/browse/HADOOP-15205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.5, 3.0.0
>Reporter: Zoltan Haindrich
>Priority: Major
>
> I wanted to use the source attachment; however, it looks like since 2.7.5 that 
> artifact has not been present at Maven Central. The last release which had 
> source attachments / javadocs was 2.7.4:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.5/
> This seems not to be limited to mapreduce, as the same change is present for 
> yarn-common as well:
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-yarn-common/2.7.5/
> and also hadoop-common
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.4/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/2.7.5/
> http://central.maven.org/maven2/org/apache/hadoop/hadoop-common/3.0.0/






[jira] [Commented] (HADOOP-14667) Flexible Visual Studio support

2018-04-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437666#comment-16437666
 ] 

Brahma Reddy Battula commented on HADOOP-14667:
---

Don't we need to mention in the release notes that *Windows SDK 7.1* will no 
longer be supported?

> Flexible Visual Studio support
> --
>
> Key: HADOOP-14667
> URL: https://issues.apache.org/jira/browse/HADOOP-14667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-14667.00.patch, HADOOP-14667.01.patch, 
> HADOOP-14667.02.patch, HADOOP-14667.03.patch, HADOOP-14667.04.patch, 
> HADOOP-14667.05.patch
>
>
> Is it time to upgrade the Windows native project files to use something more 
> modern than Visual Studio 2010?






[jira] [Updated] (HADOOP-15330) Remove jdk1.7 profile from hadoop-annotations module

2018-04-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15330:
---
Fix Version/s: 3.1.1

Cherry-picked to branch-3.1.
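
For reference, the cherry-pick flow behind a comment like this is roughly the 
following, sketched against a throwaway repository with made-up history (the 
real commit hash and any conflict handling are omitted):

```shell
# Minimal cherry-pick walkthrough: land a fix on the main line, then
# apply the same commit onto the branch-3.1 release branch.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > file.txt
git add file.txt && git commit -qm "base"
git branch branch-3.1                  # release branch forks here
echo fix >> file.txt
git add file.txt && git commit -qm "HADOOP-15330. Remove jdk1.7 profile from hadoop-annotations module"
fix_commit=$(git rev-parse HEAD)
git checkout -q branch-3.1
git cherry-pick "$fix_commit"          # the "cherry-picked to branch-3.1" step
git log -1 --format=%s                 # branch-3.1 now carries the fix
```

After the cherry-pick, the release branch's tip has the same subject and 
content change as the original fix commit.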

> Remove jdk1.7 profile from hadoop-annotations module
> 
>
> Key: HADOOP-15330
> URL: https://issues.apache.org/jira/browse/HADOOP-15330
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15330.001.patch
>
>
> Java 7 is not supported in Hadoop 3. Let's remove the profile.






[jira] [Updated] (HADOOP-15332) Fix typos in hadoop-aws markdown docs

2018-04-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15332:
---
Fix Version/s: 3.1.1

Cherry-picked to branch-3.1.

> Fix typos in hadoop-aws markdown docs
> -
>
> Key: HADOOP-15332
> URL: https://issues.apache.org/jira/browse/HADOOP-15332
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15332.001.patch, HADOOP-15332.patch
>
>
> While reading through 
> https://github.com/apache/hadoop/tree/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws
>  I've found some very obvious typos, and I thought it would be a nice 
> improvement to fix those.






[jira] [Updated] (HADOOP-15331) Fix a race condition causing parsing error of java.io.BufferedInputStream in class org.apache.hadoop.conf.Configuration

2018-04-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15331:
---
Fix Version/s: 3.1.1

Cherry-picked to branch-3.1.

> Fix a race condition causing parsing error of java.io.BufferedInputStream in 
> class org.apache.hadoop.conf.Configuration
> ---
>
> Key: HADOOP-15331
> URL: https://issues.apache.org/jira/browse/HADOOP-15331
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0, 2.10.0
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15331.000.patch, HADOOP-15331.001.patch
>
>
> There is a race condition in the way Hadoop handles the Configuration class. 
> The scenario is the following. Let's assume that there are two threads 
> sharing the same Configuration class. One adds some resources to the 
> configuration, while the other one clones it. Resources are loaded lazily in 
> a deferred call to {{loadResources()}}. If the cloning happens after adding 
> the resources but before parsing them, some temporary resources like input 
> stream pointers are cloned. Eventually both copies will load the input stream 
> resources pointing to the same input streams. One parses the input stream XML 
> and closes it, updating its own copy of the resource. The other one has 
> another pointer to the same input stream. When it tries to load it, it will 
> crash with a stream closed exception.
> Here is an example unit test:
> {code:java}
> @Test
> public void testResourceRace() {
>   InputStream is =
>   new BufferedInputStream(new ByteArrayInputStream(
> "<configuration/>".getBytes()));
>   Configuration conf = new Configuration();
>   // Thread 1
>   conf.addResource(is);
>   // Thread 2
>   Configuration confClone = new Configuration(conf);
>   // Thread 2
>   confClone.get("firstParse");
>   // Thread 1
>   conf.get("secondParse");
> }{code}
> Example real world stack traces:
> {code:java}
> 2018-02-28 08:23:19,589 ERROR org.apache.hadoop.conf.Configuration: error 
> parsing conf java.io.BufferedInputStream@7741d346
> com.ctc.wstx.exc.WstxIOException: Stream closed
>   at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:578)
>   at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
>   at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2803)
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2853)
>   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2817)
>   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2689)
>   at org.apache.hadoop.conf.Configuration.get(Configuration.java:1420)
>   at 
> org.apache.hadoop.security.authorize.ServiceAuthorizationManager.refreshWithLoadedConfiguration(ServiceAuthorizationManager.java:161)
>   at 
> org.apache.hadoop.ipc.Server.refreshServiceAclWithLoadedConfiguration(Server.java:607)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshServiceAcls(AdminService.java:586)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:188)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:165)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1231)
>   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1421)
> {code}
> Another example:
> {code:java}
> 2018-02-28 08:23:20,702 ERROR org.apache.hadoop.conf.Configuration: error 
> parsing conf java.io.BufferedInputStream@7741d346
> com.ctc.wstx.exc.WstxIOException: Stream closed
>   at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:578)
>   at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
>   at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2803)
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2853)
>   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2817)
>   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2689)
>   at org.apache.hadoop.conf.Configuration.set(Configuration.java:1326)
>   at 

[jira] [Updated] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2018-04-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-15062:
---
Fix Version/s: 3.1.1

Cherry-picked to branch-3.1.

> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15062.000.patch
>
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43)
> {code}
> This happened due to the following openssl change:
> https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188






[jira] [Updated] (HADOOP-14667) Flexible Visual Studio support

2018-04-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14667:
---
Fix Version/s: 3.1.1

Cherry-picked to branch-3.1.

> Flexible Visual Studio support
> --
>
> Key: HADOOP-14667
> URL: https://issues.apache.org/jira/browse/HADOOP-14667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-14667.00.patch, HADOOP-14667.01.patch, 
> HADOOP-14667.02.patch, HADOOP-14667.03.patch, HADOOP-14667.04.patch, 
> HADOOP-14667.05.patch
>
>
> Is it time to upgrade the Windows native project files to use something more 
> modern than Visual Studio 2010?






[jira] [Commented] (HADOOP-15382) Log kinit output in credential renewal thread

2018-04-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437617#comment-16437617
 ] 

genericqa commented on HADOOP-15382:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 44s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15382 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918971/HADOOP-15382.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d73ee03d8bc2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0725953 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14489/testReport/ |
| Max. process+thread count | 1436 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14489/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Log kinit output in credential 

[jira] [Commented] (HADOOP-15239) S3ABlockOutputStream.flush() be no-op when stream closed

2018-04-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437559#comment-16437559
 ] 

genericqa commented on HADOOP-15239:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
2s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-15239 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918968/HADOOP-15239.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 25cfc5e8ae71 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0725953 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14487/testReport/ |
| Max. process+thread count | 289 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14487/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3ABlockOutputStream.flush() be no-op when stream closed
> 
>
> Key: HADOOP-15239
> URL: 

[jira] [Comment Edited] (HADOOP-15362) Review of Configuration.java

2018-04-13 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437553#comment-16437553
 ] 

BELUGA BEHR edited comment on HADOOP-15362 at 4/13/18 4:42 PM:
---

[~ajayydv], the generated checkstyle warnings are "LineLength" warnings, and 
we're talking just a couple of characters past an arguably outdated limit of 
80. Shortening the lines would harm readability purely for the sake of the 
line-length cap. Please consider this patch for inclusion, as it fixes many 
more checkstyle issues than it adds.


was (Author: belugabehr):
[~ajayydv], the generated Checkstyles are "LineLength" warnings and we're 
talking just a couple of characters past a dated number of 80.  To shorten the 
lines harms readability just for line-length cap.  Please consider this patch 
for inclusions as it fixes many check style issue than it adds.

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch, HADOOP-15362.5.patch
>
>
> * Various improvements
>  * Fix a lot of checks style errors
> When I ran a recent debug log against a MR job, I was spammed from the 
> following messages.  I ask that we move them to 'trace' as there is already a 
> debug level logging preceding them.
> {code:java}
> LOG.debug("Handling deprecation for all properties in config");
> foreach item {
> -  LOG.debug("Handling deprecation for " + (String)item);
> +  LOG.trace("Handling deprecation for {}", item);
> }{code}






[jira] [Commented] (HADOOP-15362) Review of Configuration.java

2018-04-13 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437553#comment-16437553
 ] 

BELUGA BEHR commented on HADOOP-15362:
--

[~ajayydv], the generated checkstyle warnings are "LineLength" warnings, and 
we're talking just a couple of characters past an arguably outdated limit of 
80. Shortening the lines would harm readability purely for the sake of the 
line-length cap. Please consider this patch for inclusion, as it fixes many 
more checkstyle issues than it adds.

> Review of Configuration.java
> 
>
> Key: HADOOP-15362
> URL: https://issues.apache.org/jira/browse/HADOOP-15362
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15362.1.patch, HADOOP-15362.2.patch, 
> HADOOP-15362.3.patch, HADOOP-15362.4.patch, HADOOP-15362.5.patch
>
>
> * Various improvements
>  * Fix a lot of checks style errors
> When I ran a recent debug log against a MR job, I was spammed from the 
> following messages.  I ask that we move them to 'trace' as there is already a 
> debug level logging preceding them.
> {code:java}
> LOG.debug("Handling deprecation for all properties in config");
> foreach item {
> -  LOG.debug("Handling deprecation for " + (String)item);
> +  LOG.trace("Handling deprecation for {}", item);
> }{code}
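The suggested demotion relies on SLF4J-style parameterized logging, where the 
message is only built when the level is actually enabled. A minimal, 
stdlib-only sketch of that guard — java.util.logging stands in for the SLF4J 
logger Hadoop actually uses, and the method name is made up for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class DeprecationLogDemo {
    private static final Logger LOG = Logger.getLogger("conf");

    // Returns how many messages were actually formatted at the given level,
    // mirroring LOG.trace("Handling deprecation for {}", item).
    static int handleDeprecation(List<String> items, Level level) {
        int formatted = 0;
        for (String item : items) {
            if (LOG.isLoggable(level)) {
                // The message string is built only when the level is enabled.
                LOG.log(level, "Handling deprecation for {0}", item);
                formatted++;
            }
        }
        return formatted;
    }

    public static void main(String[] args) {
        List<String> items = Arrays.asList("a", "b", "c");
        // The default logger level is INFO: FINE ("debug") and FINEST
        // ("trace") messages are skipped without ever building the string.
        System.out.println(handleDeprecation(items, Level.INFO));   // 3
        System.out.println(handleDeprecation(items, Level.FINEST)); // 0
    }
}
```

With per-item messages demoted to trace, a debug run of an MR job no longer 
pays the formatting cost or the log spam for each property.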






[jira] [Commented] (HADOOP-15378) Hadoop client unable to relogin because a remote DataNode has an incorrect krb5.conf

2018-04-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437545#comment-16437545
 ] 

Wei-Chiu Chuang commented on HADOOP-15378:
--

Thank you, Steve. BTW your book is valuable in troubleshooting this issue.

No, unfortunately the current CDH5 doesn't have KDiag (I thought of backporting 
it but forgot).
We did ask for the JDK Kerberos debug output and the Hadoop debug log, but we 
corrected the invalid krb5.conf before the debug logging was put in place.

> Hadoop client unable to relogin because a remote DataNode has an incorrect 
> krb5.conf
> 
>
> Key: HADOOP-15378
> URL: https://issues.apache.org/jira/browse/HADOOP-15378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
> Environment: CDH5.8.3, Kerberized, Impala
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> This is a very weird bug.
> We received a report where a Hadoop client (Impala Catalog server) failed to 
> relogin and crashed every several hours. Initial indication suggested the 
> symptom matched HADOOP-13433.
> But after we patched HADOOP-13433 (as well as HADOOP-15143), Impala Catalog 
> server still kept crashing.
>  
> {noformat}
> W0114 05:49:24.676743 41444 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) 
> cause:org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException):
>  Failure to initialize security context
> W0114 05:49:24.680363 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host2.example@example.com), remove and destroy it.
> W0114 05:49:24.680501 41444 UserGroupInformation.java:1137] The first 
> kerberos ticket is not TGT(the server principal is 
> hdfs/host3.example@example.com), remove and destroy it.
> W0114 05:49:24.680593 41444 UserGroupInformation.java:1153] Warning, no 
> kerberos ticket found while attempting to renew ticket{noformat}
> The error “Failure to initialize security context” is suspicious here. 
> Catalogd was unable to log in because of a Kerberos issue. The JDK expects 
> the first Kerberos ticket of a principal to be a TGT; however, after this 
> error, because the login did not succeed, the first ticket was no longer a 
> TGT. The HADOOP-13433 patch removed the principal’s other tickets, because it 
> expects the TGT to be in the principal’s ticket cache, which is untrue in 
> this case. So finally, it removed all tickets.
> And then
> {noformat}
> W0114 05:49:24.681946 41443 UserGroupInformation.java:1838] 
> PriviledgedActionException as:impala/host1.example@example.com 
> (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)]
> {noformat}
> The error “Failed to find any Kerberos tgt” is typically an indication that 
> the user’s Kerberos ticket has expired. However, that’s definitely not the 
> case here, since the ticket was only a little over 8 hours old.
> After we patched HADOOP-13433, the error handling code exhibited NPE, as 
> reported in HADOOP-15143.
>  
> {code:java}
> I0114 05:50:26.758565 6384 RetryInvocationHandler.java:148] Exception while 
> invoking listCachePools of class ClientNamenodeProtocolTranslatorPB over 
> host4.example.com/10.0.121.66:8020 after 2 fail over attempts. Trying to fail 
> over immediately. Java exception follows: java.io.IOException: Failed on 
> local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "host1.example.com/10.0.121.45"; destination host 
> is: "host4.example.com":8020; at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1506) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1439) at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>  at com.sun.proxy.$Proxy9.listCachePools(Unknown Source) at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listCachePools(ClientNamenodeProtocolTranslatorPB.java:1261)
>  at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>  at com.sun.proxy.$Proxy10.listCachePools(Unknown Source) at 
> 

[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2018-04-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437525#comment-16437525
 ] 

Hudson commented on HADOOP-14445:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13993 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13993/])
HDFS-13430. Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445. (xiao: 
rev 650359371175fba416331e73aa03d2a96ccb90e5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java


> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, 
> HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider 
> does not share delegation tokens (a client uses the KMS address/port as the 
> key for the delegation token).
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
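A sketch of why the lookup misses under HA, assuming only the host:port keying 
shown in openConnection() above — the class and helper here are simplified 
stand-ins, not the actual Hadoop SecurityUtil/Credentials classes:

```java
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

public class TokenServiceKeyDemo {
    // Simplified stand-in for SecurityUtil.buildTokenService: the service
    // key is derived from the URL's host and port.
    static String buildTokenService(URL url) {
        int port = url.getPort() == -1 ? url.getDefaultPort() : url.getPort();
        return url.getHost() + ":" + port;
    }

    public static void main(String[] args) throws MalformedURLException {
        Map<String, String> credentials = new HashMap<>();
        // A token obtained from the first KMS instance is stored under that
        // instance's own host:port key...
        credentials.put(
            buildTokenService(new URL("https://kms1.example.com:16000/kms")),
            "token-from-kms1");
        // ...so a lookup keyed by the second instance's address misses, even
        // though both instances could verify the token via the shared secret.
        String key2 =
            buildTokenService(new URL("https://kms2.example.com:16000/kms"));
        System.out.println(credentials.get(key2)); // null: token not shared
    }
}
```

This is why either the client must key tokens by a logical KMS service name, 
or the documentation must stop implying the token is usable across instances.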






[jira] [Commented] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437515#comment-16437515
 ] 

genericqa commented on HADOOP-15180:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  9m 
48s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 |
| JIRA Issue | HADOOP-15180 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918967/HADOOP-15180-branch-2-002.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux fc6177372a37 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / a772108 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14488/artifact/out/branch-mvninstall-root.txt
 |
| shellcheck | v0.4.7 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14488/testReport/ |
| Max. process+thread count | 77 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14488/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out 
> file
> ---
>
> Key: HADOOP-15180
> URL: https://issues.apache.org/jira/browse/HADOOP-15180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.2
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HADOOP-15180-branch-2-002.patch, 
> HADOOP-15180_branch-2.diff
>
>
> Whenever the balancer starts, it redirects its stdout to the .out log 
> file. While the balancer writes its output to that file, the startup 
> script also tries to append the ulimit output.
> {noformat}
>  # capture the ulimit output
> if [ "true" = "$starting_secure_dn" ]; then
>   echo "ulimit -a for secure datanode user $HADOOP_SECURE_DN_USER" >> $log
>   # capture the ulimit info for the appropriate user
>   su --shell=/bin/bash $HADOOP_SECURE_DN_USER -c 'ulimit -a' >> $log 2>&1
> elif [ "true" = "$starting_privileged_nfs" ]; then
> echo "ulimit -a for privileged nfs user $HADOOP_PRIVILEGED_NFS_USER" 
> >> $log
> su 

[jira] [Updated] (HADOOP-15379) Make IrqHandler.bind() public

2018-04-13 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15379:

Hadoop Flags: Reviewed

> Make IrqHandler.bind() public
> -
>
> Key: HADOOP-15379
> URL: https://issues.apache.org/jira/browse/HADOOP-15379
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15379.00.patch
>
>
> {{org.apache.hadoop.service.launcher.IrqHandler.bind()}} is package private
> this means you can create an {{Interrupted}} handler in a different package, 
> but you can't bind it to a signal.






[jira] [Commented] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode

2018-04-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437466#comment-16437466
 ] 

genericqa commented on HADOOP-14756:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 3 unchanged - 0 fixed = 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
51s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HADOOP-14756 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12918958/HADOOP-14756.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9768a2008245 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0725953 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14486/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14486/testReport/ |
| Max. process+thread count | 312 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14486/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (HADOOP-15382) Log kinit output in credential renewal thread

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15382:

Attachment: HADOOP-15382.001.patch

> Log kinit output in credential renewal thread
> -
>
> Key: HADOOP-15382
> URL: https://issues.apache.org/jira/browse/HADOOP-15382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15382.001.patch
>
>
> We currently run the kinit command in a thread to renew Kerberos credentials 
> periodically. 
> {code:java}
> Shell.execCommand(cmd, "-R");
> if (LOG.isDebugEnabled()) {
>   LOG.debug("renewed ticket");
> }
> {code}
> It seems useful to log the output of the kinit too.
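A rough sketch of the idea: Shell.execCommand returns the command's output, so 
the renewal thread can log it instead of discarding it. ProcessBuilder from 
the stdlib stands in for Hadoop's Shell here, and 'echo' stands in for kinit 
so the sketch runs anywhere — both are illustrative substitutions:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RenewalLogDemo {
    // Runs a command and returns its combined stdout/stderr, analogous to
    // what Hadoop's Shell.execCommand gives back as a String.
    static String execCommand(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // In the renewal thread this would be the configured kinit command,
        // e.g. execCommand(kinitCmd, "-R"); 'echo' is a placeholder.
        String output = execCommand("echo", "renewed ticket");
        // The proposed improvement: include the command output in the log
        // line rather than logging only a fixed "renewed ticket" message.
        System.out.println("Renewed ticket. kinit output: " + output.trim());
    }
}
```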






[jira] [Updated] (HADOOP-15382) Log kinit output in credential renewal thread

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15382:

Status: Patch Available  (was: Open)

> Log kinit output in credential renewal thread
> -
>
> Key: HADOOP-15382
> URL: https://issues.apache.org/jira/browse/HADOOP-15382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15382.001.patch
>
>
> We currently run the kinit command in a thread to renew Kerberos credentials 
> periodically. 
> {code:java}
> Shell.execCommand(cmd, "-R");
> if (LOG.isDebugEnabled()) {
>   LOG.debug("renewed ticket");
> }
> {code}
> It seems useful to log the output of the kinit too.






[jira] [Updated] (HADOOP-15239) S3ABlockOutputStream.flush() be no-op when stream closed

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15239:

Attachment: HADOOP-15239.002.patch

> S3ABlockOutputStream.flush() be no-op when stream closed
> 
>
> Key: HADOOP-15239
> URL: https://issues.apache.org/jira/browse/HADOOP-15239
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15239.001.patch, HADOOP-15239.002.patch
>
>
> when you call flush() on a closed S3A output stream, you get a stack trace. 
> This can cause problems in code with race conditions across threads, e.g. 
> FLINK-8543. 
> we could make it log@warn "stream closed" rather than raise an IOE. It's just 
> a hint, after all.
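A minimal sketch of the proposed behavior, assuming a plain OutputStream 
wrapper rather than the real S3ABlockOutputStream (the class name is made up): 
flush() after close() degrades to a warning instead of an IOException, while 
write() after close() still fails.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class QuietFlushStream extends OutputStream {
    private final OutputStream inner;
    private volatile boolean closed;

    QuietFlushStream(OutputStream inner) { this.inner = inner; }

    @Override public void write(int b) throws IOException {
        // Writes to a closed stream are still a hard error.
        if (closed) throw new IOException("stream closed");
        inner.write(b);
    }

    @Override public void flush() throws IOException {
        if (closed) {
            // Proposed behavior: log a hint instead of raising an IOE, so
            // racy flush-after-close callers (e.g. FLINK-8543) don't fail.
            System.err.println("WARN: flush() called on a closed stream");
            return;
        }
        inner.flush();
    }

    @Override public void close() throws IOException {
        closed = true;
        inner.close();
    }

    public static void main(String[] args) throws IOException {
        QuietFlushStream s = new QuietFlushStream(new ByteArrayOutputStream());
        s.write('x');
        s.close();
        s.flush(); // no-op with a warning, rather than a stack trace
        System.out.println("flush after close did not throw");
    }
}
```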






[jira] [Updated] (HADOOP-15239) S3ABlockOutputStream.flush() be no-op when stream closed

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15239:

Status: Open  (was: Patch Available)

> S3ABlockOutputStream.flush() be no-op when stream closed
> 
>
> Key: HADOOP-15239
> URL: https://issues.apache.org/jira/browse/HADOOP-15239
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0, 2.7.5, 2.8.3, 2.9.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15239.001.patch, HADOOP-15239.002.patch
>
>
> when you call flush() on a closed S3A output stream, you get a stack trace. 
> This can cause problems in code with race conditions across threads, e.g. 
> FLINK-8543. 
> we could make it log@warn "stream closed" rather than raise an IOE. It's just 
> a hint, after all.






[jira] [Commented] (HADOOP-15239) S3ABlockOutputStream.flush() be no-op when stream closed

2018-04-13 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437408#comment-16437408
 ] 

Gabor Bota commented on HADOOP-15239:
-

Uploaded the patch with the test for the fix.

> S3ABlockOutputStream.flush() be no-op when stream closed
> 
>
> Key: HADOOP-15239
> URL: https://issues.apache.org/jira/browse/HADOOP-15239
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15239.001.patch, HADOOP-15239.002.patch
>
>
> when you call flush() on a closed S3A output stream, you get a stack trace. 
> This can cause problems in code with race conditions across threads, e.g. 
> FLINK-8543. 
> we could make it log@warn "stream closed" rather than raise an IOE. It's just 
> a hint, after all.






[jira] [Updated] (HADOOP-15239) S3ABlockOutputStream.flush() be no-op when stream closed

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15239:

Status: Patch Available  (was: Open)

> S3ABlockOutputStream.flush() be no-op when stream closed
> 
>
> Key: HADOOP-15239
> URL: https://issues.apache.org/jira/browse/HADOOP-15239
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0, 2.7.5, 2.8.3, 2.9.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15239.001.patch, HADOOP-15239.002.patch
>
>
> when you call flush() on a closed S3A output stream, you get a stack trace. 
> This can cause problems in code with race conditions across threads, e.g. 
> FLINK-8543. 
> we could make it log@warn "stream closed" rather than raise an IOE. It's just 
> a hint, after all.






[jira] [Updated] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Ranith Sardar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HADOOP-15180:
---
Attachment: HADOOP-15180-branch-2-002.patch

> branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out 
> file
> ---
>
> Key: HADOOP-15180
> URL: https://issues.apache.org/jira/browse/HADOOP-15180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.2
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HADOOP-15180-branch-2-002.patch, 
> HADOOP-15180_branch-2.diff
>
>
> Whenever the balancer starts, it redirects its stdout to the .out log 
> file. While the balancer writes its output to that file, the startup 
> script also tries to append the ulimit output.
> {noformat}
>  # capture the ulimit output
> if [ "true" = "$starting_secure_dn" ]; then
>   echo "ulimit -a for secure datanode user $HADOOP_SECURE_DN_USER" >> $log
>   # capture the ulimit info for the appropriate user
>   su --shell=/bin/bash $HADOOP_SECURE_DN_USER -c 'ulimit -a' >> $log 2>&1
> elif [ "true" = "$starting_privileged_nfs" ]; then
> echo "ulimit -a for privileged nfs user $HADOOP_PRIVILEGED_NFS_USER" 
> >> $log
> su --shell=/bin/bash $HADOOP_PRIVILEGED_NFS_USER -c 'ulimit -a' >> 
> $log 2>&1
> else
>   echo "ulimit -a for user $USER" >> $log
>   ulimit -a >> $log 2>&1
> fi
> sleep 3;
> if ! ps -p $! > /dev/null ; then
>   exit 1
> fi
> {noformat}
> But the problem is that the first few lines of the ulimit output are 
> overwritten by the balancer's log output.
> {noformat}
> vm1:/opt/install/hadoop/namenode/sbin # cat 
> /opt/HA/AIH283/install/hadoop/namenode/logs/hadoop-root-balancer-vm1.out
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> The cluster is balanced. Exiting...
> Jan 9, 2018 6:26:26 PM0  0 B 0 B  
>   0 B
> Jan 9, 2018 6:26:26 PM   Balancing took 3.446 seconds
> x memory size (kbytes, -m) 13428300
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 127350
> virtual memory  (kbytes, -v) 15992160
> file locks  (-x) unlimited
> {noformat}
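The garbled .out file above ("x memory size" is the clipped tail of "max 
memory size") can be reproduced with two file handles: the daemon's stdout 
descriptor is opened, and the file truncated, before the script appends the 
ulimit lines, so its offset is still 0 when the daemon writes. A stdlib-only 
sketch of that offset behavior — the file contents and class name are made up:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OutFileClobberDemo {
    // 'daemonFd' models the daemon's redirected stdout: opened (truncating)
    // before the script appends, so its per-descriptor offset stays at 0.
    static String simulate(Path out) throws IOException {
        try (FileOutputStream daemonFd = new FileOutputStream(out.toFile());
             FileOutputStream scriptAppend =
                 new FileOutputStream(out.toFile(), /* append = */ true)) {
            // The script appends the ulimit capture ('>> $log'):
            scriptAppend.write(
                "ulimit -a for user root\nopen files (-n) 1024\n".getBytes());
            // The daemon then writes from offset 0, overwriting in place:
            daemonFd.write("The cluster is balanced. Exiting...\n".getBytes());
        }
        return new String(Files.readAllBytes(out));
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("hadoop-root-balancer", ".out");
        // The daemon output now starts the file; only the tail of the
        // ulimit capture survives, matching the report above.
        System.out.print(simulate(out));
        Files.delete(out);
    }
}
```

Appending the daemon's stdout (or capturing ulimit before the redirection is 
set up) avoids the clobbering.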






[jira] [Comment Edited] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Ranith Sardar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437398#comment-16437398
 ] 

Ranith Sardar edited comment on HADOOP-15180 at 4/13/18 2:57 PM:
-

Thanks [~brahmareddy] for your review, and thanks [~vinayrpet] for assigning this to me.
 I have updated the patch name. Please review it.


was (Author: ranith):
Thanks [~brahmareddy] for your review and Thanks [~vinayrpet] for assigning me.
I have updated the patch name. Please review it once.

> branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out 
> file
> ---
>
> Key: HADOOP-15180
> URL: https://issues.apache.org/jira/browse/HADOOP-15180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.2
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Minor
> Attachments: HADOOP-15180_branch-2.diff
>
>
> Whenever the balancer starts, it redirects its stdout to the .out log 
> file. While the balancer writes its output to that file, the startup 
> script also tries to append the ulimit output.
> {noformat}
>  # capture the ulimit output
> if [ "true" = "$starting_secure_dn" ]; then
>   echo "ulimit -a for secure datanode user $HADOOP_SECURE_DN_USER" >> $log
>   # capture the ulimit info for the appropriate user
>   su --shell=/bin/bash $HADOOP_SECURE_DN_USER -c 'ulimit -a' >> $log 2>&1
> elif [ "true" = "$starting_privileged_nfs" ]; then
> echo "ulimit -a for privileged nfs user $HADOOP_PRIVILEGED_NFS_USER" 
> >> $log
> su --shell=/bin/bash $HADOOP_PRIVILEGED_NFS_USER -c 'ulimit -a' >> 
> $log 2>&1
> else
>   echo "ulimit -a for user $USER" >> $log
>   ulimit -a >> $log 2>&1
> fi
> sleep 3;
> if ! ps -p $! > /dev/null ; then
>   exit 1
> fi
> {noformat}
> But the problem is that the first few lines of the ulimit output are 
> overwritten by the balancer's log output.
> {noformat}
> vm1:/opt/install/hadoop/namenode/sbin # cat 
> /opt/HA/AIH283/install/hadoop/namenode/logs/hadoop-root-balancer-vm1.out
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> The cluster is balanced. Exiting...
> Jan 9, 2018 6:26:26 PM0  0 B 0 B  
>   0 B
> Jan 9, 2018 6:26:26 PM   Balancing took 3.446 seconds
> x memory size (kbytes, -m) 13428300
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 127350
> virtual memory  (kbytes, -v) 15992160
> file locks  (-x) unlimited
> {noformat}






[jira] [Commented] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Ranith Sardar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437398#comment-16437398
 ] 

Ranith Sardar commented on HADOOP-15180:


Thanks [~brahmareddy] for the review, and thanks [~vinayrpet] for assigning
this to me. I have updated the patch name; please review it.







[jira] [Updated] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-15180:
--
Component/s: scripts







[jira] [Comment Edited] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437358#comment-16437358
 ] 

Brahma Reddy Battula edited comment on HADOOP-15180 at 4/13/18 2:49 PM:


[~RANith] thanks for reporting this issue. The changes LGTM.

Can you please rename the patch to trigger Jenkins (upload it as
HADOOP-15180-branch-2)?


was (Author: brahmareddy):
[~RANith] thanks for reporting this issue.. Changes LGTM.

can you please update the patch to trigger the Jenkins ( upload patch like 
HADOOP-15180-branch-2-**).







[jira] [Commented] (HADOOP-15180) branch-2 : daemon processes' sysout overwrites 'ulimit -a' in daemon's out file

2018-04-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437358#comment-16437358
 ] 

Brahma Reddy Battula commented on HADOOP-15180:
---

[~RANith] thanks for reporting this issue. The changes LGTM.

Can you please rename the patch to trigger Jenkins (upload it as
HADOOP-15180-branch-2-**)?







[jira] [Comment Edited] (HADOOP-15239) S3ABlockOutputStream.flush() be no-op when stream closed

2018-04-13 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431323#comment-16431323
 ] 

Gabor Bota edited comment on HADOOP-15239 at 4/13/18 2:21 PM:
--

Tests ran on us-west-2 successfully. Should I submit patches for other branches 
as well? (2.7.5, 2.8.3, 2.9.0)


was (Author: gabor.bota):
Tests ran on us-west-2 successfully. Should I submit patches for other as well? 
(2.7.5, 2.8.3, 2.9.0)

> S3ABlockOutputStream.flush() be no-op when stream closed
> 
>
> Key: HADOOP-15239
> URL: https://issues.apache.org/jira/browse/HADOOP-15239
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15239.001.patch
>
>
> when you call flush() on a closed S3A output stream, you get a stack trace. 
> This can cause problems in code with race conditions across threads, e.g. 
> FLINK-8543. 
> we could make it log@warn "stream closed" rather than raise an IOE. It's just 
> a hint, after all.






[jira] [Commented] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode

2018-04-13 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437346#comment-16437346
 ] 

Gabor Bota commented on HADOOP-14756:
-

I've uploaded a new patch, and I also have some questions:
* The MetadataStore#getDiagnostics javadoc says the information in the
returned map is for debugging only. Is that still valid? If so, is it really a
good place to store capabilities?
* I used a final class with a private constructor for MetadataStoreCapabilities
to store the constants, since that seems to be the general pattern in the
project (e.g. org.apache.hadoop.fs.s3a.Constants). Is that sufficient, or
should I use an interface, where all String constants are public static final
by default? Which is preferred?

(Test & verify ran successfully against us-west-2 for the patch.)
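For reference, the final-class-with-private-constructor pattern discussed above looks like this (the class shape matches the description; the constant name and value here are illustrative, not the actual ones from the patch):

```java
// Hypothetical sketch of a constants holder in the style of
// org.apache.hadoop.fs.s3a.Constants; names and values are illustrative.
public final class MetadataStoreCapabilities {
    // Private constructor: this class only holds constants, never instances.
    private MetadataStoreCapabilities() {
    }

    /** Capability key: the store persists the authoritative-directory bit. */
    public static final String PERSISTS_AUTHORITATIVE_BIT =
        "metadatastore.capability.persists.authoritative.bit";
}
```

An interface would make the `public static final` modifiers implicit, but the final class prevents the constant-interface anti-pattern of implementing the interface just to inherit its constants.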

> S3Guard: expose capability query in MetadataStore and add tests of 
> authoritative mode
> -
>
> Key: HADOOP-14756
> URL: https://issues.apache.org/jira/browse/HADOOP-14756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-14756.001.patch, HADOOP-14756.002.patch
>
>
> {{MetadataStoreTestBase.testListChildren}} would be improved with the ability 
> to query the features offered by the store, and the outcome of {{put()}}, so 
> probe the correctness of the authoritative mode
> # Add predicate to MetadataStore interface  
> {{supportsAuthoritativeDirectories()}} or similar
> # If #1 is true, assert that directory is fully cached after changes
> # Add "isNew" flag to MetadataStore.put(DirListingMetadata); use to verify 
> when changes are made






[jira] [Updated] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14756:

Status: Patch Available  (was: In Progress)







[jira] [Commented] (HADOOP-11640) add user defined delimiter support to Configuration

2018-04-13 Thread Jim Brennan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437340#comment-16437340
 ] 

Jim Brennan commented on HADOOP-11640:
--

[~in-chief], [~aw], [~chris.douglas], [~cmccabe], given the [MAPREDUCE-7069] 
solution, perhaps we can close this one as Not Doing?  Is there still a need 
for a more general solution?


> add user defined delimiter support to Configuration
> ---
>
> Key: HADOOP-11640
> URL: https://issues.apache.org/jira/browse/HADOOP-11640
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Xiaoshuang LU
>Assignee: Xiaoshuang LU
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11640.patch
>
>
> As mentioned by org.apache.hadoop.conf.Configuration.getStrings ("Get the 
> comma delimited values of the name property as an array of Strings"), only 
> comma separated strings can be used.  It would be much better if user defined 
> separators are supported.
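To illustrate the request, here is a hedged sketch of what a delimiter-aware variant of {{getStrings}} could look like (the class and method here are hypothetical illustrations, not Hadoop API):

```java
import java.util.Arrays;
import java.util.regex.Pattern;

public class DelimiterDemo {
    // Like Configuration.getStrings, but splits on a caller-supplied
    // delimiter instead of the hard-coded comma; trims tokens and
    // drops empty ones.
    static String[] getStrings(String value, String delimiter) {
        if (value == null) {
            return null;
        }
        return Arrays.stream(value.split(Pattern.quote(delimiter)))
            .map(String::trim)
            .filter(s -> !s.isEmpty())
            .toArray(String[]::new);
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(getStrings(" a : b : c ", ":")));
        // → [a, b, c]
    }
}
```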






[jira] [Updated] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14756:

Attachment: HADOOP-14756.002.patch







[jira] [Updated] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14756:

Attachment: (was: HADOOP-14756.002.patch)







[jira] [Updated] (HADOOP-14756) S3Guard: expose capability query in MetadataStore and add tests of authoritative mode

2018-04-13 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14756:

Attachment: HADOOP-14756.002.patch







[jira] [Resolved] (HADOOP-14584) WASB to support high-performance commit protocol

2018-04-13 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14584.
-
Resolution: Won't Fix

wasb is fast enough without this

> WASB to support high-performance commit protocol
> 
>
> Key: HADOOP-14584
> URL: https://issues.apache.org/jira/browse/HADOOP-14584
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Steve Loughran
>Priority: Major
>
> Once MAPREDUCE-6823 allows FileOutputFormat to take alternate committers, and 
> HADOOP-13786 provides the first implementation and tests of a blobstore 
> specific committer, WASB could do its own. The same strategy: upload 
> uncommitted blobs and coalesce at the end should work; the same marshalling 
> of lists of etags *probably* works the same, though there will inevitably be 
> some subtle differences.






[jira] [Commented] (HADOOP-15379) Make IrqHandler.bind() public

2018-04-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16437107#comment-16437107
 ] 

Steve Loughran commented on HADOOP-15379:
-

thanks everyone. FWIW I found this while trying to write a special handler
to intercept control-C interrupts in the spark-shell; I keep accidentally
exiting it. My current workaround is to place a class into the package, which
is a shame, as it would otherwise be a few lines in the spark-shell itself:
{code}
import org.apache.hadoop.service.launcher.IrqHandler

object irq extends IrqHandler.Interrupted {
  override def interrupted(interruptData: IrqHandler.InterruptData): Unit = {}
}

new IrqHandler("INT", irq).bind()

{code}

> Make IrqHandler.bind() public
> -
>
> Key: HADOOP-15379
> URL: https://issues.apache.org/jira/browse/HADOOP-15379
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Minor
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15379.00.patch
>
>
> {{org.apache.hadoop.service.launcher.IrqHandler.bind()}} is package private
> this means you can create an {{Interrupted}} handler in a different package, 
> but you can't bind it to a signal.






[jira] [Created] (HADOOP-15384) distcp numListstatusThreads option doesn't get to -delete scan

2018-04-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15384:
---

 Summary: distcp numListstatusThreads option doesn't get to -delete 
scan
 Key: HADOOP-15384
 URL: https://issues.apache.org/jira/browse/HADOOP-15384
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: tools/distcp
Affects Versions: 3.1.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The distcp {{numListstatusThreads}} option isn't applied when configuring the 
GlobbedCopyListing used in {{CopyCommitter.deleteMissing()}}.

This means that for large scans of object stores, performance is significantly 
worse.

Fix: pass the option down from the task conf.






[jira] [Commented] (HADOOP-14970) MiniHadoopClusterManager doesn't respect lack of format option

2018-04-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436986#comment-16436986
 ] 

Hudson commented on HADOOP-14970:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13991 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13991/])
HADOOP-14970. MiniHadoopClusterManager doesn't respect lack of format (shv: rev 
1a407bc9906306801690bc75ff0f0456f8f265fd)
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/MiniHadoopClusterManager.java


> MiniHadoopClusterManager doesn't respect lack of format option
> --
>
> Key: HADOOP-14970
> URL: https://issues.apache.org/jira/browse/HADOOP-14970
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 2.10.0, 2.8.4, 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7
>
> Attachments: HADOOP-14970.000.patch
>
>
> The CLI MiniCluster, {{MiniHadoopClusterManager}}, says that by default it 
> does not format its directories, and provides the {{-format}} option to 
> specify that it should do so. However, it builds its {{MiniDFSCluster}} like:
> {code}
>   dfs = new MiniDFSCluster.Builder(conf).nameNodePort(nnPort)
>   .nameNodeHttpPort(nnHttpPort).numDataNodes(numDataNodes)
>   .startupOption(dfsOpts).build();
> {code}
> {{MiniDFSCluster.Builder}}, by default, sets {{format}} to true, so even 
> though the {{startupOption}} is {{REGULAR}}, it will still format regardless 
> of whether or not the flag is supplied.






[jira] [Updated] (HADOOP-14970) MiniHadoopClusterManager doesn't respect lack of format option

2018-04-13 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-14970:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.3
   2.9.2
   3.1.1
   3.2.0
   2.8.4
   2.10.0
   2.7.7
   Status: Resolved  (was: Patch Available)

I just committed this down to branch-2.7. Thank you [~xkrogen].







[jira] [Commented] (HADOOP-15304) [JDK10] Migrate from com.sun.tools.doclets to the replacement

2018-04-13 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16436911#comment-16436911
 ] 

Takanobu Asanuma commented on HADOOP-15304:
---

Sorry, I need to correct my last comment: I verified {{mvn javadoc:javadoc
--projects hadoop-common-project/hadoop-annotations}} with JDK 10, not the
overall project. Anyway, I also didn't hit the missing-class error for
{{ExcludePrivateAnnotationsStandardDoclet}}.

> [JDK10] Migrate from com.sun.tools.doclets to the replacement
> -
>
> Key: HADOOP-15304
> URL: https://issues.apache.org/jira/browse/HADOOP-15304
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-15304.01.patch, HADOOP-15304.02.patch
>
>
> com.sun.tools.doclets.* packages were removed in Java 10. 
> [https://bugs.openjdk.java.net/browse/JDK-8177511]
> This causes hadoop-annotations module to fail.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Compilation failure: Compilation failure:
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/IncludePublicAnnotationsStandardDoclet.java:[61,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> [ERROR] 
> /Users/ajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/ExcludePrivateAnnotationsStandardDoclet.java:[56,20]
>  cannot find symbol
> [ERROR] symbol:   method 
> validOptions(java.lang.String[][],com.sun.javadoc.DocErrorReporter)
> [ERROR] location: class com.sun.tools.doclets.standard.Standard
> {noformat}


