[jira] [Updated] (HADOOP-15063) IOException will be thrown when read from Aliyun OSS

2017-11-21 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15063:
-
Description: 
An IOException will be thrown in the following case:
1. set part size = n (102400)
2. assume the current position = 0, so partRemaining = 102400
3. call seek(pos = 101802); since pos > position && pos < position + 
partRemaining, the stream skips pos - position bytes, but partRemaining stays 
the same
4. reading more than n - pos bytes then throws an IOException, because the 
stream believes 102400 bytes remain in the part when only 598 actually do.

Current code:
{code:java}
@Override
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    AliyunOSSUtils.skipFully(wrappedStream, pos - position);
    // we need to update partRemaining here
    position = pos;
  } else {
    reopen(pos);
  }
}
{code}
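
A minimal sketch of a possible fix (an assumption about the eventual patch, not 
the committed change): decrement partRemaining by the bytes actually skipped, so 
the per-part accounting stays consistent with the wrapped stream:
{code:java}
@Override
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    long len = pos - position;
    // skip forward inside the current part
    AliyunOSSUtils.skipFully(wrappedStream, len);
    position = pos;
    // keep partRemaining in sync with what is actually left in the part
    partRemaining -= len;
  } else {
    reopen(pos);
  }
}
{code}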

Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run the loop below; the positioned reads repeatedly seek forward within the 
current part, so the stale partRemaining overstates what is actually left:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}


  was:
An IOException will be thrown in the following case:
1. set part size = n (102400)
2. assume the current position = 0, so partRemaining = 102400
3. call seek(pos = 101802); since pos > position && pos < position + 
partRemaining, the stream skips pos - position bytes, but partRemaining stays 
the same
4. reading more than n - pos bytes then throws an IOException, because the 
stream believes 102400 bytes remain in the part when only 598 actually do.

Current code:
{code:java}
@Override
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    AliyunOSSUtils.skipFully(wrappedStream, pos - position);
    // we need to update partRemaining here
    position = pos;
  } else {
    reopen(pos);
  }
}
{code}

Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}



> IOException will be thrown when read from Aliyun OSS
> 
>
> Key: HADOOP-15063
> URL: https://issues.apache.org/jira/browse/HADOOP-15063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: wujinhu
>Priority: Critical
>
> An IOException will be thrown in the following case:
> 1. set part size = n (102400)
> 2. assume the current position = 0, so partRemaining = 102400
> 3. call seek(pos = 101802); since pos > position && pos < position + 
> partRemaining, the stream skips pos - position bytes, but partRemaining 
> stays the same
> 4. reading more than n - pos bytes then throws an IOException, because the 
> stream believes 102400 bytes remain in the part when only 598 actually do.
> Current code:
> {code:java}
> @Override
> public synchronized void seek(long pos) throws IOException {
>   checkNotClosed();
>   if (position == pos) {
>     return;
>   } else if (pos > position && pos < position + partRemaining) {
>     AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>     // we need to update partRemaining here
>     position = pos;
>   } else {
>     reopen(pos);
>   }
> }
> {code}
> Logs:
> java.io.IOException: Failed to read from stream. Remaining:101802
>   at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
>   at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> How to reproduce:
> 1. create a file of 10 MB
> 2. run:
> {code:java}
> int seekTimes = 150;
> for (int i = 0; i < seekTimes; i++) {
>   long pos = size / (seekTimes - i) - 1;
>   LOG.info("begin seeking for pos: " + pos);
>   byte[] buf = new byte[1024];
>   instream.read(pos, buf, 0, 1024);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HADOOP-15063) IOException will be thrown when read from Aliyun OSS

2017-11-21 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15063:
-
Description: 
An IOException will be thrown in the following case:
1. set part size = n (102400)
2. assume the current position = 0, so partRemaining = 102400
3. call seek(pos = 101802); since pos > position && pos < position + 
partRemaining, the stream skips pos - position bytes, but partRemaining stays 
the same
4. reading more than n - pos bytes then throws an IOException, because the 
stream believes 102400 bytes remain in the part when only 598 actually do.

Current code:
{code:java}
@Override
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    AliyunOSSUtils.skipFully(wrappedStream, pos - position);
    // we need to update partRemaining here
    position = pos;
  } else {
    reopen(pos);
  }
}
{code}

Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}


  was:
An IOException will be thrown in the following case:
1. set part size = n (102400)
2. assume the current position = 0, so partRemaining = 102400
3. call seek(pos = 101802); since pos > position && pos < position + 
partRemaining, the stream skips pos - position bytes, but partRemaining stays 
the same
4. reading more than n - pos bytes then throws an IOException, because the 
stream believes 102400 bytes remain in the part when only 598 actually do.

Current code:
{code:java}
@Override
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    AliyunOSSUtils.skipFully(wrappedStream, pos - position);
    // we need to update partRemaining here
    position = pos;
  } else {
    reopen(pos);
  }
}
{code}

Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}



> IOException will be thrown when read from Aliyun OSS
> 
>
> Key: HADOOP-15063
> URL: https://issues.apache.org/jira/browse/HADOOP-15063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: wujinhu
>Priority: Critical
>
> An IOException will be thrown in the following case:
> 1. set part size = n (102400)
> 2. assume the current position = 0, so partRemaining = 102400
> 3. call seek(pos = 101802); since pos > position && pos < position + 
> partRemaining, the stream skips pos - position bytes, but partRemaining 
> stays the same
> 4. reading more than n - pos bytes then throws an IOException, because the 
> stream believes 102400 bytes remain in the part when only 598 actually do.
> Current code:
> {code:java}
> @Override
> public synchronized void seek(long pos) throws IOException {
>   checkNotClosed();
>   if (position == pos) {
>     return;
>   } else if (pos > position && pos < position + partRemaining) {
>     AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>     // we need to update partRemaining here
>     position = pos;
>   } else {
>     reopen(pos);
>   }
> }
> {code}
> Logs:
> java.io.IOException: Failed to read from stream. Remaining:101802
>   at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
>   at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> How to reproduce:
> 1. create a file of 10 MB
> 2. run:
> {code:java}
> int seekTimes = 150;
> for (int i = 0; i < seekTimes; i++) {
>   long pos = size / (seekTimes - i) - 1;
>   LOG.info("begin seeking for pos: " + pos);
>   byte[] buf = new byte[1024];
>   instream.read(pos, buf, 0, 1024);
> }
> {code}



--
This message was sent by Atlassian JIRA

[jira] [Updated] (HADOOP-15063) IOException will be thrown when read from Aliyun OSS

2017-11-21 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15063:
-
Description: 
An IOException will be thrown in the following case:
1. set part size = n (102400)
2. assume the current position = 0, so partRemaining = 102400
3. call seek(pos = 101802); since pos > position && pos < position + 
partRemaining, the stream skips pos - position bytes, but partRemaining stays 
the same
4. reading more than n - pos bytes then throws an IOException, because the 
stream believes 102400 bytes remain in the part when only 598 actually do.

Current code:
{code:java}
@Override
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    AliyunOSSUtils.skipFully(wrappedStream, pos - position);
    // we need to update partRemaining here
    position = pos;
  } else {
    reopen(pos);
  }
}
{code}

Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}


  was:
An IOException will be thrown in some cases.
Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  //instream.seek(pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}



> IOException will be thrown when read from Aliyun OSS
> 
>
> Key: HADOOP-15063
> URL: https://issues.apache.org/jira/browse/HADOOP-15063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: wujinhu
>Priority: Critical
>
> An IOException will be thrown in the following case:
> 1. set part size = n (102400)
> 2. assume the current position = 0, so partRemaining = 102400
> 3. call seek(pos = 101802); since pos > position && pos < position + 
> partRemaining, the stream skips pos - position bytes, but partRemaining 
> stays the same
> 4. reading more than n - pos bytes then throws an IOException, because the 
> stream believes 102400 bytes remain in the part when only 598 actually do.
> Current code:
> {code:java}
> @Override
> public synchronized void seek(long pos) throws IOException {
>   checkNotClosed();
>   if (position == pos) {
>     return;
>   } else if (pos > position && pos < position + partRemaining) {
>     AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>     // we need to update partRemaining here
>     position = pos;
>   } else {
>     reopen(pos);
>   }
> }
> {code}
> Logs:
> java.io.IOException: Failed to read from stream. Remaining:101802
>   at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
>   at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> How to reproduce:
> 1. create a file of 10 MB
> 2. run:
> {code:java}
> int seekTimes = 150;
> for (int i = 0; i < seekTimes; i++) {
>   long pos = size / (seekTimes - i) - 1;
>   LOG.info("begin seeking for pos: " + pos);
>   byte[] buf = new byte[1024];
>   instream.read(pos, buf, 0, 1024);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15063) IOException will be thrown when read from Aliyun OSS

2017-11-21 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15063:
-
Summary: IOException will be thrown when read from Aliyun OSS  (was: 
IOException is likely to be thrown when read from Aliyun OSS)

> IOException will be thrown when read from Aliyun OSS
> 
>
> Key: HADOOP-15063
> URL: https://issues.apache.org/jira/browse/HADOOP-15063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: wujinhu
>Priority: Critical
>
> An IOException will be thrown in some cases.
> Logs:
> java.io.IOException: Failed to read from stream. Remaining:101802
>   at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
>   at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> How to reproduce:
> 1. create a file of 10 MB
> 2. run:
> {code:java}
> int seekTimes = 150;
> for (int i = 0; i < seekTimes; i++) {
>   long pos = size / (seekTimes - i) - 1;
>   LOG.info("begin seeking for pos: " + pos);
>   //instream.seek(pos);
>   byte[] buf = new byte[1024];
>   instream.read(pos, buf, 0, 1024);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15063) IOException is likely to be thrown when read from Aliyun OSS

2017-11-21 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15063:
-
Summary: IOException is likely to be thrown when read from Aliyun OSS  
(was: IOException will be thrown when read from Aliyun OSS)

> IOException is likely to be thrown when read from Aliyun OSS
> 
>
> Key: HADOOP-15063
> URL: https://issues.apache.org/jira/browse/HADOOP-15063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: wujinhu
>Priority: Critical
>
> An IOException will be thrown in some cases.
> Logs:
> java.io.IOException: Failed to read from stream. Remaining:101802
>   at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
>   at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> How to reproduce:
> 1. create a file of 10 MB
> 2. run:
> {code:java}
> int seekTimes = 150;
> for (int i = 0; i < seekTimes; i++) {
>   long pos = size / (seekTimes - i) - 1;
>   LOG.info("begin seeking for pos: " + pos);
>   //instream.seek(pos);
>   byte[] buf = new byte[1024];
>   instream.read(pos, buf, 0, 1024);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15063) IOException will be thrown when read from Aliyun OSS

2017-11-21 Thread wujinhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15063:
-
Description: 
An IOException will be thrown in some cases.
Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  //instream.seek(pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}


  was:
Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  //instream.seek(pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}


> IOException will be thrown when read from Aliyun OSS
> 
>
> Key: HADOOP-15063
> URL: https://issues.apache.org/jira/browse/HADOOP-15063
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: wujinhu
>Priority: Critical
>
> An IOException will be thrown in some cases.
> Logs:
> java.io.IOException: Failed to read from stream. Remaining:101802
>   at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
>   at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)
> How to reproduce:
> 1. create a file of 10 MB
> 2. run:
> {code:java}
> int seekTimes = 150;
> for (int i = 0; i < seekTimes; i++) {
>   long pos = size / (seekTimes - i) - 1;
>   LOG.info("begin seeking for pos: " + pos);
>   //instream.seek(pos);
>   byte[] buf = new byte[1024];
>   instream.read(pos, buf, 0, 1024);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262023#comment-16262023
 ] 

Rohith Sharma K S commented on HADOOP-15059:


ATSv2 officially claims HBase-1.2.6 as its backend. It works _absolutely fine_ 
in non-secure mode, i.e. installing *Hadoop-3.0 + HBase-1.2.6*. 
But the same deployment does not work in a secured cluster, because HBase-1.2.6 
cannot communicate with Hadoop-3.x due to a token proto mismatch. Basically, 
the HMaster daemon fails to start with an exception while connecting to 
Hadoop-3.x in a secure cluster!

To simplify the problem: Hadoop-2.x clients (HBase-1.2.6 compiled against 
Hadoop-2.x) cannot communicate with a Hadoop-3.x cluster. Are we going to keep 
binary compatibility across hadoop-2.x and hadoop-3.x? A similar scenario can 
happen during a rolling upgrade as well, as reported in this JIRA. 

Btw, on the ATSv2 side we are planning to document this as a known issue 
until HBase releases 2.x. cc: [~vrushalic] [~varun_saxena]
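
For context, a minimal sketch of the kind of version gate that produces the 
"Unknown version ... in token storage" error quoted below (an illustration with 
assumed names, not the actual Credentials source):
{code:java}
// Hypothetical 2.x-era reader that only understands token-storage format version 0.
void readTokenStorage(java.io.DataInputStream in) throws java.io.IOException {
  byte[] magic = new byte[4];
  in.readFully(magic);           // the magic header written by the serializer
  byte version = in.readByte();  // a 3.x writer may emit a newer version here
  if (version != 0) {
    // An old reader has no decoder for the newer format, so it can only bail out.
    throw new java.io.IOException("Unknown version " + version + " in token storage.");
  }
  // ... decode the version-0 (Writable-serialized) token records ...
}
{code}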

> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Priority: Blocker
>
> I tried to deploy a 3.0 cluster with a 2.9 MR tar ball. The MR job failed 
> because of the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As we 
> claim "rolling upgrade" is supported in Hadoop 3, we should fix this before 
> we ship 3.0; otherwise all running MR applications will get stuck during/after 
> the upgrade.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15063) IOException will be thrown when read from Aliyun OSS

2017-11-21 Thread wujinhu (JIRA)
wujinhu created HADOOP-15063:


 Summary: IOException will be thrown when read from Aliyun OSS
 Key: HADOOP-15063
 URL: https://issues.apache.org/jira/browse/HADOOP-15063
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/oss
Affects Versions: 3.0.0-alpha2
Reporter: wujinhu
Priority: Critical


Logs:
java.io.IOException: Failed to read from stream. Remaining:101802

    at org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.read(AliyunOSSInputStream.java:182)
    at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:75)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:92)

How to reproduce:
1. create a file of 10 MB
2. run:
{code:java}
int seekTimes = 150;
for (int i = 0; i < seekTimes; i++) {
  long pos = size / (seekTimes - i) - 1;
  LOG.info("begin seeking for pos: " + pos);
  //instream.seek(pos);
  byte[] buf = new byte[1024];
  instream.read(pos, buf, 0, 1024);
}
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15058) create-release site build outputs dummy shaded jars due to skipShade

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261995#comment-16261995
 ] 

Hadoop QA commented on HADOOP-15058:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 2s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15058 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898795/HADOOP-15058.001.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 3a2e5d644fc4 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 782ba3b |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 341 (vs. ulimit of 5000) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13735/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> create-release site build outputs dummy shaded jars due to skipShade
> 
>
> Key: HADOOP-15058
> URL: https://issues.apache.org/jira/browse/HADOOP-15058
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-15058.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15058) create-release site build outputs dummy shaded jars due to skipShade

2017-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-15058:
-
Status: Patch Available  (was: Open)

> create-release site build outputs dummy shaded jars due to skipShade
> 
>
> Key: HADOOP-15058
> URL: https://issues.apache.org/jira/browse/HADOOP-15058
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-15058.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15058) create-release site build outputs dummy shaded jars due to skipShade

2017-11-21 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-15058:
-
Attachment: HADOOP-15058.001.patch

Here's a patch which adds an option to run mvn deploy rather than just 
install. I had to rejigger the order of the steps, since the doc build 
requires -DskipShade due to the jdiff/xerces dependency.

Now:

* Do maven default lifecycle build, with optional deploy
* Stage bin/src tarballs
* Do mvn site build
* Stage site and fixup the bin tarball to have the docs

I checked that running --asfrelease deploys things to Nexus, and that the 
staged shaded jars there are not empty like before.

Checked that the binary tarball has the docs, and did basic validation with 
HDFS.

> create-release site build outputs dummy shaded jars due to skipShade
> 
>
> Key: HADOOP-15058
> URL: https://issues.apache.org/jira/browse/HADOOP-15058
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-15058.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14960) Add GC time percentage monitor/alerter

2017-11-21 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261952#comment-16261952
 ] 

Xiao Chen commented on HADOOP-14960:


Thanks Erik for the good catch, and Misha for confirming.

Misha, could you please create a new jira and post the fix there?

> Add GC time percentage monitor/alerter
> --
>
> Key: HADOOP-14960
> URL: https://issues.apache.org/jira/browse/HADOOP-14960
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Fix For: 3.0.0, 2.10.0
>
> Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, 
> HADOOP-14960.03.patch, HADOOP-14960.04.patch
>
>
> Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides 
> several metrics related to GC. Unfortunately, all these metrics are not as 
> useful as they could be, because they don't answer the first and most 
> important question related to GC and JVM health: what percentage of time my 
> JVM is paused in GC? This percentage, calculated as the sum of the GC pauses 
> over some period, like 1 minute, divided by that period - is the most 
> convenient measure of the GC health because:
> - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% 
> is really bad
> - it allows for easy apple-to-apple comparison between runs, even between 
> different apps
> - when this metric reaches some critical value like 70%, it almost always 
> indicates a "GC death spiral", from which the app can recover only if it 
> drops some task(s) etc.
> The existing "total GC time", "total number of GCs" etc. metrics only give 
> numbers that can be used to roughly estimate this percentage. Thus it is 
> suggested to add a new metric to this class, and possibly allow users to 
> register handlers that will be automatically invoked if this metric reaches 
> the specified threshold.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2017-11-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated HADOOP-15062:

Attachment: HADOOP-15062.000.patch

> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: HADOOP-15062.000.patch
>
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43)
> {code}
> This happened due to the following openssl change:
> https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9

2017-11-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi moved YARN-7554 to HADOOP-15062:
---

Key: HADOOP-15062  (was: YARN-7554)
Project: Hadoop Common  (was: Hadoop YARN)

> TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
> ---
>
> Key: HADOOP-15062
> URL: https://issues.apache.org/jira/browse/HADOOP-15062
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>
> {code}
> [ERROR] 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec  Time 
> elapsed: 0.478 s  <<< FAILURE!
> java.lang.AssertionError: Unable to instantiate codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of 
> OpenSSL installed?
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43)
> {code}
> This happened due to the following openssl change:
> https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261879#comment-16261879
 ] 

Hadoop QA commented on HADOOP-9747:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 14 new + 191 unchanged - 25 fixed = 205 total (was 216) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
32s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-9747 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898768/HADOOP-9747-trunk.01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux d91c609ee750 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 03c311e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13734/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13734/testReport/ |
| Max. process+thread count | 1399 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 

[jira] [Updated] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15047:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.3
   Status: Resolved  (was: Patch Available)

Committed this to branch-2.8 and branch-2.8.3. Thanks [~bharatviswa] for the 
contribution!

> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Fix For: 2.8.3
>
> Attachments: HADOOP-15047-branch-2.8.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14960) Add GC time percentage monitor/alerter

2017-11-21 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261861#comment-16261861
 ] 

Misha Dmitriev commented on HADOOP-14960:
-

[~xkrogen] indeed, the GC time percentage is likely to go up and down all the 
time. Do you mean this code at line 190 in JvmMetrics.java?

{code}
if (gcTimeMonitor != null) {
  rb.addCounter(GcTimePercentage,
  gcTimeMonitor.getLatestGcData().getGcTimePercentage());
}
{code}

Replacing {{addCounter}} with {{addGauge}} is trivial. [~xiaochen], in which 
ticket would you recommend submitting a patch for this change?
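
For reference, a sketch of that change against the snippet above (same names as 
quoted; the surrounding JvmMetrics context is assumed):
{code:java}
if (gcTimeMonitor != null) {
  // A gauge models a value that can move up and down, unlike a monotonic counter.
  rb.addGauge(GcTimePercentage,
      gcTimeMonitor.getLatestGcData().getGcTimePercentage());
}
{code}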

> Add GC time percentage monitor/alerter
> --
>
> Key: HADOOP-14960
> URL: https://issues.apache.org/jira/browse/HADOOP-14960
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Fix For: 3.0.0, 2.10.0
>
> Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, 
> HADOOP-14960.03.patch, HADOOP-14960.04.patch
>
>
> Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides 
> several metrics related to GC. Unfortunately, all these metrics are not as 
> useful as they could be, because they don't answer the first and most 
> important question related to GC and JVM health: what percentage of time my 
> JVM is paused in GC? This percentage, calculated as the sum of the GC pauses 
> over some period, like 1 minute, divided by that period - is the most 
> convenient measure of the GC health because:
> - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% 
> is really bad
> - it allows for easy apple-to-apple comparison between runs, even between 
> different apps
> - when this metric reaches some critical value like 70%, it almost always 
> indicates a "GC death spiral", from which the app can recover only if it 
> drops some task(s) etc.
> The existing "total GC time", "total number of GCs" etc. metrics only give 
> numbers that can be used to roughly estimate this percentage. Thus it is 
> suggested to add a new metric to this class, and possibly allow users to 
> register handlers that will be automatically invoked if this metric reaches 
> the specified threshold.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261860#comment-16261860
 ] 

Akira Ajisaka commented on HADOOP-15047:


+1, checking this in.

> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15047-branch-2.8.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15060) TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261854#comment-16261854
 ] 

Hadoop QA commented on HADOOP-15060:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15060 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898757/YARN-7553.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e7380c426f7d 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 03c311e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13732/testReport/ |
| Max. process+thread count | 1363 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13732/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Commented] (HADOOP-14229) hadoop.security.auth_to_local example is incorrect in the documentation

2017-11-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261851#comment-16261851
 ] 

Bharat Viswanadham commented on HADOOP-14229:
-

Can this jira be backported to branch-2 as well, since the same issue exists 
in the 2.x releases?

> hadoop.security.auth_to_local example is incorrect in the documentation
> ---
>
> Key: HADOOP-14229
> URL: https://issues.apache.org/jira/browse/HADOOP-14229
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14229.01.patch, HADOOP-14229.02.patch, 
> HADOOP-14229.03.patch
>
>
> Let's take jhs as an example:
> {code}RULE:[2:$1@$0](jhs/.*@.*REALM.TLD)s/.*/mapred/{code}
> That means the principal has 2 components (jhs/myhost@REALM).
> The second column converts this to jhs@REALM, and the regex is then applied 
> to that shortened form; since it contains no "/", the regex, which expects a 
> "/" in the principal, will never match.
> My suggestion is
> {code}RULE:[2:$1](jhs)s/.*/mapred/{code}
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-11-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261795#comment-16261795
 ] 

Bharat Viswanadham edited comment on HADOOP-9747 at 11/22/17 1:07 AM:
--

Proceeded with code changes on top of [~daryn]'s patch.
[~daryn]
Uploaded a new patch v01 to address the review comments.





was (Author: bharatviswa):
[~daryn]
Uploaded a new patch v01 to address review comments.




> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-11-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261795#comment-16261795
 ] 

Bharat Viswanadham commented on HADOOP-9747:


[~daryn]
Uploaded a new patch v01 to address review comments.




> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-11-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-9747:
---
Attachment: HADOOP-9747-trunk.01.patch

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-11-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-9747:
---
Attachment: (was: HADOOP-9747-trunk.01.patch)

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-11-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-9747:
---
Attachment: HADOOP-9747-trunk.01.patch

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15061) Regenerate editsStored and editsStored.xml in HDFS tests

2017-11-21 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-15061:
--

 Summary: Regenerate editsStored and editsStored.xml in HDFS tests
 Key: HADOOP-15061
 URL: https://issues.apache.org/jira/browse/HADOOP-15061
 Project: Hadoop Common
  Issue Type: Task
  Components: test
Affects Versions: 3.0.0-beta1
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu


From HDFS-12840, we found that the `editsStored` file in HDFS tests is missing 
a few operations, i.e., the following operations from 
{{DFSTestUtils#runOperations()}}.
{code}
// OP_UPDATE_BLOCKS 25
final String updateBlockFile = "/update_blocks";
FSDataOutputStream fout = filesystem.create(new Path(updateBlockFile),
    true, 4096, (short) 1, 4096L);
fout.write(1);
fout.hflush();
long fileId = ((DFSOutputStream) fout.getWrappedStream()).getFileId();
DFSClient dfsclient = DFSClientAdapter.getDFSClient(filesystem);
LocatedBlocks blocks = dfsclient.getNamenode().getBlockLocations(
    updateBlockFile, 0, Integer.MAX_VALUE);
dfsclient.getNamenode().abandonBlock(blocks.get(0).getBlock(), fileId,
    updateBlockFile, dfsclient.clientName);
fout.close();
{code}

We should re-generate the edits file and the related XML to sync them with the code.
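
For reference, a minimal sketch of the conversion step using the offline edits 
viewer; the resource paths are assumptions about where the test files live in 
the source tree:

{noformat}
# binary edits -> XML (assumed test-resource locations)
hdfs oev -p xml -i hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored \
  -o hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml

# XML -> binary edits
hdfs oev -p binary -i hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml \
  -o hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
{noformat}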



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15060) TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky

2017-11-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated HADOOP-15060:

Attachment: YARN-7553.000.patch

> TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky
> ---
>
> Key: HADOOP-15060
> URL: https://issues.apache.org/jira/browse/HADOOP-15060
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7553.000.patch
>
>
> {code}
> [ERROR] 
> testFiniteGroupResolutionTime(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)
>   Time elapsed: 61.975 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected the logs to carry a message about command timeout but was: 
> 2017-11-22 00:10:57,523 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(181)) - unable to return 
> groups for user foobarnonexistinguser
> PartialGroupNameException The user name 'foobarnonexistinguser' is not found. 
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:275)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:178)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime(TestShellBasedUnixGroupsMapping.java:278)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15060) TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky

2017-11-21 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated HADOOP-15060:

Status: Patch Available  (was: Open)

> TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky
> ---
>
> Key: HADOOP-15060
> URL: https://issues.apache.org/jira/browse/HADOOP-15060
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
> Attachments: YARN-7553.000.patch
>
>
> {code}
> [ERROR] 
> testFiniteGroupResolutionTime(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)
>   Time elapsed: 61.975 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected the logs to carry a message about command timeout but was: 
> 2017-11-22 00:10:57,523 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(181)) - unable to return 
> groups for user foobarnonexistinguser
> PartialGroupNameException The user name 'foobarnonexistinguser' is not found. 
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:275)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:178)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime(TestShellBasedUnixGroupsMapping.java:278)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15060) TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky

2017-11-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261741#comment-16261741
 ] 

Wei-Chiu Chuang commented on HADOOP-15060:
--

Moved it to Hadoop Common for better visibility.

> TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky
> ---
>
> Key: HADOOP-15060
> URL: https://issues.apache.org/jira/browse/HADOOP-15060
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>
> {code}
> [ERROR] 
> testFiniteGroupResolutionTime(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)
>   Time elapsed: 61.975 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected the logs to carry a message about command timeout but was: 
> 2017-11-22 00:10:57,523 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(181)) - unable to return 
> groups for user foobarnonexistinguser
> PartialGroupNameException The user name 'foobarnonexistinguser' is not found. 
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:275)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:178)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime(TestShellBasedUnixGroupsMapping.java:278)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15060) TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky

2017-11-21 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang moved YARN-7553 to HADOOP-15060:


Key: HADOOP-15060  (was: YARN-7553)
Project: Hadoop Common  (was: Hadoop YARN)

> TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime flaky
> ---
>
> Key: HADOOP-15060
> URL: https://issues.apache.org/jira/browse/HADOOP-15060
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>
> {code}
> [ERROR] 
> testFiniteGroupResolutionTime(org.apache.hadoop.security.TestShellBasedUnixGroupsMapping)
>   Time elapsed: 61.975 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected the logs to carry a message about command timeout but was: 
> 2017-11-22 00:10:57,523 WARN  security.ShellBasedUnixGroupsMapping 
> (ShellBasedUnixGroupsMapping.java:getUnixGroups(181)) - unable to return 
> groups for user foobarnonexistinguser
> PartialGroupNameException The user name 'foobarnonexistinguser' is not found. 
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.resolvePartialGroupNames(ShellBasedUnixGroupsMapping.java:275)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:178)
>   at 
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:97)
>   at 
> org.apache.hadoop.security.TestShellBasedUnixGroupsMapping.testFiniteGroupResolutionTime(TestShellBasedUnixGroupsMapping.java:278)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14960) Add GC time percentage monitor/alerter

2017-11-21 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261671#comment-16261671
 ] 

Erik Krogen commented on HADOOP-14960:
--

Hey [~mi...@cloudera.com], sorry to be late here, but shouldn't this metric be 
a gauge rather than a counter? Counters should always be increasing-only. It 
looks like this added value represents the percentage within the last 
observation window, meaning it will vary up and down, so should be a gauge. 
Please let me know if I am misunderstanding.
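
For illustration, a minimal sketch of the windowed computation using only 
java.lang.management (not the HADOOP-14960 patch itself); because the derived 
value rises and falls between observation windows, it matches gauge semantics 
rather than counter semantics:

{code:java}
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Hedged sketch: sample cumulative GC time at the start and end of an
// observation window and derive the pause percentage for that window.
public class GcTimePercentageSketch {
  private static long totalGcMillis() {
    long total = 0;
    for (GarbageCollectorMXBean gc
        : ManagementFactory.getGarbageCollectorMXBeans()) {
      long t = gc.getCollectionTime(); // -1 if unsupported by the collector
      if (t > 0) {
        total += t;
      }
    }
    return total;
  }

  public static void main(String[] args) throws InterruptedException {
    final long windowMillis = 60_000L; // 1-minute window, as in the description
    long before = totalGcMillis();
    Thread.sleep(windowMillis);
    long after = totalGcMillis();
    long pct = 100L * (after - before) / windowMillis;
    System.out.println("GC time percentage over the last window: " + pct + "%");
  }
}
{code}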

> Add GC time percentage monitor/alerter
> --
>
> Key: HADOOP-14960
> URL: https://issues.apache.org/jira/browse/HADOOP-14960
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Fix For: 3.0.0, 2.10.0
>
> Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, 
> HADOOP-14960.03.patch, HADOOP-14960.04.patch
>
>
> Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides 
> several metrics related to GC. Unfortunately, all these metrics are not as 
> useful as they could be, because they don't answer the first and most 
> important question related to GC and JVM health: what percentage of time my 
> JVM is paused in GC? This percentage, calculated as the sum of the GC pauses 
> over some period, like 1 minute, divided by that period - is the most 
> convenient measure of the GC health because:
> - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% 
> is really bad
> - it allows for easy apple-to-apple comparison between runs, even between 
> different apps
> - when this metric reaches some critical value like 70%, it almost always 
> indicates a "GC death spiral", from which the app can recover only if it 
> drops some task(s) etc.
> The existing "total GC time", "total number of GCs" etc. metrics only give 
> numbers that can be used to roughly estimate this percentage. Thus it is 
> suggested to add a new metric to this class, and possibly allow users to 
> register handlers that will be automatically invoked if this metric reaches 
> the specified threshold.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261626#comment-16261626
 ] 

Hadoop QA commented on HADOOP-15003:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 58 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 22 new + 122 unchanged 
- 27 fixed = 144 total (was 149) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 79 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
12s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m  0s{color} 
| {color:red} hadoop-mapreduce-client-app in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.mapreduce.v2.app.TestRecovery |
\\
\\
|| Subsystem || Report/Notes 

[jira] [Commented] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK

2017-11-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261515#comment-16261515
 ] 

Steve Loughran commented on HADOOP-13887:
-

FWIW, Presto has this, and they get to see the Presto FS issues:

* https://github.com/prestodb/presto/issues/7186 : Presto doesn't seem to be 
able to read encrypted Parquet data
* https://github.com/aws/aws-sdk-java/issues/1057 : EMRFS doesn't set the 
x-amz-unencrypted-content-length header

Presto does look for the header; it just gets burned by EMRFS-saved data, which 
doesn't set it. What does EMR do? From the issues:

bq. We had a chat with the EMR people to understand how Hive/Spark is able to 
read encrypted files when the x-amz-unencrypted-content-length is not set. The 
outcome is, EMR Hive/Spark reads the entire file in those cases to determine 
the unencrypted content length, which is something that we don't really want to 
do.
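
For illustration, a minimal sketch of wiring up client-side encryption with the 
AWS SDK v1 classes this issue refers to; the demo key handling and the exact 
constructor are assumptions about the SDK version in use:

{code:java}
import com.amazonaws.services.s3.AmazonS3EncryptionClient;
import com.amazonaws.services.s3.model.EncryptionMaterials;
import com.amazonaws.services.s3.model.StaticEncryptionMaterialsProvider;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Hedged sketch, not S3A code: create an SDK client that encrypts object data
// client-side. Real usage would source the key from a KMS or keystore rather
// than generating a throwaway one.
public class CseClientSketch {
  public static AmazonS3EncryptionClient newCseClient() throws Exception {
    SecretKey demoKey = KeyGenerator.getInstance("AES").generateKey();
    EncryptionMaterials materials = new EncryptionMaterials(demoKey);
    return new AmazonS3EncryptionClient(
        new StaticEncryptionMaterialsProvider(materials));
  }
}
{code}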



> Encrypt S3A data client-side with AWS SDK
> -
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, 
> HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, 
> HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, 
> HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, 
> HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, 
> HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, 
> HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf
>
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
> Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3client used in S3AFileSystem.java



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-21 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261483#comment-16261483
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-15059:
--

HADOOP-13123 looks related.

> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Priority: Blocker
>
> I tried to deploy a 3.0 cluster with the 2.9 MR tar ball. The MR job fails 
> because of the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As we 
> claim "rolling upgrade" is supported in Hadoop 3, we should fix this before 
> we ship 3.0; otherwise all running MR applications will get stuck during/after 
> the upgrade.
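
For illustration, a minimal sketch of the failure mode described above (an 
assumption about the mechanism, not the actual Credentials code): a 2.9-era 
reader that only understands format version 0 rejects a token file stamped with 
a newer version byte by a 3.0 writer.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;

// Hedged sketch of a version-checked token storage reader.
class TokenStorageReaderSketch {
  private static final byte KNOWN_VERSION = 0; // all the old reader supports

  static void readTokenStorage(DataInputStream in) throws IOException {
    byte version = in.readByte();
    if (version != KNOWN_VERSION) {
      // matches the shape of the error in the log above
      throw new IOException("Unknown version " + version + " in token storage.");
    }
    // ... deserialize tokens in the version-0 layout ...
  }
}
{code}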



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-21 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261481#comment-16261481
 ] 

Junping Du commented on HADOOP-15059:
-

Moving to Hadoop as the fix could be in Hadoop/YARN/MR.

> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Priority: Blocker
>
> I tried to deploy a 3.0 cluster with the 2.9 MR tar ball. The MR job fails 
> because of the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As we 
> claim "rolling upgrade" is supported in Hadoop 3, we should fix this before 
> we ship 3.0; otherwise all running MR applications will get stuck during/after 
> the upgrade.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-21 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du moved MAPREDUCE-7012 to HADOOP-15059:


Target Version/s: 3.0.0  (was: 3.0.0)
 Component/s: (was: mrv2)
  (was: distributed-cache)
  security
 Key: HADOOP-15059  (was: MAPREDUCE-7012)
 Project: Hadoop Common  (was: Hadoop Map/Reduce)

> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Priority: Blocker
>
> I tried to deploy a 3.0 cluster with the 2.9 MR tar ball. The MR job fails 
> because of the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As we 
> claim "rolling upgrade" is supported in Hadoop 3, we should fix this before 
> we ship 3.0; otherwise all running MR applications will get stuck during/after 
> the upgrade.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15054) upgrade hadoop dependency on commons-codec to 1.11

2017-11-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261445#comment-16261445
 ] 

Bharat Viswanadham commented on HADOOP-15054:
-

[~jojochuang]
Thanks for the info.

Yes, since we have classpath isolation and shaded jars, it will not affect 
downstream clients.

> upgrade hadoop dependency on commons-codec to 1.11
> --
>
> Key: HADOOP-15054
> URL: https://issues.apache.org/jira/browse/HADOOP-15054
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: PJ Fanning
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15054.00.patch
>
>
> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-auth/3.0.0-beta1 
> retains the dependency on an old commons-codec version (1.4), as does 
> hadoop-common.
> Would it be possible to consider an upgrade to 1.11?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15054) upgrade hadoop dependency on commons-codec to 1.11

2017-11-21 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261442#comment-16261442
 ] 

Wei-Chiu Chuang commented on HADOOP-15054:
--

Thanks.
Please also note that commons-codec is used in many places within the Hadoop 
codebase (e.g. YARN, MapReduce), so running tests in those two sub-components 
alone would not be sufficient.

That said, given that Hadoop 3 has client classpath isolation, it should be 
easier to bump up a dependency version now.

I applied this patch and am running our internal test suites against it now. 
Expect to have results back soon.
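
For reference, the bump itself is a small change; a minimal sketch, assuming the 
version is set inline rather than via the centrally managed property in 
hadoop-project/pom.xml:

{code:xml}
<!-- Hedged sketch: in the real tree the version is typically managed in one
     place (hadoop-project/pom.xml), so the inline version here is illustrative. -->
<dependency>
  <groupId>commons-codec</groupId>
  <artifactId>commons-codec</artifactId>
  <version>1.11</version>
</dependency>
{code}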

> upgrade hadoop dependency on commons-codec to 1.11
> --
>
> Key: HADOOP-15054
> URL: https://issues.apache.org/jira/browse/HADOOP-15054
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: PJ Fanning
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15054.00.patch
>
>
> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-auth/3.0.0-beta1 
> retains the dependency on an old commons-codec version (1.4), as does 
> hadoop-common.
> Would it be possible to consider an upgrade to 1.11?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14971) Merge S3A committers into trunk

2017-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261439#comment-16261439
 ] 

ASF GitHub Bot commented on HADOOP-14971:
-

Github user steveloughran commented on the issue:

https://github.com/apache/hadoop/pull/282
  
:)



> Merge S3A committers into trunk
> ---
>
> Key: HADOOP-14971
> URL: https://issues.apache.org/jira/browse/HADOOP-14971
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-040.patch, HADOOP-13786-041.patch
>
>
> Merge the HADOOP-13786 committer into trunk. This branch is being set up as a 
> GitHub PR for review there & to keep it out of the mailboxes of the watchers 
> on the main JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261401#comment-16261401
 ] 

Hadoop QA commented on HADOOP-15047:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:c2d96dd |
| JIRA Issue | HADOOP-15047 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898713/HADOOP-15047-branch-2.8.00.patch
 |
| Optional Tests |  asflicense  |
| uname | Linux c08b811a570c 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / deb21e8 |
| maven | version: Apache Maven 3.0.5 |
| Max. process+thread count | 38 (vs. ulimit of 5000) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13730/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15047-branch-2.8.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker

2017-11-21 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261384#comment-16261384
 ] 

Aaron Fabbri commented on HADOOP-15003:
---

My testing in us-west-2 looks good. +1 on the latest patch.

Just need to restart Jenkins here.

> Merge S3A committers into trunk: Yetus patch checker
> 
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, 
> HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, 
> HADOOP-13786-046.patch, HADOOP-13786-047.patch, HADOOP-13786-048.patch, 
> HADOOP-13786-049.patch, HADOOP-13786-050.patch, HADOOP-13786-051.patch, 
> HADOOP-13786-052.patch, HADOOP-13786-053.patch, HADOOP-15033-testfix-1.diff
>
>
> This is a Yetus-only JIRA created to have Yetus review the 
> HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR 
> [https://github.com/apache/hadoop/pull/282] is preventing this from happening 
> in HADOOP-14971.
> Reviews should go into the PR/other task.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15054) upgrade hadoop dependency on commons-codec to 1.11

2017-11-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261367#comment-16261367
 ] 

Bharat Viswanadham commented on HADOOP-15054:
-

Ran the hadoop-auth and hadoop-hdfs-client tests.
Tests passed locally; the few failures I saw have open JIRAs and are not 
related to this patch.

> upgrade hadoop dependency on commons-codec to 1.11
> --
>
> Key: HADOOP-15054
> URL: https://issues.apache.org/jira/browse/HADOOP-15054
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: PJ Fanning
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15054.00.patch
>
>
> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-auth/3.0.0-beta1 
> retains the dependency on an old commons-codec version (1.4), as does 
> hadoop-common.
> Would it be possible to consider an upgrade to 1.11?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15047:

Attachment: HADOOP-15047-branch-2.8.00.patch

> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15047-branch-2.8.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15047:

Attachment: (was: HADOOP-15047.00.patch)

> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15047-branch-2.8.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261351#comment-16261351
 ] 

Hadoop QA commented on HADOOP-15047:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-15047 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15047 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898712/HADOOP-15047.00.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13729/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15047.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261350#comment-16261350
 ] 

Bharat Viswanadham commented on HADOOP-15047:
-

[~ajisakaa]
Thank you for the info.
Provided a patch for branch-2.8.

> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15047.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15047:

Status: Patch Available  (was: Open)

> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15047.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15047) Python is required for -Preleasedoc but not documented in branch-2.8

2017-11-21 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-15047:

Attachment: HADOOP-15047.00.patch

> Python is required for -Preleasedoc but not documented in branch-2.8
> 
>
> Key: HADOOP-15047
> URL: https://issues.apache.org/jira/browse/HADOOP-15047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-15047.00.patch
>
>
> Python is required for -Preleasedoc but not documented in branch-2.8.
> * In trunk and branch-3.0, it was documented by HADOOP-10854.
> * In branch-2 and branch-2.9, it was documented by YARN-4849.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14971) Merge S3A committers into trunk

2017-11-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261325#comment-16261325
 ] 

ASF GitHub Bot commented on HADOOP-14971:
-

Github user ajfabbri commented on the issue:

https://github.com/apache/hadoop/pull/282
  
These last two commits look fine as well.  I'm +1 as of commit d5dcf98


> Merge S3A committers into trunk
> ---
>
> Key: HADOOP-14971
> URL: https://issues.apache.org/jira/browse/HADOOP-14971
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-040.patch, HADOOP-13786-041.patch
>
>
> Merge the HADOOP-13786 committer into trunk. This branch is being set up as a 
> GitHub PR for review there & to keep it out of the mailboxes of the watchers 
> on the main JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker

2017-11-21 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261328#comment-16261328
 ] 

Aaron Fabbri commented on HADOOP-15003:
---

Testing latest patch. Latest commits look good (trivial stuff).

> Merge S3A committers into trunk: Yetus patch checker
> 
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, 
> HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, 
> HADOOP-13786-046.patch, HADOOP-13786-047.patch, HADOOP-13786-048.patch, 
> HADOOP-13786-049.patch, HADOOP-13786-050.patch, HADOOP-13786-051.patch, 
> HADOOP-13786-052.patch, HADOOP-13786-053.patch, HADOOP-15033-testfix-1.diff
>
>
> This is a Yetus-only JIRA created to have Yetus review the 
> HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR 
> [https://github.com/apache/hadoop/pull/282] is preventing this from happening 
> in HADOOP-14971.
> Reviews should go into the PR/other task.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15058) create-release site build outputs dummy shaded jars due to skipShade

2017-11-21 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-15058:


 Summary: create-release site build outputs dummy shaded jars due 
to skipShade
 Key: HADOOP-15058
 URL: https://issues.apache.org/jira/browse/HADOOP-15058
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Blocker






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13282) S3 blob etags to be made visible in status/getFileChecksum() calls

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13282:

Attachment: HADOOP-13282-003.patch

Patch 003: patch 002 with the two checkstyle indentation issues reported against patch 001 fixed.

> S3 blob etags to be made visible in status/getFileChecksum() calls
> --
>
> Key: HADOOP-13282
> URL: https://issues.apache.org/jira/browse/HADOOP-13282
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13282-001.patch, HADOOP-13282-002.patch, 
> HADOOP-13282-003.patch
>
>
> If the etags of blobs were exported via {{getFileChecksum()}}, it'd be 
> possible to probe for a blob being in sync with a local file. Distcp could 
> use this to decide whether to skip a file or not.
> Now, there's a problem there: distcp needs source and dest filesystems to 
> implement the same algorithm. It'd only work out of the box if you were copying 
> between S3 instances. There are also quirks with encryption and multipart: 
> [s3 
> docs|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html].
>  At the very least, it's something which could be used when indexing the FS, 
> to check for changes later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13282) S3 blob etags to be made visible in status/getFileChecksum() calls

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13282:

Status: Open  (was: Patch Available)

> S3 blob etags to be made visible in status/getFileChecksum() calls
> --
>
> Key: HADOOP-13282
> URL: https://issues.apache.org/jira/browse/HADOOP-13282
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13282-001.patch, HADOOP-13282-002.patch
>
>
> If the etags of blobs were exported via {{getFileChecksum()}}, it'd be 
> possible to probe for a blob being in sync with a local file. Distcp could 
> use this to decide whether to skip a file or not.
> Now, there's a problem there: distcp needs source and dest filesystems to 
> implement the same algorithm. It'd only work out of the box if you were copying 
> between S3 instances. There are also quirks with encryption and multipart: 
> [s3 
> docs|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html].
>  At the very least, it's something which could be used when indexing the FS, 
> to check for changes later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13282) S3 blob etags to be made visible in status/getFileChecksum() calls

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13282:

Status: Patch Available  (was: Open)

> S3 blob etags to be made visible in status/getFileChecksum() calls
> --
>
> Key: HADOOP-13282
> URL: https://issues.apache.org/jira/browse/HADOOP-13282
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13282-001.patch, HADOOP-13282-002.patch
>
>
> If the etags of blobs were exported via {{getFileChecksum()}}, it'd be 
> possible to probe for a blob being in sync with a local file. Distcp could 
> use this to decide whether to skip a file or not.
> Now, there's a problem there: distcp needs source and dest filesystems to 
> implement the same algorithm. It'd only work out of the box if you were copying 
> between S3 instances. There are also quirks with encryption and multipart: 
> [s3 
> docs|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html].
>  At the very least, it's something which could be used when indexing the FS, 
> to check for changes later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13282) S3 blob etags to be made visible in status/getFileChecksum() calls

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13282:

Attachment: HADOOP-13282-002.patch

HADOOP-13282: etag support for s3a.
* Move the EtagChecksum class into a new fs.store package in hadoop common for 
use by other stores
* add tests there on its core equality/round trip operations
* Add a set of ITests for the S3A use. One of these tests is skipped if the FS 
is known to be encrypted, in case the bucket returns different etags here. To 
aid: added a getter for the S3AFS encryption algorithm.

With these etags, you can assume that if an object's etag changes, the object 
is different. You cannot safely use them to conclude that two objects, 
especially across stores, are equivalent.

(Note: this patch also reorders all the headers in ITestS3AMiscOperations. 
They'd got out of order, and as it's a low-traffic, low-conflict file, I've 
taken the chance to fix that.)

Tested: S3 London with encryption turned on; S3 Ireland without.
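
As a rough illustration of the distcp-style probe this enables, a sketch only, 
using the public FileSystem/FileChecksum API; the class name and the skip 
policy are hypothetical, not part of the patch:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class EtagSkipProbe {
  /**
   * Sketch: skip a copy only when both stores expose a checksum of the
   * same algorithm and the values match; anything else means "copy".
   */
  static boolean canSkipCopy(FileSystem srcFs, Path src,
                             FileSystem dstFs, Path dst) throws IOException {
    FileChecksum srcSum = srcFs.getFileChecksum(src);
    FileChecksum dstSum = dstFs.getFileChecksum(dst);
    if (srcSum == null || dstSum == null) {
      return false;    // store exposes no checksum: copy unconditionally
    }
    // etags are only comparable like-for-like, so require the same algorithm
    return srcSum.getAlgorithmName().equals(dstSum.getAlgorithmName())
        && srcSum.equals(dstSum);
  }
}
{code}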

> S3 blob etags to be made visible in status/getFileChecksum() calls
> --
>
> Key: HADOOP-13282
> URL: https://issues.apache.org/jira/browse/HADOOP-13282
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13282-001.patch, HADOOP-13282-002.patch
>
>
> If the etags of blobs were exported via {{getFileChecksum()}}, it'd be 
> possible to probe for a blob being in sync with a local file. Distcp could 
> use this to decide whether to skip a file or not.
> Now, there's a problem there: distcp needs source and dest filesystems to 
> implement the same algorithm. It'd only work out the box if you were copying 
> between S3 instances. There are also quirks with encryption and multipart: 
> [s3 
> docs|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html].
>  At the very least, it's something which could be used when indexing the FS, 
> to check for changes later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt

2017-11-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261003#comment-16261003
 ] 

Hudson commented on HADOOP-15046:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13263 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13263/])
HADOOP-15046. Document Apache Hadoop does not support Java 9 in BUILDING.txt 
(aajisaka: rev 0ed44f25653ad2d97e2726140a7f77a555c40471)
* (edit) BUILDING.txt


> Document Apache Hadoop does not support Java 9 in BUILDING.txt
> --
>
> Key: HADOOP-15046
> URL: https://issues.apache.org/jira/browse/HADOOP-15046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
>  Labels: newbie
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HADOOP-15046-branch-2.001.patch, HADOOP-15046.001.patch, 
> HADOOP-15046.001.patch
>
>
> Now the Java version is documented as "JDK 1.8+" or "JDK 1.7+"; we should 
> update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2017-11-21 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260965#comment-16260965
 ] 

Zoltan Haindrich commented on HADOOP-12760:
---

Is there any chance this will be available in Hadoop 3.0?

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch
>
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner
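
A minimal sketch of the usual dual-path workaround for this kind of JDK move: 
use reflection so the code still compiles on JDK 8 while picking up the JDK 9 
replacement (Unsafe.invokeCleaner) at runtime. This is an illustration, not 
the committed patch:

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

final class BufferCleaner {
  /** Free a direct buffer's native memory on both JDK 8 and JDK 9+. */
  static void clean(ByteBuffer buffer) {
    if (!buffer.isDirect()) {
      return;
    }
    try {
      // JDK 9+: sun.misc.Unsafe.invokeCleaner(ByteBuffer) replaces the
      // old sun.misc.Cleaner path.
      Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
      Field f = unsafeClass.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      Object unsafe = f.get(null);
      Method invokeCleaner =
          unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
      invokeCleaner.invoke(unsafe, buffer);
    } catch (NoSuchMethodException jdk8) {
      try {
        // JDK 8: DirectByteBuffer.cleaner().clean() via reflection.
        Method cleanerMethod = buffer.getClass().getMethod("cleaner");
        cleanerMethod.setAccessible(true);
        Object cleaner = cleanerMethod.invoke(buffer);
        cleaner.getClass().getMethod("clean").invoke(cleaner);
      } catch (ReflectiveOperationException ignored) {
        // fall through: GC will release the memory eventually
      }
    } catch (ReflectiveOperationException ignored) {
      // fall through: GC will release the memory eventually
    }
  }
}
{code}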



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt

2017-11-21 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15046:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.1
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.0. Thanks [~hanishakoneru]!

> Document Apache Hadoop does not support Java 9 in BUILDING.txt
> --
>
> Key: HADOOP-15046
> URL: https://issues.apache.org/jira/browse/HADOOP-15046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
>  Labels: newbie
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HADOOP-15046-branch-2.001.patch, HADOOP-15046.001.patch, 
> HADOOP-15046.001.patch
>
>
> Now the Java version is documented as "JDK 1.8+" or "JDK 1.7+"; we should 
> update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13282) S3 blob etags to be made visible in status/getFileChecksum() calls

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13282:

Status: Open  (was: Patch Available)

> S3 blob etags to be made visible in status/getFileChecksum() calls
> --
>
> Key: HADOOP-13282
> URL: https://issues.apache.org/jira/browse/HADOOP-13282
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13282-001.patch
>
>
> If the etags of blobs were exported via {{getFileChecksum()}}, it'd be 
> possible to probe for a blob being in sync with a local file. Distcp could 
> use this to decide whether to skip a file or not.
> Now, there's a problem there: distcp needs source and dest filesystems to 
> implement the same algorithm. It'd only work out the box if you were copying 
> between S3 instances. There are also quirks with encryption and multipart: 
> [s3 
> docs|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html].
>  At the very least, it's something which could be used when indexing the FS, 
> to check for changes later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260786#comment-16260786
 ] 

Hadoop QA commented on HADOOP-15046:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15046 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898664/HADOOP-15046.001.patch
 |
| Optional Tests |  asflicense  |
| uname | Linux 926ece73259d 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 659e85e |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 302 (vs. ulimit of 5000) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13727/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document Apache Hadoop does not support Java 9 in BUILDING.txt
> --
>
> Key: HADOOP-15046
> URL: https://issues.apache.org/jira/browse/HADOOP-15046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
>  Labels: newbie
> Attachments: HADOOP-15046-branch-2.001.patch, HADOOP-15046.001.patch, 
> HADOOP-15046.001.patch
>
>
> Now the Java version is documented as "JDK 1.8+" or "JDK 1.7+"; we should 
> update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt

2017-11-21 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15046:
---
Attachment: HADOOP-15046.001.patch

Attaching the same patch again to run the precommit job against trunk.

> Document Apache Hadoop does not support Java 9 in BUILDING.txt
> --
>
> Key: HADOOP-15046
> URL: https://issues.apache.org/jira/browse/HADOOP-15046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
>  Labels: newbie
> Attachments: HADOOP-15046-branch-2.001.patch, HADOOP-15046.001.patch, 
> HADOOP-15046.001.patch
>
>
> Now the Java version is documented as "JDK 1.8+" or "JDK 1.7+"; we should 
> update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260737#comment-16260737
 ] 

Hadoop QA commented on HADOOP-15046:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-15046 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15046 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898248/HADOOP-15046-branch-2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13726/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document Apache Hadoop does not support Java 9 in BUILDING.txt
> --
>
> Key: HADOOP-15046
> URL: https://issues.apache.org/jira/browse/HADOOP-15046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
>  Labels: newbie
> Attachments: HADOOP-15046-branch-2.001.patch, HADOOP-15046.001.patch
>
>
> Now the Java version is documented as "JDK 1.8+" or "JDK 1.7+"; we should 
> update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt

2017-11-21 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15046:
---
Status: Patch Available  (was: Open)

> Document Apache Hadoop does not support Java 9 in BUILDING.txt
> --
>
> Key: HADOOP-15046
> URL: https://issues.apache.org/jira/browse/HADOOP-15046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
>  Labels: newbie
> Attachments: HADOOP-15046-branch-2.001.patch, HADOOP-15046.001.patch
>
>
> Now the Java version is documented as "JDK 1.8+" or "JDK 1.7+"; we should 
> update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt

2017-11-21 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260726#comment-16260726
 ] 

Akira Ajisaka commented on HADOOP-15046:


+1 pending Jenkins for the 3.x patch.

> Document Apache Hadoop does not support Java 9 in BUILDING.txt
> --
>
> Key: HADOOP-15046
> URL: https://issues.apache.org/jira/browse/HADOOP-15046
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Hanisha Koneru
>  Labels: newbie
> Attachments: HADOOP-15046-branch-2.001.patch, HADOOP-15046.001.patch
>
>
> Now the Java version is documented as "JDK 1.8+" or "JDK 1.7+"; we should 
> update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260718#comment-16260718
 ] 

Hadoop QA commented on HADOOP-15003:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 10m  
5s{color} | {color:red} Docker failed to build yetus/hadoop:5b98639. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15003 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898657/HADOOP-13786-053.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13725/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Merge S3A committers into trunk: Yetus patch checker
> 
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, 
> HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, 
> HADOOP-13786-046.patch, HADOOP-13786-047.patch, HADOOP-13786-048.patch, 
> HADOOP-13786-049.patch, HADOOP-13786-050.patch, HADOOP-13786-051.patch, 
> HADOOP-13786-052.patch, HADOOP-13786-053.patch, HADOOP-15033-testfix-1.diff
>
>
> This is a Yetus only JIRA created to have Yetus review the 
> HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR 
> [https://github.com/apache/hadoop/pull/282] is stopping this happening in 
> HADOOP-14971.
> Reviews should go into the PR/other task



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15003:

Status: Patch Available  (was: Open)

> Merge S3A committers into trunk: Yetus patch checker
> 
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, 
> HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, 
> HADOOP-13786-046.patch, HADOOP-13786-047.patch, HADOOP-13786-048.patch, 
> HADOOP-13786-049.patch, HADOOP-13786-050.patch, HADOOP-13786-051.patch, 
> HADOOP-13786-052.patch, HADOOP-13786-053.patch, HADOOP-15033-testfix-1.diff
>
>
> This is a Yetus only JIRA created to have Yetus review the 
> HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR 
> [https://github.com/apache/hadoop/pull/282] is stopping this happening in 
> HADOOP-14971.
> Reviews should go into the PR/other task



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15003:

Attachment: HADOOP-13786-053.patch

Patch 053: fixes the TestTasks checkstyle issues and the tabs in committers.md.

Tested: S3 Ireland.

I'm pretty much done here.

> Merge S3A committers into trunk: Yetus patch checker
> 
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, 
> HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, 
> HADOOP-13786-046.patch, HADOOP-13786-047.patch, HADOOP-13786-048.patch, 
> HADOOP-13786-049.patch, HADOOP-13786-050.patch, HADOOP-13786-051.patch, 
> HADOOP-13786-052.patch, HADOOP-13786-053.patch, HADOOP-15033-testfix-1.diff
>
>
> This is a Yetus only JIRA created to have Yetus review the 
> HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR 
> [https://github.com/apache/hadoop/pull/282] is stopping this happening in 
> HADOOP-14971.
> Reviews should go into the PR/other task



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15003:

Status: Open  (was: Patch Available)

> Merge S3A committers into trunk: Yetus patch checker
> 
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, 
> HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, 
> HADOOP-13786-046.patch, HADOOP-13786-047.patch, HADOOP-13786-048.patch, 
> HADOOP-13786-049.patch, HADOOP-13786-050.patch, HADOOP-13786-051.patch, 
> HADOOP-13786-052.patch, HADOOP-15033-testfix-1.diff
>
>
> This is a Yetus only JIRA created to have Yetus review the 
> HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR 
> [https://github.com/apache/hadoop/pull/282] is stopping this happening in 
> HADOOP-14971.
> Reviews should go into the PR/other task



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15024) AliyunOSS: support user agent configuration and include that & Hadoop version information to oss server

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15024:

   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

+1 and applied to trunk. If you want to get it into branch-3, supply a patch 
which applies cleanly there, test it, and attach it to this issue (oh, and 
reopen the issue).

> AliyunOSS: support user agent configuration and include that & Hadoop version 
> information to oss server
> ---
>
> Key: HADOOP-15024
> URL: https://issues.apache.org/jira/browse/HADOOP-15024
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0
>Reporter: SammiChen
>Assignee: SammiChen
> Fix For: 3.1.0
>
> Attachments: HADOOP-15024.000.patch, HADOOP-15024.001.patch, 
> HADOOP-15024.002.patch
>
>
> Provide oss client side Hadoop version to oss server, to help build access 
> statistic metrics. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker

2017-11-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260620#comment-16260620
 ] 

Steve Loughran commented on HADOOP-15003:
-

The tabs are in a console-output sample in committers.md at L693; will fix.

> Merge S3A committers into trunk: Yetus patch checker
> 
>
> Key: HADOOP-15003
> URL: https://issues.apache.org/jira/browse/HADOOP-15003
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, 
> HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, 
> HADOOP-13786-046.patch, HADOOP-13786-047.patch, HADOOP-13786-048.patch, 
> HADOOP-13786-049.patch, HADOOP-13786-050.patch, HADOOP-13786-051.patch, 
> HADOOP-13786-052.patch, HADOOP-15033-testfix-1.diff
>
>
> This is a Yetus only JIRA created to have Yetus review the 
> HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR 
> [https://github.com/apache/hadoop/pull/282] is stopping this happening in 
> HADOOP-14971.
> Reviews should go into the PR/other task



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15055) Add s3 metrics from AWS SDK to s3a metrics tracking

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15055.
-
Resolution: Duplicate

Been on the s3a todo list for a while; assigning to you.


> Add s3 metrics from AWS SDK to s3a metrics tracking
> ---
>
> Key: HADOOP-15055
> URL: https://issues.apache.org/jira/browse/HADOOP-15055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13551) hook up AwsSdkMetrics to hadoop metrics

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-13551:
---

Assignee: Sean Mackrory

> hook up AwsSdkMetrics to hadoop metrics
> ---
>
> Key: HADOOP-13551
> URL: https://issues.apache.org/jira/browse/HADOOP-13551
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Sean Mackrory
>Priority: Minor
>
> There's an API in {{com.amazonaws.metrics.AwsSdkMetrics}} to give access to 
> the internal metrics of the AWS libraries. We might want to get at those



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13282) S3 blob etags to be made visible in status/getFileChecksum() calls

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260562#comment-16260562
 ] 

Hadoop QA commented on HADOOP-13282:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 6 
new + 5 unchanged - 0 fixed = 11 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-13282 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885669/HADOOP-13282-001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ee470dcac6c3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 659e85e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13724/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13724/testReport/ |
| Max. process+thread count | 316 (vs. ulimit of 5000) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13724/console |

[jira] [Created] (HADOOP-15057) s3guard bucket-info command to include default bucket encryption info

2017-11-21 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15057:
---

 Summary: s3guard bucket-info command to include default bucket 
encryption info
 Key: HADOOP-15057
 URL: https://issues.apache.org/jira/browse/HADOOP-15057
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran
Priority: Minor


AWS S3 now has the notion of default bucket encryption 
[http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETencryption.html]

Once set, all data uploaded is automatically encrypted, without needing to set 
any client options.

We should provide that info in the s3guard bucket-info command, so you can see 
that data being uploaded really is encrypted.
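
A rough sketch of the probe the command could make. The getBucketEncryption() 
call and result shape below are assumptions about the AWS SDK (the API is new 
enough that the exact signature should be checked against the SDK release in 
use); S3 signals "no default encryption" with an HTTP 404:

{code:java}
import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.s3.AmazonS3;

final class BucketInfoProbe {
  /** Return a printable summary of the bucket's default encryption. */
  static String defaultEncryption(AmazonS3 s3, String bucket) {
    try {
      // Assumed SDK call; maps to GET /?encryption on the bucket.
      return s3.getBucketEncryption(bucket)
          .getServerSideEncryptionConfiguration()
          .getRules()
          .toString();
    } catch (AmazonServiceException e) {
      if (e.getStatusCode() == 404) {
        return "none";   // no default bucket encryption configured
      }
      throw e;
    }
  }
}
{code}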



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11202) SequenceFile crashes with client-side encrypted files that are shorter than FileSystem.getStatus(path)

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11202:

Summary: SequenceFile crashes with client-side encrypted files that are 
shorter than FileSystem.getStatus(path)  (was: SequenceFile crashes with 
encrypted files that are shorter than FileSystem.getStatus(path))

> SequenceFile crashes with client-side encrypted files that are shorter than 
> FileSystem.getStatus(path)
> --
>
> Key: HADOOP-11202
> URL: https://issues.apache.org/jira/browse/HADOOP-11202
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.2.0
> Environment: Amazon EMR 3.0.4
>Reporter: Corby Wilson
>
> Encrypted files are often padded to allow for proper encryption on a 2^n-bit 
> boundary.  As a result, the encrypted file might be a few bytes bigger than 
> the unencrypted file.
> We have a case where an encrypted files is 2 bytes bigger due to padding.
> When we run a HIVE job on the file to get a record count (select count(*) 
> from ) it runs org.apache.hadoop.mapred.SequenceFileRecordReader and 
> loads the file in through a custom FS InputStream.
> The InputStream decrypts the file  as it gets read in.  Splits are properly 
> handled as it extends both Seekable and Positioned Readable.
> When the org.apache.hadoop.io.SequenceFile class intializes it reads in the 
> file size from the FileMetadata which returns the file size of the encrypted 
> file on disk (or in this case in S3).
> However, the actual file size is 2 bytes less, so the InputStream will return 
> EOF (-1) before the SequenceFile thinks it's done.
> As a result, the SequenceFile$Reader tried to run the next->readRecordLength 
> after the file has been closed and we get a crash.
> The SequenceFile class SHOULD, instead, pay attention to the EOF marker from 
> the stream instead of the file size reported in the metadata and set the 
> 'more' flag accordingly.
> Sample stack dump from crash
> 2014-10-10 21:25:27,160 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.io.IOException: java.io.IOException: 
> java.io.EOFException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:304)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:220)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:433)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:344)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:162)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:157)
> Caused by: java.io.IOException: java.io.EOFException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:276)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:302)
>   ... 11 more
> Caused by: java.io.EOFException
>   at java.io.DataInputStream.readInt(DataInputStream.java:392)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.readRecordLength(SequenceFile.java:2332)
>   at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2363)
>   at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2500)
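
A minimal sketch of the EOF-aware behaviour the report asks for, with a 
hypothetical method shape: derive "no more records" from the stream itself 
rather than the metadata length, so a clean early EOF ends iteration instead 
of crashing:

{code:java}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

final class RecordReading {
  /**
   * Read the next record length, returning -1 on a clean EOF instead of
   * propagating EOFException when the stream is shorter than the file
   * size reported by metadata.
   */
  static int nextRecordLength(DataInputStream in) throws IOException {
    try {
      return in.readInt();
    } catch (EOFException eof) {
      return -1;   // stream ended early: treat as end of records
    }
  }
}
{code}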

[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260516#comment-16260516
 ] 

Steve Loughran commented on HADOOP-14475:
-

I think we'd probably be better off with the hostname than the full fsURI. The 
URI will include the scheme and may, if we don't sanitise it properly, include 
user:password secrets. Using the bucket only will keep things more consistent 
with other code.

That said: what does wasb do here?
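
On the sanitisation point, a quick illustration with plain java.net.URI and a 
made-up example value of why the raw URI is risky and host-only naming is 
safer:

{code:java}
import java.net.URI;

final class MetricsNaming {
  public static void main(String[] args) {
    // Hypothetical fsURI carrying inline credentials (deprecated, but possible):
    URI fsUri = URI.create("s3a://ACCESSKEY:SECRETKEY@my-bucket/data");

    System.out.println(fsUri);           // leaks "ACCESSKEY:SECRETKEY"
    System.out.println(fsUri.getHost()); // just "my-bucket": safe to publish
  }
}
{code}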

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14775.007.patch, 
> failsafe-report-s3a-it.html, failsafe-report-s3a-scale.html, 
> failsafe-report-scale.html, failsafe-report-scale.zip, s3a-metrics.patch1, 
> stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the out put file even i run a MR job which should be used s3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12949) Add HTrace to the s3a connector

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-12949:
---

Assignee: Madhawa Gunasekara

> Add HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Madhawa Gunasekara
>Assignee: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature, Please shed some light on this 
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14475:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14831

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14775.007.patch, 
> failsafe-report-s3a-it.html, failsafe-report-s3a-scale.html, 
> failsafe-report-scale.html, failsafe-report-scale.zip, s3a-metrics.patch1, 
> stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the out put file even i run a MR job which should be used s3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12949) Add HTrace to the s3a connector

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-12949:
---

Assignee: (was: Madhawa Gunasekara)
Target Version/s: 3.1.0

> Add HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature, Please shed some light on this 
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12949) Add HTrace to the s3a connector

2017-11-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260508#comment-16260508
 ] 

Steve Loughran commented on HADOOP-12949:
-

Revisiting this:

* Yes, it would be good.
* Let's not worry about UA headers initially; that can come in a later iteration.
* More important: linking across jobs in long-lived processes, e.g. Spark and 
Hive LLAP. We want those tools to create a context, have it propagate with 
their queries, and have the store clients pick it up.

Making this a subtask of the S3A phase IV work, targeting Hadoop 3.1.

Patches welcome!
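
For anyone picking this up, a skeletal sketch of the span wrapping involved, 
using the htrace-core4 API; the tracer name and the operation wrapped are 
illustrative, and the real work is wiring the caller's trace context through:

{code:java}
import org.apache.htrace.core.HTraceConfiguration;
import org.apache.htrace.core.TraceScope;
import org.apache.htrace.core.Tracer;

final class TracingSketch {
  // Hypothetical: a tracer owned by the filesystem instance; a real
  // integration would feed it sampler/receiver settings from the FS conf.
  private final Tracer tracer =
      new Tracer.Builder("S3AFileSystem").conf(HTraceConfiguration.EMPTY).build();

  void tracedGetFileStatus() {
    try (TraceScope scope = tracer.newScope("S3A.getFileStatus")) {
      // Issue the HEAD request against the store here; the scope attaches
      // it to whatever trace context the caller has already established.
    }
  }
}
{code}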

> Add HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Madhawa Gunasekara
>Assignee: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature, Please shed some light on this 
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12949) Add HTrace to the s3a connector

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12949:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-14831

> Add HTrace to the s3a connector
> ---
>
> Key: HADOOP-12949
> URL: https://issues.apache.org/jira/browse/HADOOP-12949
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Madhawa Gunasekara
>Assignee: Madhawa Gunasekara
>
> Hi All, 
> s3, GCS, WASB, and other cloud blob stores are becoming increasingly 
> important in Hadoop. But we don't have distributed tracing for these yet. It 
> would be interesting to add distributed tracing here. It would enable 
> collecting really interesting data like probability distributions of PUT and 
> GET requests to s3 and their impact on MR jobs, etc.
> I would like to implement this feature, Please shed some light on this 
> Thanks,
> Madhawa



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13022) S3 MD5 check fails on Server Side Encryption-KMS with AWS and default key is used

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13022.
-
Resolution: Cannot Reproduce

closing as cannot reproduce; upgrading the SDK appears to have made it go away

> S3 MD5 check fails on Server Side Encryption-KMS with AWS and default key is 
> used
> -
>
> Key: HADOOP-13022
> URL: https://issues.apache.org/jira/browse/HADOOP-13022
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Leonardo Contreras
>Priority: Minor
>
> When server side encryption with "aws:kms" value and no custom key is used in 
> S3A Filesystem, the AWSClient fails when verifing Md5:
> {noformat}
> Exception in thread "main" com.amazonaws.AmazonClientException: Unable to 
> verify integrity of data upload.  Client calculated content hash (contentMD5: 
> 1B2M2Y8AsgTpgAmY7PhCfg== in base 64) didn't match hash (etag: 
> c29fcc646e17c348bce9cca8f9d205f5 in hex) calculated by Amazon S3.  You may 
> need to delete the data stored in Amazon S3. (metadata.contentMD5: null, 
> md5DigestStream: 
> com.amazonaws.services.s3.internal.MD5DigestCalculatingInputStream@65d9e72a, 
> bucketName: abuse-messages-nonprod, key: 
> venus/raw_events/checkpoint/825eb6aa-543d-46b1-801f-42de9dbc1610/)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1492)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:1295)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:1272)
>   at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:969)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1888)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2077)
>   at 
> org.apache.spark.SparkContext$$anonfun$setCheckpointDir$2.apply(SparkContext.scala:2074)
>   at scala.Option.map(Option.scala:145)
>   at 
> org.apache.spark.SparkContext.setCheckpointDir(SparkContext.scala:2074)
>   at 
> org.apache.spark.streaming.StreamingContext.checkpoint(StreamingContext.scala:237)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14832) Listing s3a bucket without credentials gives Interrupted error

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14832:

Parent Issue: HADOOP-14831  (was: HADOOP-13204)

> Listing s3a bucket without credentials gives Interrupted error
> --
>
> Key: HADOOP-14832
> URL: https://issues.apache.org/jira/browse/HADOOP-14832
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: John Zhuge
>Priority: Minor
>
> In trunk pseudo distributed mode, without setting s3a credentials, listing an 
> s3a bucket only gives "Interrupted" error :
> {noformat}
> $ hadoop fs -ls s3a://bucket/
> ls: Interrupted
> {noformat}
> In comparison, branch-2 gives a much better error message:
> {noformat}
> (branch-2)$ hadoop_env hadoop fs -ls s3a://bucket/
> ls: doesBucketExist on hdfs-cce: com.amazonaws.AmazonClientException: No AWS 
> Credentials provided by BasicAWSCredentialsProvider 
> EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : 
> com.amazonaws.SdkClientException: Unable to load credentials from service 
> endpoint
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14943) Add common getFileBlockLocations() emulation for object stores, including S3A

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14943:

Parent Issue: HADOOP-14831  (was: HADOOP-13204)

> Add common getFileBlockLocations() emulation for object stores, including S3A
> -
>
> Key: HADOOP-14943
> URL: https://issues.apache.org/jira/browse/HADOOP-14943
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14943-001.patch, HADOOP-14943-002.patch, 
> HADOOP-14943-002.patch
>
>
> It looks suspiciously like S3A isn't providing the partitioning data needed 
> in {{listLocatedStatus}} and {{getFileBlockLocations()}} needed to break up a 
> file by the blocksize. This will stop tools using the MRv1 APIS doing the 
> partitioning properly if the input format isn't doing it own split logic.
> FileInputFormat in MRv2 is a bit more configurable about input split 
> calculation & will split up large files. but otherwise, the partitioning is 
> being done more by the default values of the executing engine, rather than 
> any config data from the filesystem about what its "block size" is,
> NativeAzureFS does a better job; maybe that could be factored out to 
> hadoop-common and reused?
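
A compressed sketch of the kind of emulation being suggested, in the spirit of 
the NativeAzureFS approach: carve the file length into synthetic blocks on a 
placeholder host. The class, method, and host names are illustrative 
assumptions, not the eventual patch:

{code:java}
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;

final class SyntheticBlocks {
  /** Sketch: carve a file into fake "blocks" so MR can split it. */
  static BlockLocation[] fakeBlockLocations(FileStatus stat, long blockSize) {
    long len = stat.getLen();
    int count = (int) Math.max(1, (len + blockSize - 1) / blockSize);
    BlockLocation[] locations = new BlockLocation[count];
    String[] names = {"localhost:9866"};   // placeholder datanode address
    String[] hosts = {"localhost"};        // placeholder host
    for (int i = 0; i < count; i++) {
      long offset = i * blockSize;
      long length = Math.min(blockSize, Math.max(0, len - offset));
      locations[i] = new BlockLocation(names, hosts, offset, length);
    }
    return locations;
  }
}
{code}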



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13282) S3 blob etags to be made visible in status/getFileChecksum() calls

2017-11-21 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13282:

Parent Issue: HADOOP-14831  (was: HADOOP-13204)

> S3 blob etags to be made visible in status/getFileChecksum() calls
> --
>
> Key: HADOOP-13282
> URL: https://issues.apache.org/jira/browse/HADOOP-13282
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13282-001.patch
>
>
> If the etags of blobs were exported via {{getFileChecksum()}}, it'd be 
> possible to probe for a blob being in sync with a local file. Distcp could 
> use this to decide whether to skip a file or not.
> Now, there's a problem there: distcp needs source and dest filesystems to 
> implement the same algorithm. It'd only work out the box if you were copying 
> between S3 instances. There are also quirks with encryption and multipart: 
> [s3 
> docs|http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html].
>  At the very least, it's something which could be used when indexing the FS, 
> to check for changes later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-21 Thread Yonger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260436#comment-16260436
 ] 

Yonger commented on HADOOP-14475:
-

[~mackrorysd] Thanks for refining the code; it looks pretty good to me.
The only thing I want to discuss with you:
From the latest code,

{code:java}
+String msName = METRICS_SOURCE_BASENAME + number;
+metricsSourceName = msName + "-" + name.getHost();
+this.recordName = metricsSourceName; 
{code}
I think we don't need to add name.getHost() to the record name, because each 
record already has a field "fsURI" that includes the host name/bucket name.




> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14775.007.patch, 
> failsafe-report-s3a-it.html, failsafe-report-s3a-scale.html, 
> failsafe-report-scale.html, failsafe-report-scale.zip, s3a-metrics.patch1, 
> stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job that should use S3.






[jira] [Commented] (HADOOP-14818) Can not show help message of namenode/datanode/nodemanager when process started.

2017-11-21 Thread Jack Bearden (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16260426#comment-16260426
 ] 

Jack Bearden commented on HADOOP-14818:
---

Hello! :)  
[~GergelyNovak], I applied your patch and it does indeed now produce help 
output for hdfs daemon commands while they are running. Nice!

While this appears to fix the issue, what would be the impact on our users? I 
can see a few potential side-effects:
# Changing the exit code from 1 to 0 may break existing users' health checks
# Automation relying on the text output of this call (e.g. checking the 
process ID of the namenode) will no longer work
# Help output may be misleading if the daemon commands for the namenode are 
expected to be executed only when the daemon is not running


> Can not show help message of namenode/datanode/nodemanager when process 
> started.
> 
>
> Key: HADOOP-14818
> URL: https://issues.apache.org/jira/browse/HADOOP-14818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: bin
>Affects Versions: 3.0.0-beta1
>Reporter: Wenxin He
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: HADOOP-14818.001.patch
>
>
> We should always get the help message, whether the process is started or not.
> But now,
> when datanode starts, we get an error message:
> {noformat}
> hadoop# bin/hdfs datanode -h
> datanode is running as process 1701.  Stop it first.
> {noformat}
> when datanode stops, we get what we want:
> {noformat}
> hadoop# bin/hdfs --daemon stop datanode
> hadoop# bin/hdfs datanode -h
> Usage: hdfs datanode [-regular | -rollback | -rollingupgrade rollback ]
> -regular : Normal DataNode startup (default).
> ...
> {noformat}






[jira] [Commented] (HADOOP-15033) Use java.util.zip.CRC32C for Java 9 and above

2017-11-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16260405#comment-16260405
 ] 

Hadoop QA commented on HADOOP-15033:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15033 |
| GITHUB PR | https://github.com/apache/hadoop/pull/291 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 6b3d5200af99 3.13.0-135-generic #184-Ubuntu SMP Wed 

[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2 and 2.7+ branches

2017-11-21 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16260403#comment-16260403
 ] 

Chris Douglas commented on HADOOP-14964:


bq. Yep, that's for the same thing, the one meant in this effort.
Sorry, I was trying to ask whether we needed upstream changes, or whether we 
could just apply them here. Either way, whatever's simplest to maintain/test.

bq. I think the latest OSS SDK uses some advanced features of the newer 
httpclient version [...] I think the shade approach is good because, in the 
long run, the OSS SDK has the freedom to use its own version of the 
httpclient library without worrying about conflicts with anything outside.
Got it. Until Java 9 that may be easier, particularly if the SDK uses more 
recent versions of Hadoop dependencies.

> AliyunOSS: backport Aliyun OSS module to branch-2 and 2.7+ branches
> ---
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch
>
>







[jira] [Updated] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-11-21 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14600:
---
Target Version/s: 3.1.0
  Status: Patch Available  (was: Open)

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> TestRawLocalFileSystemContract.java
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: what's a field lookup or even a no-op on every other FS is a 
> process exec/spawn on the local FS, with all the costs that implies. This 
> gets expensive if you have many files.
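>
> One plausible direction (a sketch only, not any of the attached patches): 
> load the permissions through java.nio instead of forking a process per file:
> {code:java}
> // Sketch: read owner/group/permissions via NIO rather than exec'ing a
> // shell command for each file. Assumes a POSIX local filesystem.
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Paths;
> import java.nio.file.attribute.PosixFileAttributes;
>
> public class NioPermissionsSketch {
>   public static void main(String[] args) throws IOException {
>     PosixFileAttributes attrs = Files.readAttributes(
>         Paths.get(args[0]), PosixFileAttributes.class);
>     System.out.println(attrs.owner().getName() + " "
>         + attrs.group().getName() + " " + attrs.permissions());
>   }
> }
> {code}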






[jira] [Updated] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-11-21 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14600:
---
Status: Open  (was: Patch Available)

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> TestRawLocalFileSystemContract.java
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: what's a field lookup or even a no-op on every other FS is a 
> process exec/spawn on the local FS, with all the costs that implies. This 
> gets expensive if you have many files.






[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2 and 2.7+ branches

2017-11-21 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16260388#comment-16260388
 ] 

Kai Zheng commented on HADOOP-14964:


bq. Could it also apply to the hadoop-oss module?
Yep, that's for the same thing, the one meant in this effort.

bq. Sure, but what changed between these versions?
I haven't had the chance to look at the code closely, but the changes are all 
introduced by the httpclient version upgrade. If the hadoop-aliyun module is 
launched with a different httpclient version from the one the OSS SDK 
expects, it doesn't work in our tests. I think the latest OSS SDK uses some 
advanced features of the newer httpclient version.

I think the shade approach is good because, in the long run, the OSS SDK has 
the freedom to use its own version of the httpclient library without worrying 
about conflicts with anything outside.
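
For reference, the shade approach would look roughly like the following 
maven-shade-plugin relocation (a sketch only; the shaded package name is 
illustrative, and whether this lives in the SDK's own build or in 
hadoop-aliyun is exactly the open question here):
{code:xml}
<!-- Relocate httpclient classes inside the shaded jar so they cannot
     clash with whatever httpclient version Hadoop itself ships. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.http</pattern>
            <shadedPattern>com.aliyun.oss.shaded.org.apache.http</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}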

> AliyunOSS: backport Aliyun OSS module to branch-2 and 2.7+ branches
> ---
>
> Key: HADOOP-14964
> URL: https://issues.apache.org/jira/browse/HADOOP-14964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Reporter: Genmao Yu
>Assignee: SammiChen
> Attachments: HADOOP-14964-branch-2.000.patch, 
> HADOOP-14964-branch-2.8.000.patch, HADOOP-14964-branch-2.8.001.patch
>
>







[jira] [Commented] (HADOOP-14600) LocatedFileStatus constructor forces RawLocalFS to exec a process to get the permissions

2017-11-21 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16260386#comment-16260386
 ] 

Chris Douglas commented on HADOOP-14600:


bq. The only question I have is the number of spaces for indentation [...] Oh, 
as I just read the Oracle/Sun code convention, it says indentation should be 
four spaces
That exception is buried in the contribution 
[docs|https://wiki.apache.org/hadoop/HowToContribute#line-67]. Hadoop uses 2 
spaces instead of 4.

Something went horribly awry with Jenkins. Have you been able to test v008 
locally?

> LocatedFileStatus constructor forces RawLocalFS to exec a process to get the 
> permissions
> 
>
> Key: HADOOP-14600
> URL: https://issues.apache.org/jira/browse/HADOOP-14600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3
> Environment: file:// in a dir with many files
>Reporter: Steve Loughran
>Assignee: Ping Liu
> Attachments: HADOOP-14600.001.patch, HADOOP-14600.002.patch, 
> HADOOP-14600.003.patch, HADOOP-14600.004.patch, HADOOP-14600.005.patch, 
> HADOOP-14600.006.patch, HADOOP-14600.007.patch, HADOOP-14600.008.patch, 
> TestRawLocalFileSystemContract.java
>
>
> Reported in SPARK-21137. A {{FileSystem.listStatus}} call really crawls 
> against the local FS, because the {{FileStatus.getPermission}} call forces 
> {{DeprecatedRawLocalFileStatus}} to spawn a process to read the real UGI 
> values.
> That is: what's a field lookup or even a no-op on every other FS is a 
> process exec/spawn on the local FS, with all the costs that implies. This 
> gets expensive if you have many files.


