[jira] [Updated] (HADOOP-17333) MetricsRecordFiltered error

2020-10-27 Thread minchengbo (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

minchengbo updated HADOOP-17333:

Description: 
 Got a sink exception when datanode.sink.ganglia.metric.filter.exclude=metricssystem is set in hadoop-metrics2.properties:

java.lang.ClassCastException: 
org.apache.hadoop.metrics2.impl.MetricsRecordFiltered$1 cannot be cast to 
java.util.Collection
 at 
org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30.putMetrics(GangliaSink30.java:165)
 at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
 at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
 at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
 at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:135)
 at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:89)


//
This test case reproduces the exception:

public static void main(String[] args) {
  List<AbstractMetric> metricsd = new LinkedList<>();
  MetricsInfo info = MsInfo.ProcessName;
  long timestamp = System.currentTimeMillis();
  List<MetricsTag> tags = new LinkedList<>();
  org.apache.hadoop.metrics2.impl.MetricsRecordImpl recordimp =
      new MetricsRecordImpl(info, timestamp, tags, metricsd);
  MetricsFilter filter = new RegexFilter();
  MetricsRecordFiltered recordfilter =
      new MetricsRecordFiltered(recordimp, filter);
  SubsetConfiguration conf =
      new SubsetConfiguration(new PropertyListConfiguration(), "test");
  conf.addProperty(AbstractGangliaSink.SUPPORT_SPARSE_METRICS_PROPERTY, true);
  GangliaSink30 gangliaSink = new GangliaSink30();
  gangliaSink.init(conf);
  gangliaSink.putMetrics(recordfilter); // throws ClassCastException
}

///
The root cause is:
 MetricsRecordFiltered.metrics() returns a lazily filtered Iterable (an anonymous class, not a Collection) in MetricsRecordFiltered.java:

  @Override public Iterable<AbstractMetric> metrics() {
    return new Iterable<AbstractMetric>() {
      final Iterator<AbstractMetric> it = delegate.metrics().iterator();
      @Override public Iterator<AbstractMetric> iterator() {
        return new AbstractIterator<AbstractMetric>() {
          @Override public AbstractMetric computeNext() {
            while (it.hasNext()) {
              AbstractMetric next = it.next();
              if (filter.accepts(next.name())) {
                return next;
              }
            }
            return endOfData();
          }
        };
      }
    };
  }

but GangliaSink30.java (line 164) casts the result to a Collection:

  Collection<AbstractMetric> metrics =
      (Collection<AbstractMetric>) record.metrics();
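The mismatch above can be reproduced outside Hadoop with a minimal sketch (class and method names here are illustrative, not Hadoop's): a lazily built Iterable cannot be cast to Collection, but it can safely be iterated or copied into one.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class IterableCastDemo {
    // Mimics MetricsRecordFiltered.metrics(): a lazy, filtered Iterable
    // whose concrete class implements only Iterable, never Collection.
    static Iterable<String> filteredMetrics(List<String> source) {
        return () -> source.stream()
                .filter(s -> !s.startsWith("metricssystem"))
                .iterator();
    }

    public static void main(String[] args) {
        Iterable<String> metrics =
                filteredMetrics(Arrays.asList("jvm.mem", "metricssystem.x"));

        // The GangliaSink30 pattern: the cast fails at runtime.
        boolean castFailed = false;
        try {
            Collection<String> c = (Collection<String>) metrics;
        } catch (ClassCastException e) {
            castFailed = true;
        }

        // The safe pattern: iterate (or copy into a collection first).
        List<String> copied = new ArrayList<>();
        for (String m : metrics) {
            copied.add(m);
        }
        System.out.println(castFailed + " " + copied); // prints: true [jvm.mem]
    }
}
```

A sink that only needs to loop over the metrics can use the for-each form directly and avoid depending on the runtime type of the record at all.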




[jira] [Created] (HADOOP-17333) MetricsRecordFiltered error

2020-10-27 Thread minchengbo (Jira)
minchengbo created HADOOP-17333:
---

 Summary: MetricsRecordFiltered error
 Key: HADOOP-17333
 URL: https://issues.apache.org/jira/browse/HADOOP-17333
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.2.1
 Environment: This test case reproduces the exception:
public static void main(String[] args) {
  List<AbstractMetric> metricsd = new LinkedList<>();
  MetricsInfo info = MsInfo.ProcessName;
  long timestamp = System.currentTimeMillis();
  List<MetricsTag> tags = new LinkedList<>();
  org.apache.hadoop.metrics2.impl.MetricsRecordImpl recordimp =
      new MetricsRecordImpl(info, timestamp, tags, metricsd);
  MetricsFilter filter = new RegexFilter();
  MetricsRecordFiltered recordfilter =
      new MetricsRecordFiltered(recordimp, filter);
  SubsetConfiguration conf =
      new SubsetConfiguration(new PropertyListConfiguration(), "test");
  conf.addProperty(AbstractGangliaSink.SUPPORT_SPARSE_METRICS_PROPERTY, true);
  GangliaSink30 gangliaSink = new GangliaSink30();
  gangliaSink.init(conf);
  gangliaSink.putMetrics(recordfilter); // throws ClassCastException
}

///
The root cause is:
 MetricsRecordFiltered.metrics() returns a lazily filtered Iterable (an anonymous class, not a Collection) in MetricsRecordFiltered.java:

  @Override public Iterable<AbstractMetric> metrics() {
    return new Iterable<AbstractMetric>() {
      final Iterator<AbstractMetric> it = delegate.metrics().iterator();
      @Override public Iterator<AbstractMetric> iterator() {
        return new AbstractIterator<AbstractMetric>() {
          @Override public AbstractMetric computeNext() {
            while (it.hasNext()) {
              AbstractMetric next = it.next();
              if (filter.accepts(next.name())) {
                return next;
              }
            }
            return endOfData();
          }
        };
      }
    };
  }

but GangliaSink30.java (line 164) casts the result to a Collection:

  Collection<AbstractMetric> metrics =
      (Collection<AbstractMetric>) record.metrics();


Reporter: minchengbo


 Got a sink exception when datanode.sink.ganglia.metric.filter.exclude=metricssystem is set in hadoop-metrics2.properties:

java.lang.ClassCastException: 
org.apache.hadoop.metrics2.impl.MetricsRecordFiltered$1 cannot be cast to 
java.util.Collection
 at 
org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30.putMetrics(GangliaSink30.java:165)
 at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
 at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
 at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
 at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:135)
 at 
org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:89)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on pull request #2418: HDFS-15657. TestRouter#testNamenodeHeartBeatEnableDefault fails by BindException

2020-10-27 Thread GitBox


tasanuma commented on pull request #2418:
URL: https://github.com/apache/hadoop/pull/2418#issuecomment-717705429


   @aajisaka Thanks for the PR.
   We may need to handle the checkstyle issue.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] snvijaya commented on pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size

2020-10-27 Thread GitBox


snvijaya commented on pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#issuecomment-717702309


   @mukund-thakur - I would appreciate it if you could confirm your review status on this PR. I have provided updates for your comments.






[GitHub] [hadoop] hadoop-yetus commented on pull request #2418: HDFS-15657. TestRouter#testNamenodeHeartBeatEnableDefault fails by BindException

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2418:
URL: https://github.com/apache/hadoop/pull/2418#issuecomment-717691521


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 21s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 18s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2418/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 14s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 34s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2418/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2418 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux abb30b097b29 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d0c786db4de |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2418/1/testReport/ |
   | Max. process+thread count | 2764 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2418/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] amahussein opened a new pull request #2419: HDFS-15654. TestBPOfferService#testMissBlocksWhenReregister fails intermittently

2020-10-27 Thread GitBox


amahussein opened a new pull request #2419:
URL: https://github.com/apache/hadoop/pull/2419


   …ermittently
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   






[GitHub] [hadoop] hadoop-yetus commented on pull request #2408: HDFS-15643. TestFileChecksumCompositeCrc fails intermittently.

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2408:
URL: https://github.com/apache/hadoop/pull/2408#issuecomment-717681661


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 10s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  8s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 18s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 114m 44s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 206m 56s |  |  |
   
   
   | Reason | Tests |
   |---------:|:--------------|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetCache |
   |   | hadoop.hdfs.TestMultipleNNPortQOP |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2408 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1055ba63ccdd 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae74407ac43 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2408/2/testReport/ |
   | Max. process+thread count | 2711 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[GitHub] [hadoop] aajisaka opened a new pull request #2418: HDFS-15657. TestRouter#testNamenodeHeartBeatEnableDefault fails by BindException

2020-10-27 Thread GitBox


aajisaka opened a new pull request #2418:
URL: https://github.com/apache/hadoop/pull/2418


   - Use any available port to avoid BindException (main fix)
   - Use try-with-resources for Router
   - According to the javadoc, this test case verifies the default behavior, so don't set the default value (true) explicitly.
   
   JIRA: https://issues.apache.org/jira/browse/HDFS-15657
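The first fix item (use any available port) can be sketched as a generic helper; this is an illustrative class, not the actual Router test code. Binding to port 0 asks the OS for a free ephemeral port, which avoids BindException when a hard-coded port is already taken.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    // Bind to port 0 so the OS assigns a free ephemeral port; closing
    // the probe socket frees the port for the component under test.
    public static int findFreePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }
}
```

Note the small race window: the port could be grabbed by another process between closing the probe socket and rebinding, so letting the component itself bind to port 0 (as the PR does) is the more robust variant.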






[GitHub] [hadoop] vinayakumarb commented on a change in pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-27 Thread GitBox


vinayakumarb commented on a change in pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#discussion_r513143130



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java
##
@@ -89,7 +89,8 @@ public static boolean supports(final LayoutFeature f, final 
int lv) {
 APPEND_NEW_BLOCK(-62, -61, "Support appending to new block"),
 QUOTA_BY_STORAGE_TYPE(-63, -61, "Support quota for specific storage 
types"),
 ERASURE_CODING(-64, -61, "Support erasure coding"),
-EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in 
fsimage");
+EXPANDED_STRING_TABLE(-65, -61, "Support expanded string table in 
fsimage"),
+NVDIMM_SUPPORT(-66, -66, "Support NVDIMM storage type");

Review comment:
   Is there any reason to change the `minCompatLV`  to `-66`?








[GitHub] [hadoop] amahussein commented on a change in pull request #2408: HDFS-15643. TestFileChecksumCompositeCrc fails intermittently.

2020-10-27 Thread GitBox


amahussein commented on a change in pull request #2408:
URL: https://github.com/apache/hadoop/pull/2408#discussion_r513142714



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
##
@@ -575,6 +596,8 @@ private FileChecksum getFileChecksum(String filePath, int 
range,
   dnIdxToDie = getDataNodeToKill(filePath);
   DataNode dnToDie = cluster.getDataNodes().get(dnIdxToDie);
   shutdownDataNode(dnToDie);
+  // wait enough time for the locations to be updated.
+  Thread.sleep(STALE_INTERVAL);

Review comment:
   Yes, I agree with you @goiri .
   I experimented with waiting for the number of live replicas: it did not work. The count stayed at 8 and did not go back to 9.
   Do you have a suggestion for what conditions we should wait for?








[GitHub] [hadoop] goiri commented on a change in pull request #2408: HDFS-15643. TestFileChecksumCompositeCrc fails intermittently.

2020-10-27 Thread GitBox


goiri commented on a change in pull request #2408:
URL: https://github.com/apache/hadoop/pull/2408#discussion_r513134020



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
##
@@ -575,6 +596,8 @@ private FileChecksum getFileChecksum(String filePath, int 
range,
   dnIdxToDie = getDataNodeToKill(filePath);
   DataNode dnToDie = cluster.getDataNodes().get(dnIdxToDie);
   shutdownDataNode(dnToDie);
+  // wait enough time for the locations to be updated.
+  Thread.sleep(STALE_INTERVAL);

Review comment:
   Should we waitFor instead?
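The suggested pattern, polling a condition with a timeout instead of a fixed sleep, can be sketched generically. This is a hypothetical helper for illustration, not Hadoop's GenericTestUtils.waitFor.

```java
import java.util.function.BooleanSupplier;

public class WaitForDemo {
    // Poll a condition until it holds or the timeout elapses, instead of
    // sleeping for a fixed interval; returns whether the condition held.
    public static boolean waitFor(BooleanSupplier condition,
                                  long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean();
    }
}
```

Compared to a bare Thread.sleep(STALE_INTERVAL), this returns as soon as the state converges and makes the failure mode (timeout) explicit rather than racing the clock.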








[GitHub] [hadoop] goiri merged pull request #2406: HDFS-15460. TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently.

2020-10-27 Thread GitBox


goiri merged pull request #2406:
URL: https://github.com/apache/hadoop/pull/2406


   






[GitHub] [hadoop] goiri commented on a change in pull request #2406: HDFS-15460. TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently.

2020-10-27 Thread GitBox


goiri commented on a change in pull request #2406:
URL: https://github.com/apache/hadoop/pull/2406#discussion_r513133542



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
##
@@ -273,10 +272,17 @@ public void testServerDefaultsWithMinimalCaching()
   defaults.getDefaultStoragePolicyId());
   doReturn(newDefaults).when(spyNamesystem).getServerDefaults();
 
-  Thread.sleep(1);
-  defaults = dfsClient.getServerDefaults();
-  // Value is updated correctly
-  assertEquals(updatedDefaultBlockSize, defaults.getBlockSize());
+  // Verify that the value is updated correctly
+  GenericTestUtils.waitFor(()->{
+try {
+  FsServerDefaults currDef = dfsClient.getServerDefaults();
+  return (currDef.getBlockSize() == updatedDefaultBlockSize);
+} catch (IOException e) {
+  // do nothing;
+  return false;
+}
+  }, 1, 1000);

Review comment:
   This is changed so I assume @aajisaka is fine with this.








[GitHub] [hadoop] goiri merged pull request #2407: HDFS-15457. TestFsDatasetImpl fails intermittently.

2020-10-27 Thread GitBox


goiri merged pull request #2407:
URL: https://github.com/apache/hadoop/pull/2407


   






[GitHub] [hadoop] ferhui merged pull request #2416: HDFS-15652. Make block size from NNThroughputBenchmark configurable

2020-10-27 Thread GitBox


ferhui merged pull request #2416:
URL: https://github.com/apache/hadoop/pull/2416


   






[GitHub] [hadoop] ferhui commented on pull request #2416: HDFS-15652. Make block size from NNThroughputBenchmark configurable

2020-10-27 Thread GitBox


ferhui commented on pull request #2416:
URL: https://github.com/apache/hadoop/pull/2416#issuecomment-717633502


   @jojochuang @ayushtkn Thanks for review!






[GitHub] [hadoop] amahussein commented on pull request #2407: HDFS-15457. Fix TestFsDatasetImpl readLock unit tests.

2020-10-27 Thread GitBox


amahussein commented on pull request #2407:
URL: https://github.com/apache/hadoop/pull/2407#issuecomment-717628154


   Thank you @goiri for the review.
   This fix is good to go. The failing unit tests are among the ones we are working to fix; the couple of others may be failing randomly.






[GitHub] [hadoop] amahussein commented on pull request #2406: HDFS-15460. TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently.

2020-10-27 Thread GitBox


amahussein commented on pull request #2406:
URL: https://github.com/apache/hadoop/pull/2406#issuecomment-717626798


   Thank you @goiri for your review.
   This fix is ready to go.






[GitHub] [hadoop] hadoop-yetus commented on pull request #2406: HDFS-15460. TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently.

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2406:
URL: https://github.com/apache/hadoop/pull/2406#issuecomment-717606270


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   2m 24s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 29s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 27s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 43s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 133m 12s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2406/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 235m  2s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.TestFSImage |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2406/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2406 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0efe73112a00 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae74407ac43 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2406/4/testReport/ |
   | Max. process+thread count | 3294 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HADOOP-17236) Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640

2020-10-27 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17221820#comment-17221820
 ] 

Wei-Chiu Chuang commented on HADOOP-17236:
--

+1

> Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640
> 
>
> Key: HADOOP-17236
> URL: https://issues.apache.org/jira/browse/HADOOP-17236
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-17236-001-tempToRun.patch, HADOOP-17236-001.patch
>
>
> Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (HADOOP-16948) ABFS: Support single writer dirs

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-16948:

Labels: abfsactive pull-request-available  (was: abfsactive)

> ABFS: Support single writer dirs
> 
>
> Key: HADOOP-16948
> URL: https://issues.apache.org/jira/browse/HADOOP-16948
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Minor
>  Labels: abfsactive, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This would allow some directories to be configured as single writer 
> directories. The ABFS driver would obtain a lease when creating or opening a 
> file for writing and would automatically renew the lease and release the 
> lease when closing the file.
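The lifecycle described above (acquire a lease when a file is opened for writing, renew it on a timer, release it on close) can be sketched in plain Java. This is a minimal illustration of the pattern only; the class and method names are hypothetical and are not the actual ABFS driver API.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class LeaseSketch {
    // Hypothetical lease lifecycle: acquire on open-for-write, renew on a
    // timer, release on close. In the real driver, acquire/renew/release
    // would each be a request to the storage service.
    private final ScheduledExecutorService renewer =
            Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> renewTask;
    private volatile boolean held;

    void acquire(String path) {
        held = true;  // stands in for an acquire-lease request
        // Renew periodically so the lease does not expire while writing.
        renewTask = renewer.scheduleAtFixedRate(
                () -> System.out.println("renewing lease on " + path),
                1, 1, TimeUnit.SECONDS);
    }

    void release() {
        if (renewTask != null) {
            renewTask.cancel(false);  // stop renewing before releasing
        }
        held = false;  // stands in for a release-lease request
        renewer.shutdown();
    }

    public static void main(String[] args) {
        LeaseSketch lease = new LeaseSketch();
        lease.acquire("/dir/file");
        lease.release();
        System.out.println("lease held after release: " + lease.held);
    }
}
```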






[jira] [Work logged] (HADOOP-16948) ABFS: Support single writer dirs

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?focusedWorklogId=505434&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-505434
 ]

ASF GitHub Bot logged work on HADOOP-16948:
---

Author: ASF GitHub Bot
Created on: 27/Oct/20 22:34
Start Date: 27/Oct/20 22:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925#issuecomment-717580170


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1925/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 19 new + 9 unchanged - 0 
fixed = 28 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  2s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1925/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 4 new + 0 unchanged - 0 fixed = 4 total 
(was 0)  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 31s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1925/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  73m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to op in 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.breakLease(Path)  At 
AzureBlobFileSystemStore.java:org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.breakLease(Path)
  At AzureBlobFileSystemStore.java:[line 747] |
   |  |  Dead store to op in 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.releaseLease(Path, 
String)  At 
AzureBlobFileSystemStore.java:org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.releaseLease(Path,
 String)  At AzureBlobFileSystemStore.java:[line 740] |
   |  |  Dead store to op in 

[GitHub] [hadoop] hadoop-yetus commented on pull request #1925: HADOOP-16948. Support single writer dirs.

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #1925:
URL: https://github.com/apache/hadoop/pull/1925#issuecomment-717580170


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1925/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 19 new + 9 unchanged - 0 
fixed = 28 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  2s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1925/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 4 new + 0 unchanged - 0 fixed = 4 total 
(was 0)  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 31s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1925/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  73m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Dead store to op in 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.breakLease(Path)  At 
AzureBlobFileSystemStore.java:org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.breakLease(Path)
  At AzureBlobFileSystemStore.java:[line 747] |
   |  |  Dead store to op in 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.releaseLease(Path, 
String)  At 
AzureBlobFileSystemStore.java:org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.releaseLease(Path,
 String)  At AzureBlobFileSystemStore.java:[line 740] |
   |  |  Dead store to op in 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.renewLease(Path, String) 
 At 
AzureBlobFileSystemStore.java:org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.renewLease(Path,
 String)  At AzureBlobFileSystemStore.java:[line 733] |
   |  |  new 
org.apache.hadoop.fs.azurebfs.services.SelfRenewingLease(AbfsClient, Path) 
invokes Thread.start()  At SelfRenewingLease.java: At 
SelfRenewingLease.java:[line 117] |
   | Failed junit tests | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2399: HADOOP-17318. Support concurrent S3A commit jobs with same app attempt ID.

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-717513018


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 10s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 22s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 11s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 46s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/2/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 8 new + 30 unchanged - 1 fixed = 38 total (was 
31)  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/2/artifact/out/whitespace-eol.txt)
 |  The patch has 5 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 34s | 
[/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/2/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 6 new + 
88 unchanged - 0 fixed = 94 total (was 88)  |
   | +1 :green_heart: |  findbugs  |   3m 41s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   9m 42s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   1m 36s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 189m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ha.TestZKFailoverController |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2399 |
   | 

[GitHub] [hadoop] amahussein commented on a change in pull request #2406: HDFS-15460. TestFileCreation#testServerDefaultsWithMinimalCaching fails intermittently.

2020-10-27 Thread GitBox


amahussein commented on a change in pull request #2406:
URL: https://github.com/apache/hadoop/pull/2406#discussion_r512987849



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
##
@@ -273,10 +272,17 @@ public void testServerDefaultsWithMinimalCaching()
   defaults.getDefaultStoragePolicyId());
   doReturn(newDefaults).when(spyNamesystem).getServerDefaults();
 
-  Thread.sleep(1);
-  defaults = dfsClient.getServerDefaults();
-  // Value is updated correctly
-  assertEquals(updatedDefaultBlockSize, defaults.getBlockSize());
+  // Verify that the value is updated correctly
+  GenericTestUtils.waitFor(()->{
+try {
+  FsServerDefaults currDef = dfsClient.getServerDefaults();
+  return (currDef.getBlockSize() == updatedDefaultBlockSize);
+} catch (IOException e) {
+  // do nothing;
+  return false;
+}
+  }, 1, 1000);

Review comment:
   Thank you @aajisaka .. I updated the PR setting the waitTime to 3 
seconds.
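The change in the diff above replaces a fixed `Thread.sleep` with polling via `GenericTestUtils.waitFor`, which retries a condition until it holds or a deadline passes. The stand-in below sketches that polling pattern in self-contained Java; it mimics the shape of Hadoop's helper but is not the actual `GenericTestUtils` implementation.

```java
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

class WaitForSketch {
    // Minimal stand-in for a waitFor helper: poll `check` every
    // `checkEveryMillis` until it returns true, or fail with a
    // TimeoutException once `waitForMillis` has elapsed.
    static void waitFor(Supplier<Boolean> check, long checkEveryMillis,
                        long waitForMillis)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (!check.get()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException(
                        "condition not met in " + waitForMillis + " ms");
            }
            Thread.sleep(checkEveryMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50 ms; poll every 10 ms,
        // give up after 1 second.
        waitFor(() -> System.currentTimeMillis() - start >= 50, 10, 1000);
        System.out.println("condition met");
    }
}
```

Polling with a deadline tolerates timing jitter (the cause of the intermittent failure), whereas a fixed sleep either wastes time or races the update.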








[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=505357&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-505357
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 27/Oct/20 18:21
Start Date: 27/Oct/20 18:21
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-717435068


   Also, we should use java.io.UncheckedIOException for the wrapper class for an 
IOE. That's new in Java 8, but as it's public API there, it's what we should 
adopt. This is good, even if it's a bit more work
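The wrapping the comment above recommends can be shown in a few lines. This is a generic illustration of the `java.io.UncheckedIOException` idiom, not the actual IOStatistics wrapper code; `readConfig` is a made-up operation that throws a checked IOException.

```java
import java.io.IOException;
import java.io.UncheckedIOException;

class UncheckedIOExample {
    // A hypothetical operation that throws a checked IOException.
    static String readConfig() throws IOException {
        throw new IOException("config not found");
    }

    // Wrap the checked IOException so the call can be used where checked
    // exceptions are awkward (lambdas, streams). Callers can still catch
    // UncheckedIOException and recover the original via getCause().
    static String readConfigUnchecked() {
        try {
            return readConfig();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        try {
            readConfigUnchecked();
        } catch (UncheckedIOException e) {
            System.out.println("unwrapped: " + e.getCause().getMessage());
        }
    }
}
```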





Issue Time Tracking
---

Worklog Id: (was: 505357)
Time Spent: 8h 20m  (was: 8h 10m)

> Add public IOStatistics API
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala  can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of
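The cross-thread aggregation problem described above (ThreadLocal counters don't sum correctly when helper threads do the work) can be sketched with shared concurrent counters. This is an illustration of one aggregation approach under that constraint, not the actual IOStatistics interface proposed in HADOOP-16830; the class and key names are invented.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

class IoStatsSketch {
    // Shared statistics holder: worker threads update common LongAdders,
    // so counts aggregate correctly across threads, unlike per-thread
    // ThreadLocal counters which only see their own thread's work.
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    void increment(String key, long n) {
        counters.computeIfAbsent(key, k -> new LongAdder()).add(n);
    }

    long value(String key) {
        LongAdder a = counters.get(key);
        return a == null ? 0 : a.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        IoStatsSketch stats = new IoStatsSketch();
        // Two worker threads update the same statistic concurrently.
        Thread t1 = new Thread(() -> stats.increment("bytes_read", 100));
        Thread t2 = new Thread(() -> stats.increment("bytes_read", 200));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("bytes_read=" + stats.value("bytes_read"));
    }
}
```

`LongAdder` keeps per-cell counters internally to reduce contention, which suits high-frequency statistics updates better than a single `AtomicLong`.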









[jira] [Updated] (HADOOP-17332) S3A marker tool mixes up -min and -max

2020-10-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17332:

Description: 
HADOOP-17227 manages to get -min and -max mixed up through the call chain. 

{code}
hadoop s3guard markers -audit -max 2000  s3a://stevel-london/
{code}
leads to
{code}
2020-10-27 18:11:44,434 [main] DEBUG s3guard.S3GuardTool 
(S3GuardTool.java:main(2154)) - Exception raised
46: Marker count 0 out of range [2000 - 0]
at 
org.apache.hadoop.fs.s3a.tools.MarkerTool$ScanResult.finish(MarkerTool.java:489)
at org.apache.hadoop.fs.s3a.tools.MarkerTool.run(MarkerTool.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:505)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:2134)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:2146)
2020-10-27 18:11:44,436 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) - Exiting with status 46: 46: Marker count 0 out 
of range [2000 - 0]
{code}

Trivial fix.

  was:
HADOOP-17727 manages to get -min and -max mixed up through the call chain,. 

{code}
hadoop s3guard markers -audit -max 2000  s3a://stevel-london/
{code}
leads to
{code}
2020-10-27 18:11:44,434 [main] DEBUG s3guard.S3GuardTool 
(S3GuardTool.java:main(2154)) - Exception raised
46: Marker count 0 out of range [2000 - 0]
at 
org.apache.hadoop.fs.s3a.tools.MarkerTool$ScanResult.finish(MarkerTool.java:489)
at org.apache.hadoop.fs.s3a.tools.MarkerTool.run(MarkerTool.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:505)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:2134)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:2146)
2020-10-27 18:11:44,436 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) - Exiting with status 46: 46: Marker count 0 out 
of range [2000 - 0]
{code}

Trivial fix.


> S3A marker tool mixes up -min and -max
> --
>
> Key: HADOOP-17332
> URL: https://issues.apache.org/jira/browse/HADOOP-17332
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
>
> HADOOP-17227 manages to get -min and -max mixed up through the call chain. 
> {code}
> hadoop s3guard markers -audit -max 2000  s3a://stevel-london/
> {code}
> leads to
> {code}
> 2020-10-27 18:11:44,434 [main] DEBUG s3guard.S3GuardTool 
> (S3GuardTool.java:main(2154)) - Exception raised
> 46: Marker count 0 out of range [2000 - 0]
>   at 
> org.apache.hadoop.fs.s3a.tools.MarkerTool$ScanResult.finish(MarkerTool.java:489)
>   at org.apache.hadoop.fs.s3a.tools.MarkerTool.run(MarkerTool.java:318)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:505)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:2134)
>   at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:2146)
> 2020-10-27 18:11:44,436 [main] INFO  util.ExitUtil 
> (ExitUtil.java:terminate(210)) - Exiting with status 46: 46: Marker count 0 
> out of range [2000 - 0]
> {code}
> Trivial fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-17332) S3A marker tool mixes up -min and -max

2020-10-27 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17332:
---

 Summary: S3A marker tool mixes up -min and -max
 Key: HADOOP-17332
 URL: https://issues.apache.org/jira/browse/HADOOP-17332
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.1
Reporter: Steve Loughran
Assignee: Steve Loughran


HADOOP-17227 manages to get -min and -max mixed up through the call chain. 

{code}
hadoop s3guard markers -audit -max 2000  s3a://stevel-london/
{code}
leads to
{code}
2020-10-27 18:11:44,434 [main] DEBUG s3guard.S3GuardTool 
(S3GuardTool.java:main(2154)) - Exception raised
46: Marker count 0 out of range [2000 - 0]
at 
org.apache.hadoop.fs.s3a.tools.MarkerTool$ScanResult.finish(MarkerTool.java:489)
at org.apache.hadoop.fs.s3a.tools.MarkerTool.run(MarkerTool.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:505)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:2134)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:2146)
2020-10-27 18:11:44,436 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) - Exiting with status 46: 46: Marker count 0 out 
of range [2000 - 0]
{code}

Trivial fix.




-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?focusedWorklogId=505327&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-505327
 ]

ASF GitHub Bot logged work on HADOOP-17318:
---

Author: ASF GitHub Bot
Created on: 27/Oct/20 17:16
Start Date: 27/Oct/20 17:16
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-717395438


   @dongjoon-hyun thanks...doing a bit more on this as the more tests I write, 
the more corner cases surface. Think I'm in control now.





Issue Time Tracking
---

Worklog Id: (was: 505327)
Time Spent: 1h 20m  (was: 1h 10m)

> S3A committer to support concurrent jobs with same app attempt ID & dest dir
> 
>
> Key: HADOOP-17318
> URL: https://issues.apache.org/jira/browse/HADOOP-17318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Reported failure of magic committer block uploads as pending upload ID is 
> unknown. Likely cause: it's been aborted by another job
> # Make it possible to turn off cleanup of pending uploads in magic committer
> # log more about uploads being deleted in committers
> # and upload ID in the S3aBlockOutputStream errors
> There are other concurrency issues when you look close, see SPARK-33230
> * magic committer uses app attempt ID as path under __magic; if there are 
> duplicates then they will conflict
> * staging committer local temp dir uses app attempt id
> Fix will be to have a job UUID which for spark will be picked up from the 
> SPARK-33230 changes, (option to self-generate in job setup for hadoop 3.3.1+ 
> older spark builds); fall back to app-attempt *unless that fallback has been 
> disabled*
> MR: configure to use app attempt ID
> Spark: configure to fail job setup if app attempt ID is the source of a job 
> uuid







[GitHub] [hadoop] steveloughran commented on pull request #2399: HADOOP-17318. Support concurrent S3A commit jobs slightly better.

2020-10-27 Thread GitBox


steveloughran commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-717395438


   @dongjoon-hyun thanks...doing a bit more on this as the more tests I write, 
the more corner cases surface. Think I'm in control now.









[jira] [Updated] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-10-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17318:

Description: 
Reported failure of magic committer block uploads as pending upload ID is 
unknown. Likely cause: it's been aborted by another job

# Make it possible to turn off cleanup of pending uploads in magic committer
# log more about uploads being deleted in committers
# and upload ID in the S3aBlockOutputStream errors

There are other concurrency issues when you look close, see SPARK-33230

* magic committer uses app attempt ID as path under __magic; if there are 
duplicates then they will conflict
* staging committer local temp dir uses app attempt id

Fix will be to have a job UUID which for spark will be picked up from the 
SPARK-33230 changes, (option to self-generate in job setup for hadoop 3.3.1+ 
older spark builds); fall back to app-attempt *unless that fallback has been 
disabled*

MR: configure to use app attempt ID
Spark: configure to fail job setup if app attempt ID is the source of a job uuid
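The fallback order above could be sketched as follows; the class, method, and parameter names are hypothetical, not the committed implementation:

```java
import java.util.Optional;
import java.util.UUID;

public class JobUuidPolicy {
    /**
     * Pick a unique job ID: prefer a Spark-supplied UUID (SPARK-33230),
     * else self-generate if allowed, else fall back to the app attempt ID
     * unless that fallback has been disabled.
     */
    static String chooseJobUuid(Optional<String> sparkUuid,
                                boolean selfGenerate,
                                String appAttemptId,
                                boolean appAttemptFallbackEnabled) {
        if (sparkUuid.isPresent()) {
            return sparkUuid.get();
        }
        if (selfGenerate) {
            // Hadoop 3.3.1+ with older Spark builds: generate in job setup.
            return UUID.randomUUID().toString();
        }
        if (appAttemptFallbackEnabled) {
            // Not unique across concurrent jobs sharing an app attempt ID.
            return appAttemptId;
        }
        throw new IllegalStateException("no unique job UUID available");
    }
}
```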

  was:
Reported failure of magic committer block uploads as pending upload ID is 
unknown. Likely cause: it's been aborted by another job

# Make it possible to turn off cleanup of pending uploads in magic committer
# log more about uploads being deleted in committers
# and upload ID in the S3aBlockOutputStream errors


> S3A committer to support concurrent jobs with same app attempt ID & dest dir
> 
>
> Key: HADOOP-17318
> URL: https://issues.apache.org/jira/browse/HADOOP-17318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Reported failure of magic committer block uploads as pending upload ID is 
> unknown. Likely cause: it's been aborted by another job
> # Make it possible to turn off cleanup of pending uploads in magic committer
> # log more about uploads being deleted in committers
> # and upload ID in the S3aBlockOutputStream errors
> There are other concurrency issues when you look close, see SPARK-33230
> * magic committer uses app attempt ID as path under __magic; if there are 
> duplicates then they will conflict
> * staging committer local temp dir uses app attempt id
> Fix will be to have a job UUID which for spark will be picked up from the 
> SPARK-33230 changes, (option to self-generate in job setup for hadoop 3.3.1+ 
> older spark builds); fall back to app-attempt *unless that fallback has been 
> disabled*
> MR: configure to use app attempt ID
> Spark: configure to fail job setup if app attempt ID is the source of a job 
> uuid







[jira] [Work started] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-10-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17318 started by Steve Loughran.
---
> S3A committer to support concurrent jobs with same app attempt ID & dest dir
> 
>
> Key: HADOOP-17318
> URL: https://issues.apache.org/jira/browse/HADOOP-17318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Reported failure of magic committer block uploads as pending upload ID is 
> unknown. Likely cause: it's been aborted by another job
> # Make it possible to turn off cleanup of pending uploads in magic committer
> # log more about uploads being deleted in committers
> # and upload ID in the S3aBlockOutputStream errors







[jira] [Updated] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-10-27 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17318:

Summary: S3A committer to support concurrent jobs with same app attempt ID 
& dest dir  (was: S3A Magic committer to make cleanup of pending uploads 
optional)

> S3A committer to support concurrent jobs with same app attempt ID & dest dir
> 
>
> Key: HADOOP-17318
> URL: https://issues.apache.org/jira/browse/HADOOP-17318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Reported failure of magic committer block uploads as pending upload ID is 
> unknown. Likely cause: it's been aborted by another job
> # Make it possible to turn off cleanup of pending uploads in magic committer
> # log more about uploads being deleted in committers
> # and upload ID in the S3aBlockOutputStream errors







[jira] [Commented] (HADOOP-17329) mvn site fails due to MetricsSystemImpl class

2020-10-27 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17221541#comment-17221541
 ] 

Sunil G commented on HADOOP-17329:
--

+1

Committing shortly. Thanks [~hexiaoqiao]

> mvn site fails due to MetricsSystemImpl class
> -
>
> Key: HADOOP-17329
> URL: https://issues.apache.org/jira/browse/HADOOP-17329
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HADOOP-17329.001.patch, HADOOP-17329.002.patch
>
>
> When preparing the branch-3.2.2 release, I found one issue while creating 
> the release. It also exists in trunk.
> command line: mvn install site site:stage -DskipTests -DskipShade -Pdist,src 
> -Preleasedocs,docs
> failed log show as the following: 
> {quote}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.6:site (default-site) on project 
> hadoop-common: failed to get report for 
> org.apache.maven.plugins:maven-dependency-plugin: Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-common: Compilation failure
> [ERROR] 
> /Users/hexiaoqiao/Source/hadoop-common/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java:[298,5]
>  method register(java.lang.String,java.lang.String,T) is already defined 
> in class org.apache.hadoop.metrics2.impl.MetricsSystemImpl{quote}
> I am not sure why the source code of class MetricsSystemImpl is changed 
> while building; after reverting HADOOP-17081, everything seems OK.







[GitHub] [hadoop] amahussein commented on pull request #2408: HDFS-15643. TestFileChecksumCompositeCrc is flaky

2020-10-27 Thread GitBox


amahussein commented on pull request #2408:
URL: https://github.com/apache/hadoop/pull/2408#issuecomment-717278256


   > hadoop.hdfs.TestFileChecksumCompositeCrc and hadoop.hdfs.TestFileChecksum 
are still failing, are the failures related to the patch?
   
   Thank you Akira.
   It is not due to the patch. It probably needs some configuration to set 
the timeouts of the pending reconstruction blocks. I will investigate the stack 
trace and see if I can fix it.
   
   ```bash
   

org.apache.hadoop.hdfs.TestFileChecksumCompositeCrc.testStripedFileChecksumWithMissedDataBlocksRangeQuery15
Error Details
   
   `/striped/stripedFileChecksum1': Fail to get block checksum for 
LocatedStripedBlock{BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775792_1001;
 getBlockSize()=37748736; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:41259,DS-211baa4b-9658-4792-9eb9-971650316b65,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:42903,DS-c7b78dc0-e09e-4368-bc48-7e2d148acb2f,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:35419,DS-4697e810-71cb-44b4-a4f0-e8c105a0d30b,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:39685,DS-c210f999-d67c-490d-b675-e286b0abd6ee,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:44039,DS-51a33914-b87e-43ef-ae14-13caa38ff319,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:39239,DS-e9764209-9d9c-4f2f-843c-183cb415a2ec,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:35165,DS-9b8fb76f-5d5f-412e-b148-c482489342d9,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:44431,DS-dc88f6f1-a3e5-407e-84a7-6e455bb6fdd1,DISK]];
 indices=[0, 1, 2, 3, 4, 6, 7, 8]}
Stack Trace
   
   org.apache.hadoop.fs.PathIOException: `/striped/stripedFileChecksum1': Fail 
to get block checksum for 
LocatedStripedBlock{BP-1233294053-172.17.0.2-1603492866371:blk_-9223372036854775792_1001;
 getBlockSize()=37748736; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:41259,DS-211baa4b-9658-4792-9eb9-971650316b65,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:42903,DS-c7b78dc0-e09e-4368-bc48-7e2d148acb2f,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:35419,DS-4697e810-71cb-44b4-a4f0-e8c105a0d30b,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:39685,DS-c210f999-d67c-490d-b675-e286b0abd6ee,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:44039,DS-51a33914-b87e-43ef-ae14-13caa38ff319,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:39239,DS-e9764209-9d9c-4f2f-843c-183cb415a2ec,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:35165,DS-9b8fb76f-5d5f-412e-b148-c482489342d9,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:44431,DS-dc88f6f1-a3e5-407e-84a7-6e455bb6fdd1,DISK]];
 indices=[0, 1, 2, 3, 4, 6, 7, 8]}
at 
org.apache.hadoop.hdfs.FileChecksumHelper$StripedFileNonStripedChecksumComputer.checksumBlocks(FileChecksumHelper.java:640)
at 
org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:252)
at 
org.apache.hadoop.hdfs.DFSClient.getFileChecksumInternal(DFSClient.java:1851)
at 
org.apache.hadoop.hdfs.DFSClient.getFileChecksumWithCombineMode(DFSClient.java:1871)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$34.doCall(DistributedFileSystem.java:1903)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$34.doCall(DistributedFileSystem.java:1900)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1917)
at 
org.apache.hadoop.hdfs.TestFileChecksum.getFileChecksum(TestFileChecksum.java:592)
at 
org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery(TestFileChecksum.java:306)
at 
org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery15(TestFileChecksum.java:476)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Standard Output
   
   2020-10-23 22:41:06,342 [Listener at localhost/37463] INFO  
hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(529)) - starting cluster: 

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=505240&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-505240
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 27/Oct/20 14:11
Start Date: 27/Oct/20 14:11
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417#issuecomment-717271366


   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 64
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 245
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16





Issue Time Tracking
---

Worklog Id: (was: 505240)
Time Spent: 7h 20m  (was: 7h 10m)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and without hierarchical namespace support using various 
> authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations ex: HNS account with SharedKey and OAuth, 
> NonHNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer to run the integration tests with different variants 
> of configurations automatically. 
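A sketch of how such automation might enumerate the configuration matrix; the class and method names are hypothetical, while the account/auth values come from the description:

```java
import java.util.ArrayList;
import java.util.List;

public class CombinationRunner {
    // Enumerate account type x auth mechanism combinations the tests
    // should be run against (SAS = Shared Access Signature).
    static List<String> combinations() {
        List<String> combos = new ArrayList<>();
        for (String account : List.of("HNS", "NonHNS")) {
            for (String auth : List.of("OAuth", "SharedKey", "SAS")) {
                combos.add(account + "-" + auth);
            }
        }
        return combos;
    }

    public static void main(String[] args) {
        // Each entry would map to one full integration-test run,
        // e.g. "HNS-OAuth", "HNS-SharedKey", "NonHNS-SharedKey", ...
        combinations().forEach(System.out::println);
    }
}
```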







[GitHub] [hadoop] bilaharith commented on pull request #2417: HADOOP-17191. ABFS: Run the tests with various combinations of configurations and publish a consolidated results

2020-10-27 Thread GitBox


bilaharith commented on pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417#issuecomment-717271366


   **Driver test results using accounts in Central India**
   mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 64
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 457, Failures: 0, Errors: 0, Skipped: 245
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 16









[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=505226&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-505226
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 27/Oct/20 13:52
Start Date: 27/Oct/20 13:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417#issuecomment-717258351


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 14s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 29s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 103m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2417 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint xml |
   | uname | Linux 98e08ebb010d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae74407ac43 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/1/testReport/ |
   | Max. process+thread count | 310 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Issue Time Tracking
---

Worklog Id: (was: 505226)
Time Spent: 7h 10m  (was: 7h)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2417: HADOOP-17191. ABFS: Run the tests with various combinations of configurations and publish a consolidated results

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417#issuecomment-717258351


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  33m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 14s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 29s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 103m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2417 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
markdownlint xml |
   | uname | Linux 98e08ebb010d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae74407ac43 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/1/testReport/ |
   | Max. process+thread count | 310 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2417/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2390: YARN-10442. RM should make sure node label file highly available.

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2390:
URL: https://github.com/apache/hadoop/pull/2390#issuecomment-717229695


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------:|:--------:|:---:|
   | +0 :ok: |  reexec  |   0m 28s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  10m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   7m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 57s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   8m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   7m 34s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 48s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   6m 10s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  6s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 17s |  |  hadoop-yarn-common in the patch 
passed.  |
   | -1 :x: |  unit  |  90m  2s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2390/10/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 227m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2390/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2390 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux aba518bb37d8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / afaab3d3325 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish a consolidated results

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=505176=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-505176
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 27/Oct/20 12:07
Start Date: 27/Oct/20 12:07
Worklog Time Spent: 10m 
  Work Description: bilaharith opened a new pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417


   ADLS Gen 2 supports accounts with and without hierarchical namespace 
support. ABFS driver supports various authorization mechanisms like OAuth, 
SharedKey, Shared Access Signature. The integration tests need to be executed 
against accounts with and without hierarchical namespace support using various 
authorization mechanisms.
   Currently the developer has to manually run the tests with different 
combinations of configurations.
   The expectation is to automate these runs with different combinations.
   The PR introduces a shell script with which the developer can specify the 
configuration variants and get different combinations of tests executed.




Issue Time Tracking
---

Worklog Id: (was: 505176)
Time Spent: 7h  (was: 6h 50m)

> ABFS: Run the integration tests with various combinations of configurations 
> and publish a consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace support. 
> ABFS driver supports various authorization mechanisms like OAuth, SharedKey, 
> Shared Access Signature. The integration tests need to be executed against 
> accounts with and without hierarchical namespace support using various 
> authorization mechanisms.
> Currently the developer has to manually run the tests with different 
> combinations of configurations, e.g. an HNS account with SharedKey and OAuth, 
> a non-HNS account with SharedKey, etc.
> The expectation is to automate these runs with different combinations. This 
> will help the developer to run the integration tests with different variants 
> of configurations automatically. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] bilaharith opened a new pull request #2417: HADOOP-17191. ABFS: Run the tests with various combinations of configurations and publish a consolidated results

2020-10-27 Thread GitBox


bilaharith opened a new pull request #2417:
URL: https://github.com/apache/hadoop/pull/2417


   ADLS Gen 2 supports accounts with and without hierarchical namespace 
support. ABFS driver supports various authorization mechanisms like OAuth, 
SharedKey, Shared Access Signature. The integration tests need to be executed 
against accounts with and without hierarchical namespace support using various 
authorization mechanisms.
   Currently the developer has to manually run the tests with different 
combinations of configurations.
   The expectation is to automate these runs with different combinations.
   The PR introduces a shell script with which the developer can specify the 
configuration variants and get different combinations of tests executed.






[GitHub] [hadoop] aajisaka commented on pull request #2410: HDFS-9776. TestHAAppend#testMultipleAppendsDuringCatchupTailing is flaky

2020-10-27 Thread GitBox


aajisaka commented on pull request #2410:
URL: https://github.com/apache/hadoop/pull/2410#issuecomment-717186614


   Merged. Thank you @amahussein @goiri @jojochuang 






[GitHub] [hadoop] aajisaka merged pull request #2410: HDFS-9776. TestHAAppend#testMultipleAppendsDuringCatchupTailing is flaky

2020-10-27 Thread GitBox


aajisaka merged pull request #2410:
URL: https://github.com/apache/hadoop/pull/2410


   






[jira] [Updated] (HADOOP-17331) [JDK 15] TestDNS fails by UncheckedIOException

2020-10-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17331:
---
Parent: HADOOP-17177
Issue Type: Sub-task  (was: Bug)

> [JDK 15] TestDNS fails by UncheckedIOException
> --
>
> Key: HADOOP-17331
> URL: https://issues.apache.org/jira/browse/HADOOP-17331
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> After [JDK-8235783|https://bugs.openjdk.java.net/browse/JDK-8235783], 
> DatagramSocket::connect throws UncheckedIOException if connect fails.
> {noformat}
> [INFO] Running org.apache.hadoop.net.TestDNS
> [ERROR] Tests run: 12, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 
> 0.403 s <<< FAILURE! - in org.apache.hadoop.net.TestDNS
> [ERROR] testNullDnsServer(org.apache.hadoop.net.TestDNS)  Time elapsed: 0.134 
> s  <<< ERROR!
> java.io.UncheckedIOException: java.net.SocketException: Unsupported address 
> type
>   at 
> java.base/sun.nio.ch.DatagramSocketAdaptor.connect(DatagramSocketAdaptor.java:120)
>   at java.base/java.net.DatagramSocket.connect(DatagramSocket.java:341)
> {noformat}
> Full error log: 
> https://gist.github.com/aajisaka/2a24cb2b110cc3d19f7dec6256db6844






[jira] [Created] (HADOOP-17331) [JDK 15] TestDNS fails by UncheckedIOException

2020-10-27 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17331:
--

 Summary: [JDK 15] TestDNS fails by UncheckedIOException
 Key: HADOOP-17331
 URL: https://issues.apache.org/jira/browse/HADOOP-17331
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


After [JDK-8235783|https://bugs.openjdk.java.net/browse/JDK-8235783], 
DatagramSocket::connect throws UncheckedIOException if connect fails.
{noformat}
[INFO] Running org.apache.hadoop.net.TestDNS
[ERROR] Tests run: 12, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 0.403 
s <<< FAILURE! - in org.apache.hadoop.net.TestDNS
[ERROR] testNullDnsServer(org.apache.hadoop.net.TestDNS)  Time elapsed: 0.134 s 
 <<< ERROR!
java.io.UncheckedIOException: java.net.SocketException: Unsupported address type
at 
java.base/sun.nio.ch.DatagramSocketAdaptor.connect(DatagramSocketAdaptor.java:120)
at java.base/java.net.DatagramSocket.connect(DatagramSocket.java:341)
{noformat}
Full error log: 
https://gist.github.com/aajisaka/2a24cb2b110cc3d19f7dec6256db6844
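As an illustration of the JDK behavior change described above (a hedged, 
self-contained sketch, not code from Hadoop or the linked gist): callers that 
previously caught the checked SocketException from DatagramSocket::connect may 
now need to unwrap the UncheckedIOException that wraps it. The class name 
UnwrapExample and the simulated connect method are hypothetical.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.SocketException;

public class UnwrapExample {
    // Simulates how connect() reports failure after JDK-8235783: the checked
    // SocketException arrives wrapped in an UncheckedIOException.
    static void connectLikeNewJdk() {
        throw new UncheckedIOException(
            new SocketException("Unsupported address type"));
    }

    public static void main(String[] args) {
        try {
            connectLikeNewJdk();
        } catch (UncheckedIOException e) {
            // Unwrap to recover the original checked IOException.
            IOException cause = e.getCause();
            System.out.println(cause.getMessage());
        }
    }
}
```

Tests written against the old behavior would need a catch like the one above, 
or an assertion on the wrapper type, to pass on newer JDKs.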






[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-10-27 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221312#comment-17221312
 ] 

Janus Chow commented on HADOOP-17165:
-

Yes, we are implementing the impersonation solution, but the Presto case is 
just an example of a bad use of service-user.

If all service-users could use the impersonation solution, there would be no 
need for this patch to deal with service-users.

What I was trying to say is that when a service-user cannot use the 
impersonation solution but is sending a lot of requests that are not really 
important, this patch would not improve the server's performance.

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who submit important requests. This jira 
> proposes to implement a service-user feature so that such users are always 
> scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, 
> but it was never implemented.






[jira] [Commented] (HADOOP-17165) Implement service-user feature in DecayRPCScheduler

2020-10-27 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221310#comment-17221310
 ] 

Takanobu Asanuma commented on HADOOP-17165:
---

[~Symious] Thanks for your feedback.

Presto can impersonate the end user when accessing HDFS. Please see the 
following document. It would go well with the service-user feature.
https://prestodb.io/docs/current/connector/hive-security.html#end-user-impersonation

> Implement service-user feature in DecayRPCScheduler
> ---
>
> Key: HADOOP-17165
> URL: https://issues.apache.org/jira/browse/HADOOP-17165
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HADOOP-17165.001.patch, HADOOP-17165.002.patch, 
> after.png, before.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In our cluster, we want to use FairCallQueue to limit heavy users, but we do 
> not want to restrict certain users who submit important requests. This jira 
> proposes to implement a service-user feature so that such users are always 
> scheduled into the high-priority queue.
> According to HADOOP-9640, the initial concept of FCQ included this feature, 
> but it was never implemented.






[GitHub] [hadoop] aajisaka commented on a change in pull request #2406: HDFS-15460. fix testServerDefaultsWithMinimalCaching.

2020-10-27 Thread GitBox


aajisaka commented on a change in pull request #2406:
URL: https://github.com/apache/hadoop/pull/2406#discussion_r512570552



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
##
@@ -273,10 +272,17 @@ public void testServerDefaultsWithMinimalCaching()
   defaults.getDefaultStoragePolicyId());
   doReturn(newDefaults).when(spyNamesystem).getServerDefaults();
 
-  Thread.sleep(1);
-  defaults = dfsClient.getServerDefaults();
-  // Value is updated correctly
-  assertEquals(updatedDefaultBlockSize, defaults.getBlockSize());
+  // Verify that the value is updated correctly
+  GenericTestUtils.waitFor(()->{
+try {
+  FsServerDefaults currDef = dfsClient.getServerDefaults();
+  return (currDef.getBlockSize() == updatedDefaultBlockSize);
+} catch (IOException e) {
+  // do nothing;
+  return false;
+}
+  }, 1, 1000);

Review comment:
   Thank you @amahussein. How about increasing the timeout to 2 or 3 
seconds?
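
To make the interval/timeout trade-off concrete, here is a minimal 
poll-until-true helper in the spirit of GenericTestUtils.waitFor (a hedged 
sketch: the class name PollUntil is hypothetical, and it is simplified to 
return a boolean instead of throwing on timeout as the Hadoop utility does).

```java
import java.util.function.BooleanSupplier;

public class PollUntil {
    // Re-evaluates check every intervalMs until it returns true
    // or until timeoutMs has elapsed.
    static boolean waitFor(BooleanSupplier check, long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            if (check.getAsBoolean()) {
                return true;
            }
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // A condition that becomes true after ~50 ms easily fits a 3000 ms
        // budget; a 1000 ms budget would also pass here but leaves less slack
        // on a slow CI machine, which is the motivation for a larger timeout.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start >= 50, 10, 3000);
        System.out.println(ok);
    }
}
```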








[GitHub] [hadoop] aajisaka commented on pull request #2309: HDFS-15580. [JDK 12] DFSTestUtil#addDataNodeLayoutVersion fails

2020-10-27 Thread GitBox


aajisaka commented on pull request #2309:
URL: https://github.com/apache/hadoop/pull/2309#issuecomment-717137923


   Thank you @tasanuma !






[GitHub] [hadoop] aajisaka merged pull request #2309: HDFS-15580. [JDK 12] DFSTestUtil#addDataNodeLayoutVersion fails

2020-10-27 Thread GitBox


aajisaka merged pull request #2309:
URL: https://github.com/apache/hadoop/pull/2309


   






[GitHub] [hadoop] surendralilhore edited a comment on pull request #2390: YARN-10442. RM should make sure node label file highly available.

2020-10-27 Thread GitBox


surendralilhore edited a comment on pull request #2390:
URL: https://github.com/apache/hadoop/pull/2390#issuecomment-717104684


   Thanks @bibinchundatt for Review.
   Updated patch based on your suggestion. Changed property name to 
"**yarn.fs-store.file.replication**"






[GitHub] [hadoop] surendralilhore commented on pull request #2390: YARN-10442. RM should make sure node label file highly available.

2020-10-27 Thread GitBox


surendralilhore commented on pull request #2390:
URL: https://github.com/apache/hadoop/pull/2390#issuecomment-717104684


   Thanks @bibinchundatt for Review.
   Updated patch based on your suggestion. Changed property name to 
"**yarn.fs-store.file.replication**"






[GitHub] [hadoop] hadoop-yetus commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-717063081


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------:|:--------:|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 25s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 50s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 14s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 56s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/15/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 1 new + 734 unchanged - 1 fixed = 735 total (was 
735)  |
   | +1 :green_heart: |  mvnsite  |   3m 52s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/15/artifact/out/whitespace-tabs.txt)
 |  The patch has 1 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  15m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 15s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 14s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 121m 49s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |   9m 33s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 330m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2377 |
   | Optional Tests | dupname asflicense 

[GitHub] [hadoop] aajisaka merged pull request #2404: HDFS-15461. TestDFSClientRetries testGetFileChecksum fails.

2020-10-27 Thread GitBox


aajisaka merged pull request #2404:
URL: https://github.com/apache/hadoop/pull/2404


   






[GitHub] [hadoop] hadoop-yetus commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-717056107


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------:|:--------:|:---:|
   | +0 :ok: |  reexec  |   1m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 19s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 50s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 18s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  0s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/14/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 1 new + 734 unchanged - 1 fixed = 735 total (was 
735)  |
   | +1 :green_heart: |  mvnsite  |   3m 35s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/14/artifact/out/whitespace-tabs.txt)
 |  The patch has 1 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  16m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 22s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   9m 49s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/14/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  93m 52s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/14/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  16m  6s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/14/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 58s |  |  ASF License check generated no 
output?  |
   |  |   | 318m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestRaceWhenRelogin |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.namenode.TestINodeFile |
   |   | hadoop.hdfs.server.namenode.TestCommitBlockWithInvalidGenStamp |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing |
   |   | hadoop.hdfs.TestDFSUpgrade |
   |   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | 

[jira] [Commented] (HADOOP-17329) mvn site fails due to MetricsSystemImpl class

2020-10-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221220#comment-17221220
 ] 

Hadoop QA commented on HADOOP-17329:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
3s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 17s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} blanks {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch has no blanks issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 43s{color} |  | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} |  | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 12s{color} | 
 | {color:black}{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/105/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17329 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13014174/HADOOP-17329.002.patch
 |
| Optional Tests | dupname asflicense |
| uname | Linux 92bb8d692af3 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 15a5f5367366fdd76933d0ff6499363fcbc8873e |
| Max. process+thread count | 309 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/105/console |
| versions | git=2.17.1 maven=3.6.0 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> mvn site fails due to MetricsSystemImpl class
> -
>
> Key: HADOOP-17329
> URL: https://issues.apache.org/jira/browse/HADOOP-17329
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HADOOP-17329.001.patch, HADOOP-17329.002.patch
>
>
> While preparing for the branch-3.2.2 release, I found an issue during 
> release creation. It also exists in trunk.
> Command line: mvn install site site:stage -DskipTests -DskipShade -Pdist,src 
> -Preleasedocs,docs
> The failure log shows the following: 
> {quote}[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.6:site (default-site) on project 
> hadoop-common: failed to get report for 
> org.apache.maven.plugins:maven-dependency-plugin: Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-common: Compilation failure
> [ERROR] 
> /Users/hexiaoqiao/Source/hadoop-common/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java:[298,5]
>  method register(java.lang.String,java.lang.String,T) is already defined 
> in class org.apache.hadoop.metrics2.impl.MetricsSystemImpl{quote}
> I am not sure why the source code of class MetricsSystemImpl would be changed 
> while building; after reverting HADOOP-17081, everything seems OK.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-717054021


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 23s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 19s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 48s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 57s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 58s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/13/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 734 unchanged - 1 fixed = 736 total (was 
735)  |
   | +1 :green_heart: |  mvnsite  |   3m 33s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/13/artifact/out/whitespace-tabs.txt)
 |  The patch 3 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  16m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 14s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  7s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 113m 40s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/13/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |   9m 40s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/13/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 329m 39s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2377/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2377 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 08cfbf3977f7 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 

[GitHub] [hadoop] aajisaka commented on pull request #2408: HDFS-15643. TestFileChecksumCompositeCrc is flaky

2020-10-27 Thread GitBox


aajisaka commented on pull request #2408:
URL: https://github.com/apache/hadoop/pull/2408#issuecomment-717052104


   hadoop.hdfs.TestFileChecksumCompositeCrc and hadoop.hdfs.TestFileChecksum 
are still failing. Are the failures related to the patch?
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] bibinchundatt commented on pull request #2390: YARN-10442. RM should make sure node label file highly available.

2020-10-27 Thread GitBox


bibinchundatt commented on pull request #2390:
URL: https://github.com/apache/hadoop/pull/2390#issuecomment-717041585


   - AbstractFSNodeStore is used for both the node label and the node attribute 
file system store implementations. We have to either define a separate 
configuration for each in the impl classes, or use a common configuration with 
an appropriate name shared by both.
   - For logging, use the slf4j pattern.
   
   






[GitHub] [hadoop] ferhui commented on pull request #2416: HDFS-15652. Make block size from NNThroughputBenchmark configurable

2020-10-27 Thread GitBox


ferhui commented on pull request #2416:
URL: https://github.com/apache/hadoop/pull/2416#issuecomment-717039054


   The failed tests are unrelated, and they are already tracked.
   @jojochuang  Could you please take a look again? Thanks






[GitHub] [hadoop] hadoop-yetus commented on pull request #2416: HDFS-15652. Make block size from NNThroughputBenchmark configurable

2020-10-27 Thread GitBox


hadoop-yetus commented on pull request #2416:
URL: https://github.com/apache/hadoop/pull/2416#issuecomment-717037295


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 36s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 33s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 49s | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2416/3/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 136 unchanged 
- 11 fixed = 143 total (was 147)  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  1s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  97m 58s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2416/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 198m 58s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksum |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2416/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2416 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1e61c2ce4e2f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 15a5f536736 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2416/3/testReport/ |
   | Max. process+thread count | 4031 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HADOOP-17329) mvn site fails due to MetricsSystemImpl class

2020-10-27 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221199#comment-17221199
 ] 

Xiaoqiao He commented on HADOOP-17329:
--

Thanks [~sunilg] for your reviews. v002 tries to fix the blanks.
{quote}Also you have removed a code snippet. is that safe to remove and needed 
to be removed for this issue?{quote}
It seems that `git diff` generated the wrong format. Please give v002 another 
check if you have bandwidth. Thanks.

> mvn site fails due to MetricsSystemImpl class
> -
>
> Key: HADOOP-17329
> URL: https://issues.apache.org/jira/browse/HADOOP-17329
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HADOOP-17329.001.patch, HADOOP-17329.002.patch
>
>






[jira] [Updated] (HADOOP-17329) mvn site fails due to MetricsSystemImpl class

2020-10-27 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HADOOP-17329:
-
Attachment: HADOOP-17329.002.patch

> mvn site fails due to MetricsSystemImpl class
> -
>
> Key: HADOOP-17329
> URL: https://issues.apache.org/jira/browse/HADOOP-17329
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HADOOP-17329.001.patch, HADOOP-17329.002.patch
>
>






[jira] [Commented] (HADOOP-17236) Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640

2020-10-27 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221178#comment-17221178
 ] 

Brahma Reddy Battula commented on HADOOP-17236:
---

[~xgong] waiting for review.

> Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640
> 
>
> Key: HADOOP-17236
> URL: https://issues.apache.org/jira/browse/HADOOP-17236
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-17236-001-tempToRun.patch, HADOOP-17236-001.patch
>
>
> Bump up snakeyaml to 1.26 to mitigate CVE-2017-18640






[jira] [Commented] (HADOOP-17329) mvn site fails due to MetricsSystemImpl class

2020-10-27 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221166#comment-17221166
 ] 

Sunil G commented on HADOOP-17329:
--

Also, you have removed a code snippet. Is it safe to remove, and does it need 
to be removed for this issue?

> mvn site fails due to MetricsSystemImpl class
> -
>
> Key: HADOOP-17329
> URL: https://issues.apache.org/jira/browse/HADOOP-17329
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HADOOP-17329.001.patch
>
>






[jira] [Commented] (HADOOP-17329) mvn site fails due to MetricsSystemImpl class

2020-10-27 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221164#comment-17221164
 ] 

Sunil G commented on HADOOP-17329:
--

[~hexiaoqiao] could you please attach a new patch after addressing "The patch 
has 6 line(s) that end in blanks. Use git apply --blanks=fix <>. "
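For reference, `git apply` itself has no `--blanks=fix` option; the option that fixes trailing blanks while applying a patch is `--whitespace=fix`. A minimal sketch in a throwaway directory (the file and patch names are illustrative, not from this issue):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'hello\n' > f.txt
# A patch whose added line ends in trailing blanks (spaces after "world"):
printf 'diff --git a/f.txt b/f.txt\n--- a/f.txt\n+++ b/f.txt\n@@ -1 +1,2 @@\n hello\n+world   \n' > fix.diff
git init -q .                        # git apply consults repo config for whitespace rules
git apply --whitespace=fix fix.diff  # trailing blanks are stripped on apply
tail -n1 f.txt                       # last line is "world" with no trailing spaces
```

With `--whitespace=fix`, git applies the hunk but rewrites whitespace errors (such as trailing blanks) in the added lines, which is what the Yetus blanks check is asking for.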

> mvn site fails due to MetricsSystemImpl class
> -
>
> Key: HADOOP-17329
> URL: https://issues.apache.org/jira/browse/HADOOP-17329
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HADOOP-17329.001.patch
>
>


