[jira] [Created] (HADOOP-15740) ABFS: Check variable names during initialization of AbfsClientThrottlingIntercept

2018-09-10 Thread Sneha Varma (JIRA)
Sneha Varma created HADOOP-15740:


 Summary: ABFS: Check variable names during initialization of 
AbfsClientThrottlingIntercept 
 Key: HADOOP-15740
 URL: https://issues.apache.org/jira/browse/HADOOP-15740
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sneha Varma
Assignee: Sneha Varma









[jira] [Resolved] (HADOOP-15725) FileSystem.deleteOnExit should check user permissions

2018-09-10 Thread Oleksandr Shevchenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Shevchenko resolved HADOOP-15725.
---
Resolution: Invalid

> FileSystem.deleteOnExit should check user permissions
> -
>
> Key: HADOOP-15725
> URL: https://issues.apache.org/jira/browse/HADOOP-15725
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Oleksandr Shevchenko
>Priority: Major
>  Labels: Security
> Attachments: deleteOnExitReproduce
>
>
> Currently, anyone can add any file to a FileSystem's deleteOnExit list, which 
> leads to a security problem. A user ("Intruder") can obtain a file system 
> instance that was created by another user ("Owner") and mark arbitrary files 
> for deletion, even though the "Intruder" has no access to those files. Later, 
> when the "Owner" invokes the close method (or the JVM shuts down, since a 
> ShutdownHook closes all file systems), the marked files are deleted 
> successfully, because the deletion is performed on behalf of the "Owner" 
> (i.e. the user who ran the program).
> I attached [^deleteOnExitReproduce], which reproduces this scenario; I was 
> also able to reproduce it on a cluster with both the Local and Distributed 
> file systems:
> {code:java}
> import java.security.PrivilegedExceptionAction;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class Main {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     conf.set("fs.default.name", "hdfs://node:9000");
>     conf.set("fs.hdfs.impl",
>         org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
>
>     // FileSystem instance created by the "Owner" (root)
>     final FileSystem fs = FileSystem.get(conf);
>     System.out.println(fs);
>
>     // File the other user has no access to
>     Path f = new Path("/user/root/testfile");
>     System.out.println(f);
>
>     // "Intruder" marks the Owner's file for deletion via the Owner's instance
>     UserGroupInformation hive = UserGroupInformation.createRemoteUser("hive");
>     hive.doAs((PrivilegedExceptionAction<Boolean>) () -> fs.deleteOnExit(f));
>
>     fs.close();   // deleteOnExit list is processed here, on behalf of the Owner
>   }
> }
> {code}
> Result:
> {noformat}
> root@node:/# hadoop fs -put testfile /user/root
> root@node:/# hadoop fs -chmod 700 /user/root/testfile
> root@node:/# hadoop fs -ls /user/root
> Found 1 items
> -rw--- 1 root supergroup 0 2018-09-06 18:07 /user/root/testfile
> root@node:/# java -jar testDeleteOther.jar 
> log4j:WARN No appenders could be found for logger 
> (org.apache.hadoop.conf.Configuration.deprecation).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_309539034_1, ugi=root 
> (auth:SIMPLE)]]
> /user/root/testfile
> []
> root@node:/# hadoop fs -ls /user/root
> root@node:/# 
> {noformat}
> We should check user permissions before marking a file for deletion.
>  Could someone evaluate this? If no one objects, I would like to start 
>  working on it.
>  Thanks a lot for any comments.
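
A minimal sketch of the guard proposed above, assuming a hypothetical wrapper around FileSystem.deleteOnExit() that first probes access with FileSystem.access(); the helper name and the choice of WRITE as the required permission are illustrative assumptions, not part of any HADOOP-15725 patch:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

public final class SafeDeleteOnExit {

  /** Hypothetical guard: only schedule the delete if the caller may modify the path. */
  public static boolean deleteOnExitChecked(FileSystem fs, Path f) throws IOException {
    try {
      // FileSystem.access() throws AccessControlException when the current user
      // lacks the requested permission on the path.
      fs.access(f, FsAction.WRITE);
    } catch (AccessControlException e) {
      return false;                // refuse to mark files the caller cannot modify
    }
    return fs.deleteOnExit(f);     // unchanged behaviour for authorised callers
  }
}
{code}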






Re: [VOTE] Release Apache Hadoop 2.8.5 (RC0)

2018-09-10 Thread Jason Lowe
Thanks for driving the release, Junping!

+1 (binding)

- Verified signatures and digests
- Successfully performed a native build from source
- Successfully deployed a single-node cluster with the timeline server
- Ran some sample jobs and examined the web UI and job logs

Jason

On Mon, Sep 10, 2018 at 7:00 AM, 俊平堵  wrote:

> Hi all,
>
>  I've created the first release candidate (RC0) for Apache
> Hadoop 2.8.5. This is our next point release to follow up 2.8.4. It
> includes 33 important fixes and improvements.
>
>
> The RC artifacts are available at:
> http://home.apache.org/~junping_du/hadoop-2.8.5-RC0
>
>
> The RC tag in git is: release-2.8.5-RC0
>
>
>
> The maven artifacts are available via repository.apache.org<
> http://repository.apache.org> at:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1140
>
>
> Please try the release and vote; the vote will run for the usual 5 working
> days, ending on 9/15/2018 (PST).
>
>
> Thanks,
>
>
> Junping
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-09-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/

[Sep 9, 2018 5:07:30 AM] (msingh) HDDS-410. ozone scmcli list is not working 
properly. Contributed by
[Sep 9, 2018 5:50:26 PM] (brahma) HDFS-13862. RBF: Router logs are not 
capturing few of the dfsrouteradmin
[Sep 10, 2018 12:45:31 AM] (wangda) YARN-8698. [Submarine] Failed to reset 
Hadoop home environment when
[Sep 10, 2018 3:40:51 AM] (vinayakumarb) HDFS-13806. EC: No error message for 
unsetting EC policy of the
[Sep 10, 2018 3:52:59 AM] (vinayakumarb) HDFS-13895. EC: Fix Intermittent 
Failure in




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine

   Unread field: FSBasedSubmarineStorageImpl.java:[line 39]

   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) at YarnServiceJobSubmitter.java:[line 195]

   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 195] is not discharged

   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop at YarnServiceUtils.java:[line 72]

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/diff-compile-javac-root.txt
  [304K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/892/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   

[jira] [Resolved] (HADOOP-15738) MRAppBenchmark.benchmark1() fails with NullPointerException

2018-09-10 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-15738.
-
Resolution: Duplicate

> MRAppBenchmark.benchmark1() fails with NullPointerException
> ---
>
> Key: HADOOP-15738
> URL: https://issues.apache.org/jira/browse/HADOOP-15738
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Oleksandr Shevchenko
>Priority: Minor
>
> MRAppBenchmark.benchmark1() fails with NullPointerException:
> 1. We do not set any queue for this test. As a result, we get the following 
> exception:
> {noformat}
> 2018-09-10 17:04:23,486 ERROR [Thread-0] rm.RMCommunicator (RMCommunicator.java:register(177)) - Exception while registering
> java.lang.NullPointerException
> at org.apache.avro.util.Utf8$2.toUtf8(Utf8.java:123)
> at org.apache.avro.util.Utf8.getBytesFor(Utf8.java:172)
> at org.apache.avro.util.Utf8.<init>(Utf8.java:39)
> at org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent.<init>(JobQueueChangeEvent.java:35)
> at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.setQueueName(JobImpl.java:1167)
> at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:174)
> at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
> at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:280)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1293)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
> at org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.run(MRAppBenchmark.java:72)
> at org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.benchmark1(MRAppBenchmark.java:194)
> {noformat}
> 2. We override the createSchedulerProxy method and do not set the application 
> priority that was added later by MAPREDUCE-6515, so we get the following error:
> {noformat}
> java.lang.NullPointerException
>  at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleJobPriorityChange(RMContainerAllocator.java:1025)
>  at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:880)
>  at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:286)
>  at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:280)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> In both cases, the job never runs and the test hangs without finishing.






[jira] [Reopened] (HADOOP-15738) MRAppBenchmark.benchmark1() fails with NullPointerException

2018-09-10 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reopened HADOOP-15738:
-

> MRAppBenchmark.benchmark1() fails with NullPointerException
> ---
>
> Key: HADOOP-15738
> URL: https://issues.apache.org/jira/browse/HADOOP-15738
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Oleksandr Shevchenko
>Priority: Minor
>
> MRAppBenchmark.benchmark1() fails with NullPointerException:
> 1. We do not set any queue for this test. As a result, we get the following 
> exception:
> {noformat}
> 2018-09-10 17:04:23,486 ERROR [Thread-0] rm.RMCommunicator (RMCommunicator.java:register(177)) - Exception while registering
> java.lang.NullPointerException
> at org.apache.avro.util.Utf8$2.toUtf8(Utf8.java:123)
> at org.apache.avro.util.Utf8.getBytesFor(Utf8.java:172)
> at org.apache.avro.util.Utf8.<init>(Utf8.java:39)
> at org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent.<init>(JobQueueChangeEvent.java:35)
> at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.setQueueName(JobImpl.java:1167)
> at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:174)
> at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
> at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:280)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1293)
> at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
> at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
> at org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.run(MRAppBenchmark.java:72)
> at org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.benchmark1(MRAppBenchmark.java:194)
> {noformat}
> 2. We override the createSchedulerProxy method and do not set the application 
> priority that was added later by MAPREDUCE-6515, so we get the following error:
> {noformat}
> java.lang.NullPointerException
>  at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleJobPriorityChange(RMContainerAllocator.java:1025)
>  at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:880)
>  at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:286)
>  at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:280)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> In both cases, the job never runs and the test hangs without finishing.






[jira] [Created] (HADOOP-15738) MRAppBenchmark.benchmark1() fails with NullPointerException

2018-09-10 Thread Oleksandr Shevchenko (JIRA)
Oleksandr Shevchenko created HADOOP-15738:
-

 Summary: MRAppBenchmark.benchmark1() fails with 
NullPointerException
 Key: HADOOP-15738
 URL: https://issues.apache.org/jira/browse/HADOOP-15738
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Oleksandr Shevchenko


MRAppBenchmark.benchmark1() fails with NullPointerException:
1. We do not set any queue for this test. As a result, we get the following 
exception:

{noformat}
2018-09-10 17:04:23,486 ERROR [Thread-0] rm.RMCommunicator (RMCommunicator.java:register(177)) - Exception while registering
java.lang.NullPointerException
at org.apache.avro.util.Utf8$2.toUtf8(Utf8.java:123)
at org.apache.avro.util.Utf8.getBytesFor(Utf8.java:172)
at org.apache.avro.util.Utf8.<init>(Utf8.java:39)
at org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent.<init>(JobQueueChangeEvent.java:35)
at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.setQueueName(JobImpl.java:1167)
at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:174)
at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:280)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1293)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
at org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.run(MRAppBenchmark.java:72)
at org.apache.hadoop.mapreduce.v2.app.MRAppBenchmark.benchmark1(MRAppBenchmark.java:194)

{noformat}

2. We override the createSchedulerProxy method and do not set the application priority 
that was added later by MAPREDUCE-6515, so we get the following error:

{noformat}

java.lang.NullPointerException
 at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleJobPriorityChange(RMContainerAllocator.java:1025)
 at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:880)
 at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:286)
 at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:280)
 at java.lang.Thread.run(Thread.java:748)
{noformat}

In both cases, the job never runs and the test hangs without finishing.
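
A minimal sketch of how the benchmark's stubbed scheduler responses could populate the missing fields, assuming the Records.newRecord()-based stubs that MRAppBenchmark already uses; the helper class name, queue name, and priority value below are illustrative assumptions, not an actual patch:

{code:java}
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.Records;

/** Hypothetical helpers for the benchmark's stubbed ApplicationMasterProtocol. */
final class BenchmarkStubResponses {

  static RegisterApplicationMasterResponse register() {
    RegisterApplicationMasterResponse response =
        Records.newRecord(RegisterApplicationMasterResponse.class);
    response.setMaximumResourceCapability(Resource.newInstance(10240, 1));
    // A non-null queue avoids the NPE in JobImpl.setQueueName() during registration.
    response.setQueue("default");
    return response;
  }

  static AllocateResponse allocate() {
    AllocateResponse response = Records.newRecord(AllocateResponse.class);
    // A non-null priority avoids the NPE in RMContainerAllocator.handleJobPriorityChange().
    response.setApplicationPriority(Priority.newInstance(0));
    return response;
  }
}
{code}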






[jira] [Created] (HADOOP-15737) intermittent failure of ITestS3GuardListConsistency.testInconsistentS3ClientDeletes in parallel runs

2018-09-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15737:
---

 Summary: intermittent failure of 
ITestS3GuardListConsistency.testInconsistentS3ClientDeletes in parallel runs
 Key: HADOOP-15737
 URL: https://issues.apache.org/jira/browse/HADOOP-15737
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran


This test sometimes fails in parallel test runs, yet passes when run standalone. 
Either there is a real failure here, thread blocking in an overloaded test 
system is breaking the test, or parallel test runs are corrupting the state of 
the FS as observed by the test.

{code}
[ERROR] testInconsistentS3ClientDeletes(org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency)  Time elapsed: 11.975 s  <<< FAILURE!
java.lang.AssertionError: InconsistentAmazonS3Client added back prefixes incorrectly in a non-recursive listing
Expected: fork-0008/test/testInconsistentClientDELAY_LISTING_ME/dir0/
          fork-0008/test/testInconsistentClientDELAY_LISTING_ME/dir1/
          fork-0008/test/testInconsistentClientDELAY_LISTING_ME/dir2/
---
Actual:   fork-0008/test/testInconsistentClientDELAY_LISTING_ME/dir2/
          fork-0008/test/testInconsistentClientDELAY_LISTING_ME/dir0
          fork-0008/test/testInconsistentClientDELAY_LISTING_ME/dir1
          fork-0008/test/testInconsistentClientDELAY_LISTING_ME/dir2
expected:<3> but was:<4>
{code}







[VOTE] Release Apache Hadoop 2.8.5 (RC0)

2018-09-10 Thread 俊平堵
Hi all,

 I've created the first release candidate (RC0) for Apache
Hadoop 2.8.5. This is our next point release to follow up 2.8.4. It
includes 33 important fixes and improvements.


The RC artifacts are available at:
http://home.apache.org/~junping_du/hadoop-2.8.5-RC0


The RC tag in git is: release-2.8.5-RC0



The maven artifacts are available via repository.apache.org<
http://repository.apache.org> at:

https://repository.apache.org/content/repositories/orgapachehadoop-1140


Please try the release and vote; the vote will run for the usual 5 working
days, ending on 9/15/2018 (PST).


Thanks,


Junping