Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-06-07 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/491/

[Jun 6, 2018 3:39:18 PM] (aengineer) HDDS-107. TestOzoneConfigurationFields is failing. Contributed by LiXin
[Jun 6, 2018 5:28:14 PM] (stevel) HADOOP-15506. Upgrade Azure Storage Sdk version to 7.0.0 and update
[Jun 6, 2018 6:44:17 PM] (inigoiri) HADOOP-15513. Add additional test cases to cover some corner cases for
[Jun 7, 2018 4:46:02 AM] (brahma) HDFS-12950. [oiv] ls will fail in secure cluster. Contributed by
[Jun 7, 2018 4:55:56 AM] (rohithsharmaks) YARN-8399. NodeManager is giving 403 GSS exception post upgrade to 3.1




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.compress.TestCodec 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestPendingReconstruction 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshot 
   hadoop.hdfs.server.namenode.TestEditLogRace 
   hadoop.hdfs.server.namenode.TestStartup 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.tools.TestDFSAdminWithHA 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.containermanager.linux.privileged.TestPrivilegedOperationExecutor 
   hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestCGroupElasticMemoryController 
   hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestCGroupsHandlerImpl 
   hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime 
   hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestJavaSandboxLinuxContainerRuntime 
   hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestAppLogAggregatorImpl 
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.TestContainerExecutor 
   hadoop.yarn.server.nodemanager.TestLocalDirsHandlerService 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 

Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Yongjun Zhang
BTW, thanks Allen and Steve for the discussion and suggestions about the
site build problem I hit earlier. Running

mvn install -DskipTests

before the steps Nanda listed solved the problem for me.

--Yongjun




On Thu, Jun 7, 2018 at 6:15 PM, Yongjun Zhang  wrote:

> Thank you all very much for the testing, feedback and discussion!
>
> I was able to build outside docker by following the steps Nanda
> described, and I saw the same problem; I then tried 3.0.2, released a
> while back, and it has the same issue.
>
> As Allen pointed out, it seems the steps used to build the site are not
> the correct ones. I have not figured out the correct steps yet.
>
> At this point, I don't think this issue should block the 3.0.3 release,
> though we still need to figure out the right steps to build the site.
> Please let me know if you think differently.
>
> The site build issue is the only one reported so far, and we don't yet
> have enough PMC votes, so we need a few more PMCs to help.
>
> Thanks again, and best regards,
>
> --Yongjun
>
>
> On Thu, Jun 7, 2018 at 4:15 PM, Allen Wittenauer  > wrote:
>
>> > On Jun 7, 2018, at 11:47 AM, Steve Loughran 
>> wrote:
>> >
>> > Actually, Yongjun has been really good at helping me get set up for a
>> 2.7.7 release, including "things you need to do to get GPG working in the
>> docker image”
>>
>> *shrugs* I use a different release script after some changes
>> broke the in-tree version for building on OS X and I couldn’t get the fixes
>> committed upstream.  So not sure what the problems are that you are hitting.
>>
>> > On Jun 7, 2018, at 1:08 PM, Nandakumar Vadivelu <
>> nvadiv...@hortonworks.com> wrote:
>> >
>> > It will be helpful if we can get the correct steps, and also update the
>> wiki.
>> > https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+
>> Release+Validation
>>
>> Yup. Looking forward to seeing it.
>> -
>> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>>
>>
>


[jira] [Created] (HDFS-13664) Refactor ConfiguredFailoverProxyProvider to make inheritance easier

2018-06-07 Thread Chao Sun (JIRA)
Chao Sun created HDFS-13664:
---

 Summary: Refactor ConfiguredFailoverProxyProvider to make inheritance easier
 Key: HDFS-13664
 URL: https://issues.apache.org/jira/browse/HDFS-13664
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Chao Sun
Assignee: Chao Sun


In HDFS-12943 we'd like to introduce a new proxy provider that inherits from 
{{ConfiguredFailoverProxyProvider}}. Some refactoring is necessary to make the 
relevant code easier to share with subclasses.
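The JIRA does not spell out the refactoring, but the general direction would presumably be to expose the failover bookkeeping through protected members or hook methods so a subclass only overrides proxy selection. A hypothetical, simplified sketch (not the actual patch; the class and method names below are illustrative, not the real ConfiguredFailoverProxyProvider API):

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in, not the real ConfiguredFailoverProxyProvider: the
// point is that proxy selection lives in a protected hook, so a subclass can
// reuse the failover bookkeeping and change only the selection policy.
class SimpleFailoverProxyProvider<T> {
  protected final List<T> proxies = new ArrayList<>();
  protected int currentIndex = 0;

  /** Protected hook: subclasses decide which proxy to hand out. */
  protected T chooseProxy() {
    return proxies.get(currentIndex);
  }

  public synchronized T getProxy() {
    return chooseProxy();
  }

  public synchronized void performFailover() {
    currentIndex = (currentIndex + 1) % proxies.size();
  }
}

// Example subclass: shares all failover logic, changes only the selection.
class PreferFirstProxyProvider<T> extends SimpleFailoverProxyProvider<T> {
  @Override
  protected T chooseProxy() {
    return proxies.isEmpty() ? null : proxies.get(0);
  }
}
{code}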



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Yongjun Zhang
Thank you all very much for the testing, feedback and discussion!

I was able to build outside docker by following the steps Nanda described,
and I saw the same problem; I then tried 3.0.2, released a while back, and
it has the same issue.

As Allen pointed out, it seems the steps used to build the site are not the
correct ones. I have not figured out the correct steps yet.

At this point, I don't think this issue should block the 3.0.3 release,
though we still need to figure out the right steps to build the site.
Please let me know if you think differently.

The site build issue is the only one reported so far, and we don't yet have
enough PMC votes, so we need a few more PMCs to help.

Thanks again, and best regards,

--Yongjun


On Thu, Jun 7, 2018 at 4:15 PM, Allen Wittenauer 
wrote:

> > On Jun 7, 2018, at 11:47 AM, Steve Loughran 
> wrote:
> >
> > Actually, Yongjun has been really good at helping me get set up for a
> 2.7.7 release, including "things you need to do to get GPG working in the
> docker image”
>
> *shrugs* I use a different release script after some changes broke
> the in-tree version for building on OS X and I couldn’t get the fixes
> committed upstream.  So not sure what the problems are that you are hitting.
>
> > On Jun 7, 2018, at 1:08 PM, Nandakumar Vadivelu <
> nvadiv...@hortonworks.com> wrote:
> >
> > It will be helpful if we can get the correct steps, and also update the
> wiki.
> > https://cwiki.apache.org/confluence/display/HADOOP/
> Hadoop+Release+Validation
>
> Yup. Looking forward to seeing it.
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HDFS-13663) Should throw exception when incorrect block size is set

2018-06-07 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-13663:


 Summary: Should throw exception when incorrect block size is set
 Key: HDFS-13663
 URL: https://issues.apache.org/jira/browse/HDFS-13663
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang


See

./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java

{code}
void syncBlock(List<BlockRecord> syncList) throws IOException {
  ...
      newBlock.setNumBytes(finalizedLength);
      break;
    case RBW:
    case RWR:
      long minLength = Long.MAX_VALUE;
      for (BlockRecord r : syncList) {
        ReplicaState rState = r.rInfo.getOriginalReplicaState();
        if (rState == bestState) {
          minLength = Math.min(minLength, r.rInfo.getNumBytes());
          participatingList.add(r);
        }
        if (LOG.isDebugEnabled()) {
          LOG.debug("syncBlock replicaInfo: block=" + block +
              ", from datanode " + r.id + ", receivedState=" + rState.name() +
              ", receivedLength=" + r.rInfo.getNumBytes() + ", bestState=" +
              bestState.name());
        }
      }
      // recover() guarantees syncList will have at least one replica with RWR
      // or better state.
      assert minLength != Long.MAX_VALUE : "wrong minLength"; // <= should throw an exception instead
      newBlock.setNumBytes(minLength);
      break;
    case RUR:
    case TEMPORARY:
      assert false : "bad replica state: " + bestState;
    default:
      break; // we have 'case' all enum values
  }
{code}

When minLength is Long.MAX_VALUE, the code should throw an exception instead of relying on the assert; see the sketch below.
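A minimal sketch of that change (illustrative only; the exact exception type and message are assumptions, not a committed fix):

{code}
// Illustrative sketch: fail loudly instead of asserting. The exception type
// and message here are assumptions, not the committed fix.
if (minLength == Long.MAX_VALUE) {
  throw new IOException("syncBlock: no replica of block " + block
      + " found in state " + bestState + "; cannot determine recovery length");
}
newBlock.setNumBytes(minLength);
{code}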

There might be other places like this.

Otherwise, we would see the following WARN in the datanode log:
{code}
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block xyz because on-disk length 11852203 is shorter than NameNode recorded length 9223372036854775807
{code}
where 9223372036854775807 is Long.MAX_VALUE.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Allen Wittenauer
> On Jun 7, 2018, at 11:47 AM, Steve Loughran  wrote:
> 
> Actually, Yongjun has been really good at helping me get set up for a 2.7.7 
> release, including "things you need to do to get GPG working in the docker 
> image”

*shrugs* I use a different release script after some changes broke the 
in-tree version for building on OS X and I couldn’t get the fixes committed 
upstream.  So not sure what the problems are that you are hitting.

> On Jun 7, 2018, at 1:08 PM, Nandakumar Vadivelu  
> wrote:
> 
> It will be helpful if we can get the correct steps, and also update the wiki.
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Release+Validation

Yup. Looking forward to seeing it. 
-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13638) DataNode Can't replicate block because NameNode thinks the length is 9223372036854775807

2018-06-07 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-13638.

Resolution: Duplicate

Okay, I think this is fixed by HDFS-10453. Resolving this jira as a duplicate. 
Thanks [~hexiaoqiao]!

> DataNode Can't replicate block because NameNode thinks the length is 
> 9223372036854775807
> 
>
> Key: HDFS-13638
> URL: https://issues.apache.org/jira/browse/HDFS-13638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> I occasionally see the following warning in CDH clusters but haven't
> figured out why. I thought I had better raise the issue anyway.
> {quote}
> 2018-05-29 09:15:58,092 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block BP-725378529-10.0.0.8-1410027444173:blk_13276745777_1112363330268 because on-disk length 175085 is shorter than NameNode recorded length 9223372036854775807
> {quote}
> In fact, 9223372036854775807 = Long.MAX_VALUE.
> I chased this in the HDFS codebase but didn't find where the length could come from.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Nandakumar Vadivelu
Hi Allen,
It will be helpful if we can get the correct steps, and also update the wiki.
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Release+Validation

Thanks,
Nanda

On 6/8/18, 12:17 AM, "Steve Loughran"  wrote:



> On 7 Jun 2018, at 18:01, Allen Wittenauer  
wrote:
> 
> 
>> On Jun 7, 2018, at 3:46 AM, Lokesh Jain  wrote:
>> 
>> Hi Yongjun
>> 
>> I followed Nanda’s steps and I see the same issues as reported by Nanda.
> 
> 
> This situation is looking like an excellent opportunity for PMC members 
to mentor people on how the build works since it’s apparent that three days 
later, no one has mentioned that those steps aren’t the ones to build the 
complete website and haven’t been since at least 2.4.

Actually, Yongjun has been really good at helping me get set up for a 2.7.7 
release, including "things you need to do to get GPG working in the docker 
image"

But yes, I would like to know those complete steps too

> 
> 
> 
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Steve Loughran


> On 7 Jun 2018, at 18:01, Allen Wittenauer  wrote:
> 
> 
>> On Jun 7, 2018, at 3:46 AM, Lokesh Jain  wrote:
>> 
>> Hi Yongjun
>> 
>> I followed Nanda’s steps and I see the same issues as reported by Nanda.
> 
> 
> This situation is looking like an excellent opportunity for PMC members to 
> mentor people on how the build works since it’s apparent that three days 
> later, no one has mentioned that those steps aren’t the ones to build the 
> complete website and haven’t been since at least 2.4.

Actually, Yongjun has been really good at helping me get set up for a 2.7.7 
release, including "things you need to do to get GPG working in the docker 
image"

But yes, I would like to know those complete steps too

> 
> 
> 
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
> 



Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Allen Wittenauer


> On Jun 7, 2018, at 3:46 AM, Lokesh Jain  wrote:
> 
> Hi Yongjun
> 
> I followed Nanda’s steps and I see the same issues as reported by Nanda.


This situation is looking like an excellent opportunity for PMC members to 
mentor people on how the build works since it’s apparent that three days later, 
no one has mentioned that those steps aren’t the ones to build the complete 
website and haven’t been since at least 2.4.



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13662) TestBlockReaderLocal#testStatisticsForErasureCodingRead is flaky

2018-06-07 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13662:
--

 Summary: TestBlockReaderLocal#testStatisticsForErasureCodingRead is flaky
 Key: HDFS-13662
 URL: https://issues.apache.org/jira/browse/HDFS-13662
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding, test
Reporter: Wei-Chiu Chuang


The test failed in this precommit for a patch that only modifies an unrelated 
test.
https://builds.apache.org/job/PreCommit-HDFS-Build/24401/testReport/org.apache.hadoop.hdfs.client.impl/TestBlockReaderLocal/testStatisticsForErasureCodingRead/

This test also failed occasionally in our internal test.

{noformat}
Stacktrace
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at org.apache.hadoop.hdfs.client.impl.TestBlockReaderLocal.testStatisticsForErasureCodingRead(TestBlockReaderLocal.java:842)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-06-07 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/

[Jun 6, 2018 4:18:31 AM] (xiao) HADOOP-15217. FsUrlConnection does not handle paths with spaces.
[Jun 6, 2018 4:25:08 AM] (xiao) HDFS-13511. Provide specialized exception when block length cannot be
[Jun 6, 2018 8:04:55 AM] (sunilg) HADOOP-15514. NoClassDefFoundError for TimelineCollectorManager when
[Jun 6, 2018 3:39:18 PM] (aengineer) HDDS-107. TestOzoneConfigurationFields is failing. Contributed by LiXin
[Jun 6, 2018 5:28:14 PM] (stevel) HADOOP-15506. Upgrade Azure Storage Sdk version to 7.0.0 and update
[Jun 6, 2018 6:44:17 PM] (inigoiri) HADOOP-15513. Add additional test cases to cover some corner cases for




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager 

   Inconsistent synchronization of org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService.reloadListener; locked 75% of time. Unsynchronized access at AllocationFileLoaderService.java:[line 117] 
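For context, this is the classic inconsistent-synchronization pattern: the field is written while holding the object lock in some places and read without it in others. A generic illustration of the flagged pattern and a synchronized fix (not the actual AllocationFileLoaderService code; names are illustrative):

{code}
// Generic illustration only -- not the actual AllocationFileLoaderService code.
class ReloadingService {
  private Runnable reloadListener;            // intended to be guarded by "this"

  synchronized void setReloadListener(Runnable listener) {
    this.reloadListener = listener;           // write under the lock
  }

  void onReloadFlagged() {
    Runnable l = reloadListener;              // read WITHOUT the lock -> FindBugs IS warning
    if (l != null) {
      l.run();
    }
  }

  synchronized void onReloadFixed() {
    if (reloadListener != null) {             // every access under the same lock -> warning gone
      reloadListener.run();
    }
  }
}
{code}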

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/diff-compile-javac-root.txt  [336K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/diff-checkstyle-root.txt  [17M]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/pathlen.txt  [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/diff-patch-shelldocs.txt  [16K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/whitespace-eol.txt  [9.4M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/whitespace-tabs.txt  [1.1M]

   xml:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/xml.txt  [4.0K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html  [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [52K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [24K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/804/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [4.0K]

[jira] [Created] (HDFS-13661) Ls command with e option fails when the filesystem is not HDFS

2018-06-07 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-13661:
---

 Summary: Ls command with e option fails when the filesystem is not HDFS
 Key: HDFS-13661
 URL: https://issues.apache.org/jira/browse/HDFS-13661
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding, tools
Affects Versions: 3.1.0, 3.0.3
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


{noformat}
$ hadoop fs -ls -e file://
Found 10 items
-ls: Fatal internal error
java.lang.NullPointerException
at org.apache.hadoop.fs.shell.Ls.adjustColumnWidths(Ls.java:308)
at org.apache.hadoop.fs.shell.Ls.processPaths(Ls.java:242)
at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:387)
at org.apache.hadoop.fs.shell.Ls.processPathArgument(Ls.java:226)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
{noformat}
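The NPE suggests adjustColumnWidths assumes every listed entry carries an erasure-coding policy, which non-HDFS filesystems (the local filesystem here) do not provide. A hypothetical defensive sketch, not the committed fix (the helper name and the "Replicated" fallback label are assumptions):

{code}
// Hypothetical sketch only, not the committed fix: fall back to a default
// label when the underlying FileSystem reports no erasure-coding policy,
// so the column-width calculation never dereferences null.
static String ecPolicyNameOrDefault(String policyName) {
  return (policyName == null || policyName.isEmpty()) ? "Replicated" : policyName;
}
{code}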



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)

2018-06-07 Thread Lokesh Jain
Hi Yongjun

I followed Nanda’s steps and I see the same issues as reported by Nanda.

Thanks
Lokesh
-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13660) DistCp job fails when new data is appended in the file while the distCp copy job is running

2018-06-07 Thread Mukund Thakur (JIRA)
Mukund Thakur created HDFS-13660:


 Summary: DistCp job fails when new data is appended in the file while the distCp copy job is running
 Key: HDFS-13660
 URL: https://issues.apache.org/jira/browse/HDFS-13660
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Reporter: Mukund Thakur
 Attachments: distcp_failure_when_file_append.log

Steps to reproduce: 

Suppose a distcp MR job is copying the file /tmp/web_returns_merged/data-m-002 and, 
while the copy is running, we append some more data to this file using the command 

hadoop fs -appendToFile xaa /tmp/web_returns_merged/data-m-002

The job then fails with the exception 

 Mismatch in length of source:hdfs://mycluster0/tmp/web_returns_merged/data-m-002 and target.

Attached the logs.
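For context, a conceptual illustration of the failure mode (not DistCp's actual implementation; the variable names are placeholders): the copy compares the length recorded for the source against what actually ends up at the target, so a source that grows mid-copy can no longer match.

{code}
// Conceptual illustration only -- not DistCp's actual code. A length
// snapshot taken when the copy starts is compared with the target
// afterwards; appending to the source during the copy breaks the equality.
long expectedLength = sourceStatusAtStart.getLen();  // snapshot before copy
long targetLength = targetStatusAfterCopy.getLen();  // measured after copy
if (expectedLength != targetLength) {
  throw new IOException("Mismatch in length of source:" + sourcePath
      + " and target:" + targetPath);
}
{code}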



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org