Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-05-17 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/470/

[May 16, 2018 2:23:49 PM] (jlowe) YARN-8284. get_docker_command refactoring. 
Contributed by Eric Badger
[May 16, 2018 4:31:46 PM] (inigoiri) HDFS-13557. TestDFSAdmin#testListOpenFiles 
fails on Windows. Contributed
[May 16, 2018 4:38:26 PM] (eyang) YARN-8300.  Fixed NPE in 
DefaultUpgradeComponentsFinder.
[May 16, 2018 5:08:49 PM] (inigoiri) 
HDFS-13550. TestDebugAdmin#testComputeMetaCommand fails on Windows.
[May 16, 2018 6:28:39 PM] (arp) HDFS-13512. WebHdfs getFileStatus doesn't 
return ecPolicy. Contributed
[May 16, 2018 8:00:01 PM] (haibochen) YARN-7933. [atsv2 read acls] Add 
TimelineWriter#writeDomain. (Rohith
[May 16, 2018 9:17:28 PM] (jlowe) YARN-8071. Add ability to specify nodemanager 
environment variables
[May 17, 2018 2:23:02 AM] (inigoiri) HDFS-13559. TestBlockScanner does not 
close TestContext properly.
[May 17, 2018 10:54:51 AM] (rohithsharmaks) YARN-8297. Incorrect ATS Url used 
for Wire encrypted cluster.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestTrash 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestGroupsCaching 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.security.TestDelegationTokenForProxyUser 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.server.namenode.ha.TestHAMetrics 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport 
   hadoop.hdfs.server.namenode.TestAddBlock 
   hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands 
   hadoop.hdfs.server.namenode.TestCheckpoint 

[jira] [Created] (HDFS-13590) Backport HDFS-12378 to branch-2

2018-05-17 Thread Lukas Majercak (JIRA)
Lukas Majercak created HDFS-13590:
-

 Summary: Backport HDFS-12378 to branch-2
 Key: HDFS-13590
 URL: https://issues.apache.org/jira/browse/HDFS-13590
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs, test
Reporter: Lukas Majercak
Assignee: Lukas Majercak






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13589) Add dfsAdmin command to query if "upgrade" is finalized

2018-05-17 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-13589:
-

 Summary: Add dfsAdmin command to query if "upgrade" is finalized
 Key: HDFS-13589
 URL: https://issues.apache.org/jira/browse/HDFS-13589
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


When we do an upgrade on a NameNode (non-rolling upgrade), we should be able 
to query whether the upgrade has been finalized or not.






[jira] [Created] (HDFS-13588) Fix TestFsDatasetImpl test failures on Windows

2018-05-17 Thread Xiao Liang (JIRA)
Xiao Liang created HDFS-13588:
-

 Summary: Fix TestFsDatasetImpl test failures on Windows
 Key: HDFS-13588
 URL: https://issues.apache.org/jira/browse/HDFS-13588
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Liang
Assignee: Xiao Liang


Some test cases of TestFsDatasetImpl fail on Windows due to:
 # use of the File#setWritable interface;
 # test directory conflicts between test cases (details in HDFS-13408);
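The second cause could be addressed by giving each test case its own randomized base directory. A minimal self-contained sketch of that idea (the helper name and layout here are hypothetical, not the actual Hadoop fix):

```java
import java.io.File;
import java.util.UUID;

public class RandomizedTestDir {
    // Hypothetical helper: give every test case its own randomized base
    // directory so repeated or concurrent runs on Windows cannot collide
    // on a shared path (the conflict described in HDFS-13408).
    static File randomizedTestDir(String testName) {
        String base = System.getProperty("java.io.tmpdir");
        return new File(base, testName + "-" + UUID.randomUUID());
    }

    public static void main(String[] args) {
        // Two invocations for the same test never share a directory.
        System.out.println(randomizedTestDir("TestFsDatasetImpl"));
        System.out.println(randomizedTestDir("TestFsDatasetImpl"));
    }
}
```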

 






[jira] [Created] (HDFS-13587) TestQuorumJournalManager fails on Windows

2018-05-17 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13587:


 Summary: TestQuorumJournalManager fails on Windows
 Key: HDFS-13587
 URL: https://issues.apache.org/jira/browse/HDFS-13587
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Anbang Hu
Assignee: Anbang Hu









[jira] [Created] (HDFS-13586) Fsync fails on directories on Windows

2018-05-17 Thread Lukas Majercak (JIRA)
Lukas Majercak created HDFS-13586:
-

 Summary: Fsync fails on directories on Windows
 Key: HDFS-13586
 URL: https://issues.apache.org/jira/browse/HDFS-13586
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs
Reporter: Lukas Majercak
Assignee: Lukas Majercak


HDFS-11915 added an fsync call on the DataNode's rbw directory on the first 
hsync() call. IOUtils.fsync first tries to get a FileChannel on the directory 
using FileChannel.open(READ). On Windows this call fails for any directory and 
throws an AccessDeniedException; see the discussion here: 
[http://mail.openjdk.java.net/pipermail/nio-dev/2015-May/003140.html]. 
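The pattern in question can be reproduced with plain NIO. A minimal sketch of the directory-fsync idiom described above (it succeeds on POSIX systems; on Windows the FileChannel.open call on a directory throws AccessDeniedException):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirFsync {
    // Open a read-only FileChannel on the directory and force() it to
    // flush directory metadata to disk. On Windows, FileChannel.open on
    // a directory fails with AccessDeniedException before force() runs.
    static void fsyncDirectory(Path dir) throws IOException {
        try (FileChannel channel = FileChannel.open(dir, StandardOpenOption.READ)) {
            channel.force(true);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("fsync-demo");
        fsyncDirectory(dir); // succeeds on POSIX; fails on Windows
        System.out.println("fsync succeeded on " + dir);
    }
}
```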






[jira] [Created] (HDFS-13585) libhdfs SIGSEGV during shutdown of Java application.

2018-05-17 Thread Nalini Ganapati (JIRA)
Nalini Ganapati created HDFS-13585:
--

 Summary: libhdfs SIGSEGV during shutdown of Java application.
 Key: HDFS-13585
 URL: https://issues.apache.org/jira/browse/HDFS-13585
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: native
Affects Versions: 2.7.5
 Environment: Centos 7
Reporter: Nalini Ganapati


We are using libhdfs for HDFS support from our native library. This has been 
working mostly fine with Java/Spark applications, but some of them throw a 
SIGSEGV in hdfsThreadDestructor(). We tried to dynamically load and unload 
libhdfs.so using dlopen/dlclose, but to no avail; we still see the segfault. 
Is this a known issue? It looks like thread-local storage is involved; are 
there workarounds? 

 

Here is a call stack from gdb java 
(gdb) bt
#0 0x7fad21f7 in raise () from /usr/lib64/libc.so.6
#1 0x7fad38e8 in abort () from /usr/lib64/libc.so.6
#2 0x7f380259 in os::abort(bool) () from 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so
#3 0x7f585986 in VMError::report_and_die() () from 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so
#4 0x7f389ec7 in JVM_handle_linux_signal () from 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so
#5 0x7f37d678 in signalHandler(int, siginfo_t*, void*) () from 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so
#6 <signal handler called>
#7 0x7f341e66 in Monitor::ILock(Thread*) () from 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so
#8 0x7f3428f6 in Monitor::lock_without_safepoint_check() () from 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so
#9 0x7f58bc21 in VM_Exit::wait_if_vm_exited() () from 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so
#10 0x7f14fee5 in jni_DetachCurrentThread () from 
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/lib/amd64/server/libjvm.so
#11 0x7f32f2645f15 in hdfsThreadDestructor (v=0x7f332c018bc8)
 at 
/home/kshvachk/Work/Hadoop/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/os/posix/thread_local_storage.c:49
#12 0x7f3334490c22 in __nptl_deallocate_tsd () from 
/usr/lib64/libpthread.so.0
#13 0x7f3334490e33 in start_thread () from /usr/lib64/libpthread.so.0
#14 0x7fb9534d in clone () from /usr/lib64/libc.so.6






[jira] [Created] (HDDS-84) The root directory of ozone.tar.gz should contain the version string

2018-05-17 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-84:


 Summary: The root directory of ozone.tar.gz should contain the 
version string
 Key: HDDS-84
 URL: https://issues.apache.org/jira/browse/HDDS-84
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: Acadia


The root directory inside ozone.tar.gz is currently 'ozone' instead of 
'ozone-0.2.1'. It should contain the version number to make it easier to 
handle multiple versions and to follow the Hadoop convention.

(Thanks to [~nandakumar131], who found the problem.)






Re: Broken build env

2018-05-17 Thread Miklos Szegedi
Kihwal,

This appeared recently. The following settings fixed my build inside Docker:
echo export PATH=/opt/cmake/bin:/opt/protobuf/bin:$PATH >>/etc/profile
echo export CPLUS_INCLUDE_PATH=/opt/protobuf/include >>/etc/profile
echo export C_INCLUDE_PATH=/opt/protobuf/include >>/etc/profile
echo export LIBRARY_PATH=/opt/protobuf/lib >>/etc/profile
echo export LD_LIBRARY_PATH=/opt/protobuf/lib >>/etc/profile
echo export PROTOBUF_LIBRARY >>/etc/profile
echo export PROTOBUF_INCLUDE_DIR >>/etc/profile

Thank you,
Miklos Szegedi


On Thu, May 17, 2018 at 8:00 AM, Kihwal Lee  wrote:

> Simple commit builds are failing often
>
> https://builds.apache.org/job/Hadoop-trunk-Commit/
>
> Many trunk builds are failing on H19.
> "protoc version is 'libprotoc 2.6.1', expected version is '2.5.0' "
>
> On H4, a cmake version problem was seen.
>
> The commit builds don't seem to be running in a docker container (i.e.
> non-yetus env), so the env is not consistent. What will it take to fix
> this?
>
> Kihwal
>


[jira] [Created] (HDFS-13584) Fix broken unit tests on Windows

2018-05-17 Thread Anbang Hu (JIRA)
Anbang Hu created HDFS-13584:


 Summary: Fix broken unit tests on Windows
 Key: HDFS-13584
 URL: https://issues.apache.org/jira/browse/HDFS-13584
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Anbang Hu
Assignee: Anbang Hu









Broken build env

2018-05-17 Thread Kihwal Lee
Simple commit builds are failing often

https://builds.apache.org/job/Hadoop-trunk-Commit/

Many trunk builds are failing on H19.
"protoc version is 'libprotoc 2.6.1', expected version is '2.5.0' "

On H4, a cmake version problem was seen.

The commit builds don't seem to be running in a docker container (i.e.
non-yetus env), so the env is not consistent. What will it take to fix
this?
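For reference, the failing check amounts to an exact string comparison of protoc's reported version against the version the build expects. This is an illustrative, hypothetical sketch of that comparison (the real check lives in Hadoop's build tooling), reproducing the error message quoted above:

```java
// Hypothetical sketch of the version check that produces the error above:
// an exact string match against the expected protoc version.
public class ProtocVersionCheck {
    // Extract "2.6.1" from output like "libprotoc 2.6.1".
    static String parseVersion(String protocOutput) {
        String[] parts = protocOutput.trim().split("\\s+");
        return parts[parts.length - 1];
    }

    // Returns an error message on mismatch, or null when versions match.
    static String check(String protocOutput, String expected) {
        String actual = parseVersion(protocOutput);
        if (!actual.equals(expected)) {
            return "protoc version is '" + protocOutput.trim()
                + "', expected version is '" + expected + "'";
        }
        return null;
    }

    public static void main(String[] args) {
        // Mirrors the H19 failure: libprotoc 2.6.1 found, 2.5.0 expected.
        System.out.println(check("libprotoc 2.6.1", "2.5.0"));
    }
}
```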

Kihwal


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-05-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/783/

[May 15, 2018 6:31:28 AM] (sunilg) YARN-8266. [UI2] Clicking on application 
from cluster view should
[May 15, 2018 6:43:04 AM] (sunilg) YARN-8166. [UI2] Service page header links 
are broken. Contributed by
[May 15, 2018 6:47:35 AM] (sunilg) YARN-8236. Invalid kerberos principal file 
name cause NPE in native
[May 15, 2018 9:28:19 AM] (wwei) YARN-8278. DistributedScheduling is not 
working in HA. Contributed by
[May 15, 2018 3:13:56 PM] (stevel) HADOOP-15442. 
ITestS3AMetrics.testMetricsRegister can't know metrics
[May 15, 2018 3:19:03 PM] (stevel) HADOOP-15466. Correct units in 
adl.http.timeout. Contributed by Sean
[May 15, 2018 5:21:42 PM] (inigoiri) HDFS-13551. 
TestMiniDFSCluster#testClusterSetStorageCapacity does not
[May 15, 2018 5:27:36 PM] (inigoiri) HDFS-11700. 
TestHDFSServerPorts#testBackupNodePorts doesn't pass on
[May 15, 2018 6:20:32 PM] (inigoiri) HDFS-13548. 
TestResolveHdfsSymlink#testFcResolveAfs fails on Windows.
[May 15, 2018 10:34:54 PM] (inigoiri) HDFS-13567.
[May 16, 2018 12:40:39 AM] (eyang) YARN-8081.  Add support to upgrade a 
component. Contributed
[May 16, 2018 8:25:31 AM] (aajisaka) YARN-8123. Skip compiling old hamlet 
package when the Java version is 10
[May 16, 2018 2:23:49 PM] (jlowe) YARN-8284. get_docker_command refactoring. 
Contributed by Eric Badger
[May 16, 2018 4:31:46 PM] (inigoiri) HDFS-13557. TestDFSAdmin#testListOpenFiles 
fails on Windows. Contributed
[May 16, 2018 4:38:26 PM] (eyang) YARN-8300.  Fixed NPE in 
DefaultUpgradeComponentsFinder.
[May 16, 2018 5:08:49 PM] (inigoiri) 
HDFS-13550. TestDebugAdmin#testComputeMetaCommand fails on Windows.
[May 16, 2018 6:28:39 PM] (arp) HDFS-13512. WebHdfs getFileStatus doesn't 
return ecPolicy. Contributed
[May 16, 2018 8:00:01 PM] (haibochen) YARN-7933. [atsv2 read acls] Add 
TimelineWriter#writeDomain. (Rohith
[May 16, 2018 9:17:28 PM] (jlowe) YARN-8071. Add ability to specify nodemanager 
environment variables




-1 overall


The following subsystems voted -1:
asflicense findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdds/common 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18039] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CloseContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 18601] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 35184] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CopyContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 36053] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$CreateContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 13089] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DatanodeBlockID$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1126] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteChunkResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 30491] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerRequestProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 15748] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteContainerResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 16224] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$DeleteKeyResponseProto$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 23421] 
   Useless control flow in 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$KeyValue$Builder.maybeForceBuilderInitialization()
 At ContainerProtos.java: At ContainerProtos.java:[line 1767] 
   Useless control flow in 

[jira] [Created] (HDFS-13583) RBF: Router admin clrQuota is not synchronized with nameservice

2018-05-17 Thread Dibyendu Karmakar (JIRA)
Dibyendu Karmakar created HDFS-13583:


 Summary: RBF: Router admin clrQuota is not synchronized with 
nameservice
 Key: HDFS-13583
 URL: https://issues.apache.org/jira/browse/HDFS-13583
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Dibyendu Karmakar
Assignee: Dibyendu Karmakar


The Router admin -clrQuota command removes the quota from the mount table 
only; it is not synchronized with the nameservice.

We should remove this QUOTA_DONT_SET check from 
RouterAdminServer#synchronizeQuota:

 
{code:java}
if (nsQuota != HdfsConstants.QUOTA_DONT_SET
    || ssQuota != HdfsConstants.QUOTA_DONT_SET) {
  this.router.getRpcServer().getQuotaModule().setQuota(path, nsQuota,
      ssQuota, null);
}
{code}
 






[jira] [Created] (HDFS-13582) Improve backward compatibility for HDFS-13176 (WebHdfs file path gets truncated when having semicolon (;) inside)

2018-05-17 Thread Zsolt Venczel (JIRA)
Zsolt Venczel created HDFS-13582:


 Summary: Improve backward compatibility for HDFS-13176 (WebHdfs 
file path gets truncated when having semicolon (;) inside)
 Key: HDFS-13582
 URL: https://issues.apache.org/jira/browse/HDFS-13582
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Zsolt Venczel
Assignee: Zsolt Venczel
 Fix For: 3.2.0


Encode the special character only if necessary, in order to improve backward 
compatibility in the following scenario:

new (having HDFS-13176) WebHdfs client -> old (not having HDFS-13176) WebHdfs 
server 
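The intended behavior can be sketched with a hypothetical helper that percent-encodes the semicolon only when the path actually contains one, so unaffected paths reach an old server byte-for-byte unchanged (names here are illustrative, not the actual patch):

```java
public class SemicolonEncoding {
    // Hypothetical sketch: only paths containing ';' are rewritten, so a
    // new client talking to an old server behaves identically for the
    // common case of paths without the special character.
    static String encodeIfNecessary(String path) {
        if (path.indexOf(';') < 0) {
            return path; // unchanged: full backward compatibility
        }
        return path.replace(";", "%3B"); // percent-encode the semicolon
    }

    public static void main(String[] args) {
        System.out.println(encodeIfNecessary("/user/alice/plain.txt"));
        System.out.println(encodeIfNecessary("/user/alice/a;b.txt"));
    }
}
```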






[jira] [Resolved] (HDDS-83) Rename StorageLocationReport class to VolumeInfo

2018-05-17 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-83?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-83.
-
   Resolution: Not A Problem
Fix Version/s: 0.2.1

Resolving this as this change is not required.

> Rename StorageLocationReport class to VolumeInfo
> 
>
> Key: HDDS-83
> URL: https://issues.apache.org/jira/browse/HDDS-83
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 0.2.1
>
>







[jira] [Created] (HDDS-83) Rename StorageLocationReport class to VolumeInfo

2018-05-17 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-83:
---

 Summary: Rename StorageLocationReport class to VolumeInfo
 Key: HDDS-83
 URL: https://issues.apache.org/jira/browse/HDDS-83
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee









[jira] [Created] (HDFS-13581) On clicking DN UI logs link it uses http protocol for Wire encrypted cluster

2018-05-17 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13581:
--

 Summary: On clicking DN UI logs link it uses http protocol for 
Wire encrypted cluster
 Key: HDFS-13581
 URL: https://issues.apache.org/jira/browse/HDFS-13581
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


On clicking the DN UI logs link, it uses the http protocol on a wire-encrypted 
cluster. When the link's address is changed to https, it throws the proper 
expected error message.






[jira] [Created] (HDFS-13580) FailOnTimeout error in TestDataNodeVolumeFailure$testVolumeFailure

2018-05-17 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-13580:
-

 Summary: FailOnTimeout error in 
TestDataNodeVolumeFailure$testVolumeFailure
 Key: HDFS-13580
 URL: https://issues.apache.org/jira/browse/HDFS-13580
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ewan Higgs


testVolumeFailure is flaky. If we run it 50 times, it fails about twice with 
the following backtrace:

 
{code:java}
java.lang.Exception: test timed out after 12 milliseconds

    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1253)
    at 
org.junit.internal.runners.statements.FailOnTimeout.evaluateStatement(FailOnTimeout.java:26)
    at 
org.junit.internal.runners.statements.FailOnTimeout.evaluate(FailOnTimeout.java:17)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){code}
The second error (immediately after) is probably due to an issue with cleaning 
up a timed-out test:
{code:java}
java.io.IOException: Cannot remove data directory: 
/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/datapath
 
'/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data':
 
   
absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data
   permissions: drwx
path 
'/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs':
 
   
absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs
   permissions: drwx
path 
'/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
   
absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
   permissions: drwx
path '/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test': 
   absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test
   permissions: drwx
path '/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target': 
   absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target
   permissions: drwx
path '/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs': 
   absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project/hadoop-hdfs
   permissions: drwx
path '/Users/ehiggs/src/hadoop/hadoop-hdfs-project': 
   absolute:/Users/ehiggs/src/hadoop/hadoop-hdfs-project
   permissions: drwx
path '/Users/ehiggs/src/hadoop': 
   absolute:/Users/ehiggs/src/hadoop
   permissions: drwx
path '/Users/ehiggs/src': 
   absolute:/Users/ehiggs/src
   permissions: drwx
path '/Users/ehiggs': 
   absolute:/Users/ehiggs
   permissions: drwx
path '/Users': 
   absolute:/Users
   permissions: dr-x
path '/': 
   absolute:/
   permissions: dr-x


   at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:896)
   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:517)
   at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:476)
   at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.setUp(TestDataNodeVolumeFailure.java:125)
   at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
   at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
 






[jira] [Created] (HDFS-13579) Out of memory when running TestDFSStripedOutputStreamWithFailure testCloseWithExceptionsInStreamer

2018-05-17 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-13579:
-

 Summary: Out of memory when running 
TestDFSStripedOutputStreamWithFailure testCloseWithExceptionsInStreamer
 Key: HDFS-13579
 URL: https://issues.apache.org/jira/browse/HDFS-13579
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ewan Higgs


When running TestDFSStripedOutputStreamWithFailure#testCloseWithExceptionsInStreamer, 
we often get OOM errors. It does not happen every time, but it occurs 
frequently, and we have reproduced it on a few different machines. This seems 
to have been introduced in f83716b7f2e5b63e4c2302c374982755233d4dd6 by 
HDFS-13251.

Output from the test:
{code:java}
java.lang.OutOfMemoryError: unable to create new native thread

    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at 
io.netty.util.concurrent.SingleThreadEventExecutor.shutdownGracefully(SingleThreadEventExecutor.java:578)
    at 
io.netty.util.concurrent.MultithreadEventExecutorGroup.shutdownGracefully(MultithreadEventExecutorGroup.java:146)
    at 
io.netty.util.concurrent.AbstractEventExecutorGroup.shutdownGracefully(AbstractEventExecutorGroup.java:69)
    at 
org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.close(DatanodeHttpServer.java:270)
    at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:2023)
    at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNode(MiniDFSCluster.java:2023)
    at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:2013)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1992)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1966)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1959)
    at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailureBase.tearDown(TestDFSStripedOutputStreamWithFailureBase.java:222)
    at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testCloseWithExceptionsInStreamer(TestDFSStripedOutputStreamWithFailure.java:266)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
    at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
    at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:54)
    at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
    at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70){code}


