[jira] [Created] (HDDS-320) Failed to start container with apache/hadoop-runner image.

2018-08-02 Thread Junjie Chen (JIRA)
Junjie Chen created HDDS-320:


 Summary: Failed to start container with apache/hadoop-runner image.
 Key: HDDS-320
 URL: https://issues.apache.org/jira/browse/HDDS-320
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: document
 Environment: centos 7.4
Reporter: Junjie Chen


Following the doc in hadoop-ozone/doc/content/GettingStarted.md, the
docker-compose up -d step failed; the errors are listed below:
[root@VM_16_5_centos ozone]# docker-compose logs
Attaching to ozone_scm_1, ozone_datanode_1, ozone_ozoneManager_1
datanode_1  | Traceback (most recent call last):
datanode_1  |   File "/opt/envtoconf.py", line 104, in <module>
datanode_1  | Simple(sys.argv[1:]).main()
datanode_1  |   File "/opt/envtoconf.py", line 93, in main
datanode_1  | self.process_envs()
datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
datanode_1  | with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
datanode_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
datanode_1  | Traceback (most recent call last):
datanode_1  |   File "/opt/envtoconf.py", line 104, in <module>
datanode_1  | Simple(sys.argv[1:]).main()
datanode_1  |   File "/opt/envtoconf.py", line 93, in main
datanode_1  | self.process_envs()
datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
datanode_1  | with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:

ozoneManager_1  | with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
ozoneManager_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
ozoneManager_1  | Traceback (most recent call last):
ozoneManager_1  |   File "/opt/envtoconf.py", line 104, in <module>
ozoneManager_1  | Simple(sys.argv[1:]).main()
ozoneManager_1  |   File "/opt/envtoconf.py", line 93, in main
ozoneManager_1  | self.process_envs()
ozoneManager_1  |   File "/opt/envtoconf.py", line 67, in process_envs
ozoneManager_1  | with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
ozoneManager_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
scm_1   | Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   | self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
scm_1   | with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
scm_1   | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
scm_1   | Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   | self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
scm_1   | with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
scm_1   | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
scm_1   | Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   | self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
scm_1   | with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
scm_1   | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'

My docker-compose version is:
docker-compose version 1.22.0, build f46880fe

docker images:
apache/hadoop-runner   latest   569314fd9a73   5 weeks ago   646MB

From the Dockerfile, we can see the "chown hadoop /opt" command. It looks like
we need a "-R" here, so that the files under /opt/hadoop/etc/hadoop (such as
the log4j.properties.raw above) become writable by the hadoop user.
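If so, the fix in the apache/hadoop-runner image would be a one-line change (an assumption about the intended fix, not a committed patch):
{noformat}
# In the apache/hadoop-runner Dockerfile: chown the tree recursively so the
# hadoop user can write the generated config files under /opt/hadoop/etc/hadoop.
RUN chown -R hadoop /opt
{noformat}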





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-08-02 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/546/

[Aug 1, 2018 3:51:40 PM] (billie) YARN-8403. Change the log level for fail to 
download resource from INFO
[Aug 1, 2018 6:22:01 PM] (skumpf) YARN-8600. RegistryDNS hang when remote 
lookup does not reply.
[Aug 1, 2018 7:32:01 PM] (arp) HADOOP-15476. fix logging for split-dns 
multihome . Contributed by Ajay
[Aug 2, 2018 12:41:43 AM] (eyang) YARN-8610.  Fixed initiate upgrade error 
message.
[Aug 2, 2018 3:04:09 AM] (sunilg) YARN-8593. Add RM web service endpoint to get 
user information.
[Aug 2, 2018 6:05:22 AM] (nanda) HDDS-310. VolumeSet shutdown hook fails on 
datanode restart. Contributed
[Aug 2, 2018 7:11:06 AM] (sunilg) YARN-8594. [UI2] Display current logged in 
user. Contributed by Akhil
[Aug 2, 2018 10:40:54 AM] (sunilg) YARN-8592. [UI2] rmip:port/ui2 endpoint 
shows a blank page in windows OS
[Aug 2, 2018 12:04:17 PM] (msingh) HDDS-304. Process ContainerAction from 
datanode heartbeat in SCM.




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.cli.TestAclCLI 
   hadoop.cli.TestAclCLIWithPosixAclInheritance 
   hadoop.cli.TestCacheAdminCLI 
   hadoop.cli.TestCryptoAdminCLI 
   hadoop.cli.TestDeleteCLI 
   hadoop.cli.TestErasureCodingCLI 
   hadoop.cli.TestHDFSCLI 
   hadoop.cli.TestXAttrCLI 
   hadoop.fs.contract.hdfs.TestHDFSContractAppend 
   hadoop.fs.contract.hdfs.TestHDFSContractConcat 
   hadoop.fs.contract.hdfs.TestHDFSContractCreate 
   hadoop.fs.contract.hdfs.TestHDFSContractDelete 
   hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus 
   hadoop.fs.contract.hdfs.TestHDFSContractMkdir 
   hadoop.fs.contract.hdfs.TestHDFSContractOpen 
   hadoop.fs.contract.hdfs.TestHDFSContractPathHandle 
   hadoop.fs.contract.hdfs.TestHDFSContractRename 
   hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory 
   hadoop.fs.contract.hdfs.TestHDFSContractSeek 
   hadoop.fs.contract.hdfs.TestHDFSContractSetTimes 
   hadoop.fs.loadGenerator.TestLoadGenerator 
   hadoop.fs.permission.TestStickyBit 
   hadoop.fs.shell.TestHdfsTextCommand 
   hadoop.fs.TestEnhancedByteBufferAccess 
   hadoop.fs.TestFcHdfsCreateMkdir 
   hadoop.fs.TestFcHdfsPermission 
   hadoop.fs.TestFcHdfsSetUMask 
   hadoop.fs.TestGlobPaths 
   hadoop.fs.TestHDFSFileContextMainOperations 
   hadoop.fs.TestHDFSMultipartUploader 
   hadoop.fs.TestHdfsNativeCodeLoader 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.fs.TestSWebHdfsFileContextMainOperations 
   hadoop.fs.TestSymlinkHdfsDisable 
   hadoop.fs.TestSymlinkHdfsFileContext 
   hadoop.fs.TestSymlinkHdfsFileSystem 
   hadoop.fs.TestUnbuffer 
   hadoop.fs.TestUrlStreamHandler 
   hadoop.fs.TestWebHdfsFileContextMainOperations 
   hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot 
   hadoop.fs.viewfs.TestViewFileSystemHdfs 
   hadoop.fs.viewfs.TestViewFileSystemLinkFallback 
   hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash 
   hadoop.fs.viewfs.TestViewFileSystemWithAcls 
   hadoop.fs.viewfs.TestViewFileSystemWithTruncate 
   hadoop.fs.viewfs.TestViewFileSystemWithXAttrs 
   hadoop.fs.viewfs.TestViewFsAtHdfsRoot 
   hadoop.fs.viewfs.TestViewFsDefaultValue 
   hadoop.fs.viewfs.TestViewFsFileStatusHdfs 
   hadoop.fs.viewfs.TestViewFsHdfs 
   hadoop.fs.viewfs.TestViewFsWithAcls 
   hadoop.fs.viewfs.TestViewFsWithXAttrs 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.client.impl.TestBlockReaderRemote 
   hadoop.hdfs.client.impl.TestClientBlockVerification 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer 
   hadoop.hdfs.qjournal.client.TestEpochsAreUnique 
   hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournal 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeMXBean 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.qjournal.TestMiniJournalCluster 
   hadoop.hdfs.qjournal.TestNNWithQJM 
   hadoop.hdfs.qjournal.TestSecureNNWithQJM 
   hadoop.hdfs.security.TestDelegationToken 
   hadoop.hdfs.security.TestDelegationTokenForProxyUser 
   hadoop.hdfs.security.token.block.TestBlockToken 
   hadoop.hdfs.server.balancer.TestBalancer 
   

[VOTE] Release Apache Hadoop 3.1.1 - RC0

2018-08-02 Thread Wangda Tan
Hi folks,

I've created RC0 for Apache Hadoop 3.1.1. The artifacts are available here:

http://people.apache.org/~wangda/hadoop-3.1.1-RC0/

The RC tag in git is release-3.1.1-RC0:
https://github.com/apache/hadoop/commits/release-3.1.1-RC0

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1139/

You can find my public key at
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS

This vote will run 5 days from now.

3.1.1 contains 435 [1] fixed JIRA issues since 3.1.0.

I have done testing with a pseudo cluster and distributed shell job. My +1
to start.

Best,
Wangda Tan

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.1)
ORDER BY priority DESC


[jira] [Created] (HDDS-319) Add a test for node catchup through readStateMachineData api

2018-08-02 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-319:
--

 Summary: Add a test for node catchup through readStateMachineData 
api
 Key: HDDS-319
 URL: https://issues.apache.org/jira/browse/HDDS-319
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Mukul Kumar Singh


This jira proposes to add a new test for node catchup via the
readStateMachineData api when a node is slow or has failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-02 Thread JIRA
Íñigo Goiri created HDFS-13790:
--

 Summary: RBF: Move ClientProtocol APIs to its own module
 Key: HDFS-13790
 URL: https://issues.apache.org/jira/browse/HDFS-13790
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13789) Reduce logging frequency of QuorumJournalManager#selectInputStreams

2018-08-02 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-13789:
--

 Summary: Reduce logging frequency of 
QuorumJournalManager#selectInputStreams
 Key: HDFS-13789
 URL: https://issues.apache.org/jira/browse/HDFS-13789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, qjm
Affects Versions: HDFS-12943
Reporter: Erik Krogen
Assignee: Erik Krogen


As part of HDFS-13150, a logging statement was added to indicate whenever an
edit tail is performed via the RPC mechanism. To enable low-latency tailing,
the tail frequency must be set very low, so this log statement gets printed
much too frequently at INFO level. We should decrease it to DEBUG. Note that if
there are actually edits available to tail, other log messages will get
printed; this change targets only the case when a tail is attempted and there
are no new edits.
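A minimal sketch of the proposed change (the class and messages are illustrative assumptions, not the actual QuorumJournalManager code): the routine "tailed, nothing new" case moves to DEBUG while the informative case stays at INFO.
{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EditTailLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(EditTailLogging.class);

  /** Called after each RPC tail attempt; newEdits is the number of edits read. */
  void logTailResult(int newEdits) {
    if (newEdits == 0) {
      // The common case with a very low tail interval: nothing new,
      // so do not flood the NameNode log at INFO.
      LOG.debug("Tailed edits via RPC; no new edits were available");
    } else {
      LOG.info("Tailed {} new edits via RPC", newEdits);
    }
  }
}
{noformat}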



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-02 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13788:


 Summary: Update EC documentation about rack fault tolerance
 Key: HDFS-13788
 URL: https://issues.apache.org/jira/browse/HDFS-13788
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation, erasure-coding
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Kitti Nanasi


From http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html:
{quote}
For rack fault-tolerance, it is also important to have at least as many racks 
as the configured EC stripe width. For EC policy RS (6,3), this means minimally 
9 racks, and ideally 10 or 11 to handle planned and unplanned outages. For 
clusters with fewer racks than the stripe width, HDFS cannot maintain rack 
fault-tolerance, but will still attempt to spread a striped file across 
multiple nodes to preserve node-level fault-tolerance.
{quote}
The theoretical minimum is 3 racks: RS (6,3) tolerates the loss of any 3 of its
9 blocks, so placing 3 blocks on each of 3 racks already survives a full rack
failure. Ideally there are 9 or more racks so that each block lands on a
separate rack. The document should be updated accordingly.

(I didn't check timestamps, but this is probably because
{{BlockPlacementPolicyRackFaultTolerant}} wasn't completely done when HDFS-9088
introduced this doc. Later, examples were added in
{{TestErasureCodingMultipleRacks}} to test this explicitly.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/

[Aug 1, 2018 1:44:49 AM] (fabbri) HDFS-13322 fuse dfs - uid persists when 
switching between ticket caches.
[Aug 1, 2018 3:03:30 AM] (sunilg) YARN-8397. Potential thread leak in 
ActivitiesManager. Contributed by
[Aug 1, 2018 3:33:00 AM] (msingh) HDDS-226. Client should update block length 
in OM while committing the
[Aug 1, 2018 5:34:53 AM] (wangda) YARN-8522. Application fails with 
InvalidResourceRequestException. (Zian
[Aug 1, 2018 6:47:18 AM] (sunilg) YARN-8606. Opportunistic scheduling does not 
work post RM failover.
[Aug 1, 2018 8:57:54 AM] (sunilg) YARN-8595. [UI2] Container diagnostic 
information is missing from
[Aug 1, 2018 3:51:40 PM] (billie) YARN-8403. Change the log level for fail to 
download resource from INFO
[Aug 1, 2018 6:22:01 PM] (skumpf) YARN-8600. RegistryDNS hang when remote 
lookup does not reply.
[Aug 1, 2018 7:32:01 PM] (arp) HADOOP-15476. fix logging for split-dns 
multihome . Contributed by Ajay
[Aug 2, 2018 12:41:43 AM] (eyang) YARN-8610.  Fixed initiate upgrade error 
message.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/diff-checkstyle-root.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/branch-findbugs-hadoop-ozone_tools.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   CTEST:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/856/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
  [116K]

   unit:

   

[jira] [Created] (HDFS-13787) Add Snapshot related APIs

2018-08-02 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-13787:


 Summary: Add Snapshot related APIs
 Key: HDFS-13787
 URL: https://issues.apache.org/jira/browse/HDFS-13787
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ranith Sardar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-318) ratis INFO logs should not be shown during ozoneFs command-line execution

2018-08-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-318:
---

 Summary: ratis INFO logs should not be shown during ozoneFs command-line execution
 Key: HDDS-318
 URL: https://issues.apache.org/jira/browse/HDDS-318
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


ratis INFO logs should not be shown during ozoneFS CLI execution; a possible
log4j tweak is sketched after the snippet below.

Please find a snippet from one of the executions:

{noformat}
hadoop@08315aa4b367:~/bin$ ./ozone fs -put /etc/passwd /p2
2018-08-02 12:17:18 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 ms (default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.async.outstanding-requests.max = 100 (default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 3 (default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB (=1048576) (default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
2018-08-02 12:17:20 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 ms (default)
Aug 02, 2018 12:17:20 PM org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
..
..
..
{noformat}
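One way to hide these (an assumption about how the fix could look, not a committed change) is to raise the Ratis log level in the log4j configuration used by the ozone CLI:
{noformat}
# Hypothetical log4j.properties entry: demote Ratis (including its shaded
# gRPC packages) to WARN for shell commands.
log4j.logger.org.apache.ratis=WARN
{noformat}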
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-02 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-317:


 Summary: Use new StorageSize API for reading 
ozone.scm.container.size.gb
 Key: HDDS-317
 URL: https://issues.apache.org/jira/browse/HDDS-317
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Nanda kumar


Container size is configured using the property {{ozone.scm.container.size.gb}}.
This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize API
can be used to read the value (a sketch follows the lists below).

The property is defined in
 1. ozone-default.xml
 2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB

The default value is defined in
 1. ozone-default.xml
 2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}
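
A minimal sketch of the proposed read path (the renamed key and the "5GB"-style default are assumptions for illustration; {{Configuration#getStorageSize}} is the StorageSize API referenced above):
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.StorageUnit;

public class ContainerSizeExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Sketch only: key name and default value are assumptions.
    double containerSizeBytes = conf.getStorageSize(
        "ozone.scm.container.size", "5GB", StorageUnit.BYTES);
    System.out.println("container size in bytes: " + containerSizeBytes);
  }
}
{noformat}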



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-316) End to end testcase to test container lifecycle

2018-08-02 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-316:


 Summary: End to end testcase to test container lifecycle
 Key: HDDS-316
 URL: https://issues.apache.org/jira/browse/HDDS-316
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode, SCM
Reporter: Nanda kumar


This jira aims to add end-to-end test cases covering the container lifecycle
transitions in HDDS (a toy encoding of the state machine follows the diagram).

Container lifecycle:
{noformat}
[ALLOCATED] --(CREATE)--> [CREATING] --(CREATED)--> [OPEN] --(FINALIZE)--> [CLOSING] --(CLOSE)--> [CLOSED]
                  |                                                                                  |
              (TIMEOUT)                                                                          (DELETE)
                  |                                                                                  |
                  +------------------------------> [DELETING] <--------------------------------------+
                                                       |
                                                   (CLEANUP)
                                                       |
                                                       v
                                                   [DELETED]
{noformat}
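
As a starting point for such tests, here is a toy encoding of the diagram in plain Java (the state and event names mirror the diagram, but this class is an illustration, not the HDDS implementation). An end-to-end test can drive a real container through each event and assert that the reported state matches {{next(...)}} at every step.
{noformat}
import java.util.Collections;
import java.util.EnumMap;
import java.util.Map;

public class ContainerLifecycle {
  enum State { ALLOCATED, CREATING, OPEN, CLOSING, CLOSED, DELETING, DELETED }
  enum Event { CREATE, CREATED, FINALIZE, CLOSE, DELETE, TIMEOUT, CLEANUP }

  private static final Map<State, Map<Event, State>> TRANSITIONS =
      new EnumMap<>(State.class);
  static {
    put(State.ALLOCATED, Event.CREATE,   State.CREATING);
    put(State.CREATING,  Event.CREATED,  State.OPEN);
    put(State.CREATING,  Event.TIMEOUT,  State.DELETING);
    put(State.OPEN,      Event.FINALIZE, State.CLOSING);
    put(State.CLOSING,   Event.CLOSE,    State.CLOSED);
    put(State.CLOSED,    Event.DELETE,   State.DELETING);
    put(State.DELETING,  Event.CLEANUP,  State.DELETED);
  }

  private static void put(State from, Event event, State to) {
    TRANSITIONS.computeIfAbsent(from, k -> new EnumMap<>(Event.class))
        .put(event, to);
  }

  /** Returns the next state, or throws if the transition is illegal. */
  static State next(State from, Event event) {
    State to = TRANSITIONS.getOrDefault(from, Collections.emptyMap()).get(event);
    if (to == null) {
      throw new IllegalStateException(
          "Illegal transition: " + from + " on " + event);
    }
    return to;
  }
}
{noformat}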



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-08-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-315:
---

 Summary: ozoneShell infoKey does not work for directories created 
as key and throws 'KEY_NOT_FOUND' error
 Key: HDDS-315
 URL: https://issues.apache.org/jira/browse/HDDS-315
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


infoKey for directories created using ozoneFs does not work and throws a
'KEY_NOT_FOUND' error. However, the directory shows up in the 'listKey'
command.

In this example, 'dir1' was created using ozoneFS; infoKey for the directory
throws an error.

{noformat}
hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1
2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Command Failed : Lookup key failed, error:KEY_NOT_FOUND
hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Command Failed : Lookup key failed, error:KEY_NOT_FOUND
hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[ {
 "version" : 0,
 "md5hash" : null,
 "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
 "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
 "size" : 0,
 "keyName" : "dir1/"
}, {
 "version" : 0,
 "md5hash" : null,
 "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
 "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
 "size" : 0,
 "keyName" : "dir2/"
}, {
 "version" : 0,
 "md5hash" : null,
 "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
 "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
 "size" : 0,
 "keyName" : "dir2/b1/"
{noformat}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13786) EC: Display erasure coding policy for sub-directories is not working

2018-08-02 Thread Souryakanta Dwivedy (JIRA)
Souryakanta Dwivedy created HDFS-13786:
--

 Summary: EC: Display erasure coding policy for sub-directories is 
not working
 Key: HDFS-13786
 URL: https://issues.apache.org/jira/browse/HDFS-13786
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0
 Environment: 3 Node SUSE Linux Cluster
Reporter: Souryakanta Dwivedy
 Attachments: Display_EC_Policy_Missing_Sub_Dir.png

EC: Display erasure coding policy for sub-directories is not working

- Create a directory
 - Set an EC policy for the directory
 - Create a file inside that directory
 - Create a sub-directory inside the parent directory
 - Check the EC policy set for the files and sub-folders of the parent
directory with the command "hadoop fs -ls -e /ecdir"
 The EC policy is displayed only for files and is missing for sub-directories,
which is wrong behavior.
 - But if you check the EC policy of a sub-directory with "hdfs ec -getPolicy",
it shows the EC policy.

Actual output :-

Displaying the erasure coding policy for sub-directories does not work with the
command "hadoop fs -ls -e".

Expected output :-

The erasure coding policy should be displayed for sub-directories as well with
the command "hadoop fs -ls -e".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13785) EC: "removePolicy" is not working for built-in/system Erasure Code policies

2018-08-02 Thread Souryakanta Dwivedy (JIRA)
Souryakanta Dwivedy created HDFS-13785:
--

 Summary: EC: "removePolicy" is not working for built-in/system 
Erasure Code policies
 Key: HDFS-13785
 URL: https://issues.apache.org/jira/browse/HDFS-13785
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0
 Environment: 3 Node SUSE Linux Cluster
Reporter: Souryakanta Dwivedy


EC: "removePolicy" is not working for built-in/system Erasure Code policies

- Check the existing built-in EC policies with the command "hdfs ec -listPolicies"
- Try to remove any of the built-in EC policies; it throws the error message
"RemoteException: System erasure coding policy RS-3-2-1024k cannot be removed"
- Add user-defined EC policies
- Try to remove any user-defined policy; it is removed successfully
- But the help option specifies:
 vm1:/opt/client/install/hadoop/namenode/bin> ./hdfs ec -help removePolicy
[-removePolicy -policy <policy>]

Remove an erasure coding policy.
 <policy> The name of the erasure coding policy
vm1:/opt/client/install/hadoop/namenode/bin>

Actual result :-
 "hdfs ec -removePolicy" is not working for built-in/system EC policies,
whereas the usage description says "Remove an erasure coding policy"; it throws
the exception "RemoteException: System erasure coding policy RS-3-2-1024k
cannot be removed"

Expected output : Either the EC "removePolicy" option should be applicable to
all types of EC policies,
 or the usage should specify that EC "removePolicy" is applicable only to
user-defined EC policies, not to system EC coding policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-314) ozoneShell putKey command overwrites the existing key having same name

2018-08-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-314:
---

 Summary: ozoneShell putKey command overwrites the existing key 
having same name
 Key: HDDS-314
 URL: https://issues.apache.org/jira/browse/HDDS-314
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


Steps taken:

1) Created a volume root-volume and a bucket root-bucket.

2) Ran the following command to put a key named 'passwd':

 
{noformat}
hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd -file /etc/services -v
2018-08-02 09:20:17 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Volume Name : root-volume
Bucket Name : root-bucket
Key Name : passwd
File Hash : 567c100888518c1163b3462993de7d47
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 ms (default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.async.outstanding-requests.max = 100 (default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 3 (default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB (=1048576) (default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 ms (default)
Aug 02, 2018 9:20:18 AM org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
{noformat}
3) Ran the following command to put a key named 'passwd' again:
{noformat}
hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd -file /etc/passwd -v
2018-08-02 09:20:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Volume Name : root-volume
Bucket Name : root-bucket
Key Name : passwd
File Hash : b056233571cc80d6879212911cb8e500
2018-08-02 09:20:41 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 ms (default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.async.outstanding-requests.max = 100 (default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 3 (default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB (=1048576) (default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 ms (default)
Aug 02, 2018 9:20:42 AM org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
{noformat}
 

Key 'passwd' was overwritten with the new content, and no error was thrown
saying that the key is already present.

Expectation :

---

Overwriting an existing key with the same name should not be allowed (see the
sketch below).
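
A toy illustration of the expected semantics in plain Java (made-up names, not the OzoneManager code): a put on an existing key fails unless the caller explicitly requests an overwrite.
{noformat}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class KeyTable {
  private final Map<String, byte[]> keys = new ConcurrentHashMap<>();

  /** Stores a key, rejecting silent overwrites of an existing key. */
  void putKey(String name, byte[] data, boolean overwrite) throws IOException {
    if (!overwrite && keys.containsKey(name)) {
      throw new IOException("KEY_ALREADY_EXISTS: " + name);
    }
    keys.put(name, data);
  }
}
{noformat}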



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org