[jira] [Updated] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available

2018-08-02 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-12716:
-
Attachment: HDFS-12716-branch-2.patch

>  'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes 
> to be available
> -
>
> Key: HDFS-12716
> URL: https://issues.apache.org/jira/browse/HDFS-12716
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: usharani
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-12716-branch-2.patch, HDFS-12716.002.patch, 
> HDFS-12716.003.patch, HDFS-12716.004.patch, HDFS-12716.005.patch, 
> HDFS-12716.006.patch, HDFS-12716.patch, HDFS-12716_branch-2.patch
>
>
>   Currently 'dfs.datanode.failed.volumes.tolerated' takes the number of 
> failed volumes to tolerate, and changing this configuration requires a 
> restart of the datanode. Since datanode volumes can be changed dynamically, 
> keeping this configuration the same for all datanodes may not be a good idea.
> Support 'dfs.datanode.failed.volumes.tolerated' accepting a special 
> negative value 'x' to tolerate failures of up to "n-x" volumes.
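A minimal sketch of one way the negative-value semantics could work, assuming a negative value means "keep at least |x| volumes healthy" (the method name and exact rule are illustrative, not the committed patch):

{code:java}
class FailedVolumesPolicy {
  // Illustrative only: a negative configured value is read as a minimum
  // number of healthy volumes, so the tolerated-failure count follows the
  // datanode's current volume count instead of requiring a restart.
  static int effectiveFailedVolumesTolerated(int configured, int totalVolumes) {
    if (configured >= 0) {
      return configured;                // existing behaviour: fixed count
    }
    int minHealthy = -configured;       // e.g. -1 => keep at least one volume
    return Math.max(totalVolumes - minHealthy, 0);
  }
}
{code}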



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available

2018-08-02 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567815#comment-16567815
 ] 

genericqa commented on HDFS-12716:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12716 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12716 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934214/HDFS-12716_branch-2.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24689/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



>  'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes 
> to be available
> -
>
> Key: HDFS-12716
> URL: https://issues.apache.org/jira/browse/HDFS-12716
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: usharani
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-12716.002.patch, HDFS-12716.003.patch, 
> HDFS-12716.004.patch, HDFS-12716.005.patch, HDFS-12716.006.patch, 
> HDFS-12716.patch, HDFS-12716_branch-2.patch
>
>
>   Currently 'dfs.datanode.failed.volumes.tolerated' takes the number of 
> failed volumes to tolerate, and changing this configuration requires a 
> restart of the datanode. Since datanode volumes can be changed dynamically, 
> keeping this configuration the same for all datanodes may not be a good idea.
> Support 'dfs.datanode.failed.volumes.tolerated' accepting a special 
> negative value 'x' to tolerate failures of up to "n-x" volumes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available

2018-08-02 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567817#comment-16567817
 ] 

Ranith Sardar commented on HDFS-12716:
--

Hi [~brahmareddy] and [~linyiqun], I have attached a new patch for branch-2. 
Please review it once.

>  'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes 
> to be available
> -
>
> Key: HDFS-12716
> URL: https://issues.apache.org/jira/browse/HDFS-12716
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: usharani
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-12716.002.patch, HDFS-12716.003.patch, 
> HDFS-12716.004.patch, HDFS-12716.005.patch, HDFS-12716.006.patch, 
> HDFS-12716.patch, HDFS-12716_branch-2.patch
>
>
>   Currently 'dfs.datanode.failed.volumes.tolerated' takes the number of 
> failed volumes to tolerate, and changing this configuration requires a 
> restart of the datanode. Since datanode volumes can be changed dynamically, 
> keeping this configuration the same for all datanodes may not be a good idea.
> Support 'dfs.datanode.failed.volumes.tolerated' accepting a special 
> negative value 'x' to tolerate failures of up to "n-x" volumes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available

2018-08-02 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-12716:
-
Attachment: HDFS-12716_branch-2.patch

>  'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes 
> to be available
> -
>
> Key: HDFS-12716
> URL: https://issues.apache.org/jira/browse/HDFS-12716
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: usharani
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-12716.002.patch, HDFS-12716.003.patch, 
> HDFS-12716.004.patch, HDFS-12716.005.patch, HDFS-12716.006.patch, 
> HDFS-12716.patch, HDFS-12716_branch-2.patch
>
>
>   Currently 'dfs.datanode.failed.volumes.tolerated' takes the number of 
> failed volumes to tolerate, and changing this configuration requires a 
> restart of the datanode. Since datanode volumes can be changed dynamically, 
> keeping this configuration the same for all datanodes may not be a good idea.
> Support 'dfs.datanode.failed.volumes.tolerated' accepting a special 
> negative value 'x' to tolerate failures of up to "n-x" volumes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-230) ContainerStateMachine should provide readStateMachineData api to read data from Containers if required during replication

2018-08-02 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567779#comment-16567779
 ] 

Mukul Kumar Singh commented on HDDS-230:


Thanks for the review [~szetszwo].

I feel that the state machine should reconstruct the LogEntryProto: it is the 
state machine's responsibility to build the new LogEntryProto, and it can do 
so easily because it already constructed that object in startTransaction in 
ContainerStateMachine.

Also, in writeStateMachineData right now the LogEntryProto object is being 
passed in place of just the stateMachineData.
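A rough sketch of the idea under discussion, assuming a simple per-index cache (the class and method shapes are illustrative, not the actual Ratis/Ozone interfaces):

{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative: the state machine remembers the data it wrote per log index,
// so readStateMachineData can return it during replication and the caller
// can rebuild the log entry without re-reading the container.
class CachingStateMachine {
  private final Map<Long, byte[]> dataByIndex = new ConcurrentHashMap<>();

  CompletableFuture<Void> writeStateMachineData(long index, byte[] data) {
    dataByIndex.put(index, data);
    return CompletableFuture.completedFuture(null);
  }

  CompletableFuture<byte[]> readStateMachineData(long index) {
    return CompletableFuture.completedFuture(dataByIndex.get(index));
  }
}
{code}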


> ContainerStateMachine should provide readStateMachineData api to read data 
> from Containers if required during replication
> -
>
> Key: HDDS-230
> URL: https://issues.apache.org/jira/browse/HDDS-230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-230.001.patch, HDDS-230.002.patch, 
> HDDS-230.003.patch
>
>
> Ozone datanode exits during data write with the following exception.
> {code}
> 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: 
> Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index 
> to:4565
> 2018-07-05 14:10:01,607 ERROR 
> org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit 
> status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the 
> StateMachineUpdater hits Throwable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> This might be the result of a ratis transaction that was not written 
> through the "writeStateMachineData" phase but was nevertheless added to the 
> raft log. This implies that the stateMachineUpdater now applies a transaction 
> without the corresponding entry having been added to the stateMachine.
> I am raising this jira to track the issue and will also raise a Ratis jira if 
> required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-320) Failed to start container with apache/hadoop-runner image.

2018-08-02 Thread Junjie Chen (JIRA)
Junjie Chen created HDDS-320:


 Summary: Failed to start container with apache/hadoop-runner image.
 Key: HDDS-320
 URL: https://issues.apache.org/jira/browse/HDDS-320
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: document
 Environment: centos 7.4
Reporter: Junjie Chen


Following the doc in hadoop-ozone/doc/content/GettingStarted.md, the 
'docker-compose up -d' step failed; the errors are listed below:
[root@VM_16_5_centos ozone]# docker-compose logs
Attaching to ozone_scm_1, ozone_datanode_1, ozone_ozoneManager_1
datanode_1  | Traceback (most recent call last):
datanode_1  |   File "/opt/envtoconf.py", line 104, in <module>
datanode_1  |     Simple(sys.argv[1:]).main()
datanode_1  |   File "/opt/envtoconf.py", line 93, in main
datanode_1  |     self.process_envs()
datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
datanode_1  |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
datanode_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
datanode_1  | Traceback (most recent call last):
datanode_1  |   File "/opt/envtoconf.py", line 104, in <module>
datanode_1  |     Simple(sys.argv[1:]).main()
datanode_1  |   File "/opt/envtoconf.py", line 93, in main
datanode_1  |     self.process_envs()
datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
datanode_1  |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:

ozoneManager_1  |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
ozoneManager_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
ozoneManager_1  | Traceback (most recent call last):
ozoneManager_1  |   File "/opt/envtoconf.py", line 104, in <module>
ozoneManager_1  |     Simple(sys.argv[1:]).main()
ozoneManager_1  |   File "/opt/envtoconf.py", line 93, in main
ozoneManager_1  |     self.process_envs()
ozoneManager_1  |   File "/opt/envtoconf.py", line 67, in process_envs
ozoneManager_1  |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
ozoneManager_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
scm_1   |     Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   |     self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
scm_1   |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
scm_1   | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
scm_1   |     Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   |     self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
scm_1   |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
scm_1   | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
scm_1   | Traceback (most recent call last):
scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
scm_1   |     Simple(sys.argv[1:]).main()
scm_1   |   File "/opt/envtoconf.py", line 93, in main
scm_1   |     self.process_envs()
scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
scm_1   |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
scm_1   | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'

my docker-compose version is:
docker-compose version 1.22.0, build f46880fe

docker images:
apache/hadoop-runner   latest   569314fd9a73   5 weeks ago   646MB

From the Dockerfile, we can see the "chown hadoop /opt" command. It looks like 
we need a "-R" here?
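A minimal sketch of the suggested fix, assuming the image's Dockerfile currently performs a non-recursive chown (illustrative, not a committed change):

{code}
# Recurse so files under /opt/hadoop/etc/hadoop are writable by the
# hadoop user when the container starts.
RUN chown -R hadoop /opt
{code}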





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


[jira] [Commented] (HDFS-13269) After too many open file exception occurred, the standby NN never do checkpoint

2018-08-02 Thread maobaolong (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567653#comment-16567653
 ] 

maobaolong commented on HDFS-13269:
---

It is indeed an item to be improved.

> After too many open file exception occurred, the standby NN never do 
> checkpoint
> ---
>
> Key: HDFS-13269
> URL: https://issues.apache.org/jira/browse/HDFS-13269
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Priority: Major
>
> Do saveNamespace in dfsadmin.
> The output is as follows:
>  
> {code:java}
> saveNamespace: No image directories available!
> {code}
> The Namenode log show:
>  
>  
> {code:java}
> [2018-01-13T10:32:19.903+08:00] [INFO] [Standby State Checkpointer] : 
> Triggering checkpoint because there have been 10159265 txns since the last 
> checkpoint, which exceeds the configured threshold 1000
> [2018-01-13T10:32:19.903+08:00] [INFO] [Standby State Checkpointer] : Save 
> namespace ...
> ...
> [2018-01-13T10:37:10.539+08:00] [WARN] [1985938863@qtp-61073295-1 - Acceptor0 
> HttpServer2$SelectChannelConnectorWithSafeStartup@HOST_A:50070] : EXCEPTION 
> java.io.IOException: Too many open files
> at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
> at 
> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
> at 
> sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
> at 
> org.mortbay.jetty.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:75)
> at 
> org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:686)
> at 
> org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
> at 
> org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
> at 
> org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> [2018-01-13T10:37:15.421+08:00] [ERROR] [FSImageSaver for /data0/nn of type 
> IMAGE_AND_EDITS] : Unable to save image for /data0/nn
> java.io.FileNotFoundException: 
> /data0/nn/current/fsimage_40247283317.md5.tmp (Too many open files)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
> at 
> org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58)
> at 
> org.apache.hadoop.hdfs.util.MD5FileUtils.saveMD5File(MD5FileUtils.java:157)
> at 
> org.apache.hadoop.hdfs.util.MD5FileUtils.saveMD5File(MD5FileUtils.java:149)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:990)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:1039)
> at java.lang.Thread.run(Thread.java:745)
> [2018-01-13T10:37:15.421+08:00] [ERROR] [Standby State Checkpointer] : Error 
> reported on storage directory Storage Directory /data0/nn
> [2018-01-13T10:37:15.421+08:00] [WARN] [Standby State Checkpointer] : About 
> to remove corresponding storage: /data0/nn
> [2018-01-13T10:37:15.429+08:00] [ERROR] [Standby State Checkpointer] : 
> Exception in doCheckpoint
> java.io.IOException: Failed to save in any storage directories while saving 
> namespace.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1176)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveNamespace(FSImage.java:1107)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:185)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.access$1400(StandbyCheckpointer.java:62)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.doWork(StandbyCheckpointer.java:353)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.access$700(StandbyCheckpointer.java:260)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread$1.run(StandbyCheckpointer.java:280)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.run(StandbyCheckpointer.java:276)
> ...
> [2018-01-13T15:52:33.783+08:00] [INFO] [Standby State Checkpointer] : Save 
> namespace ...
> [2018-01-13T15:52:33.783+08:00] [ERROR] [Standby State Checkpointer] 

[jira] [Commented] (HDFS-13749) Implement a new client protocol method to get NameNode state

2018-08-02 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567641#comment-16567641
 ] 

genericqa commented on HDFS-13749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
12s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
51s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client generated 5 new 
+ 0 unchanged - 0 fixed = 5 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 30s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}206m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.protocol.TestReadOnly |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 |
| JIRA Issue | HDFS-13749 |
| JIRA Patch URL | 

[jira] [Commented] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567621#comment-16567621
 ] 

Hudson commented on HDDS-312:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14701 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14701/])
HDDS-312. Add blockIterator to Container. Contributed by Bharat (xyao: rev 
40ab8ee597d730fa2a8a386ef25b0dbecd4e839c)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
* (edit) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/keyvalue/TestKeyValueContainer.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java


> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-312.00.patch
>
>
> This Jira adds the newly introduced blockIterator to Container and its 
> implementing class KeyValueContainer.
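A hedged sketch of what such an addition might look like (the names, nesting, and return types are assumptions based on the summary, not the committed signature):

{code:java}
import java.io.IOException;

// Illustrative only: expose block iteration on the Container interface so
// callers can walk the blocks stored in a container such as KeyValueContainer.
interface Container {
  BlockIterator blockIterator() throws IOException;

  interface BlockIterator {
    boolean hasNext() throws IOException;
    Object nextBlock() throws IOException; // concrete block type elided
  }
}
{code}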



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-312:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~bharatviswa] for the contribution. I've committed the patch to trunk. 

> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-312.00.patch
>
>
> This Jira adds the newly introduced blockIterator to Container and its 
> implementing class KeyValueContainer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567594#comment-16567594
 ] 

Xiaoyu Yao commented on HDDS-312:
-

Thanks [~bharatviswa] for working on this, patch v0 LGTM, +1. I will commit it 
shortly.

> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-312.00.patch
>
>
> This Jira adds the newly introduced blockIterator to Container and its 
> implementing class KeyValueContainer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-268) Add SCM close container watcher

2018-08-02 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567583#comment-16567583
 ] 

Xiaoyu Yao commented on HDDS-268:
-

[~ajayydv], thanks for working on this. Can you rebase the patch to trunk, as 
it no longer applies?

> Add SCM close container watcher
> ---
>
> Key: HDDS-268
> URL: https://issues.apache.org/jira/browse/HDDS-268
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-268.00.patch, HDDS-268.01.patch, HDDS-268.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13785) EC: "removePolicy" is not working for built-in/system Erasure Code policies

2018-08-02 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13785:
---
Component/s: documentation

> EC: "removePolicy" is not working for built-in/system Erasure Code policies
> ---
>
> Key: HDFS-13785
> URL: https://issues.apache.org/jira/browse/HDFS-13785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SUSE Linux Cluster
>Reporter: Souryakanta Dwivedy
>Priority: Minor
>
> EC: "removePolicy" is not working for built-in/system Erasure Code policies
> - Check the existing built-in EC policies with the command "hdfs ec -listPolicies"
> - Try to remove any of the built-in EC policies; it will throw an error message: 
> "RemoteException: System erasure coding policy RS-3-2-1024k cannot be removed"
> - Add user-defined EC policies
> - Try to remove any user-defined policy; it will be removed successfully
> - But the help option specifies:
>  vm1:/opt/client/install/hadoop/namenode/bin> ./hdfs ec -help removePolicy
> [-removePolicy -policy <policy>]
> Remove an erasure coding policy.
> <policy>  The name of the erasure coding policy
> vm1:/opt/client/install/hadoop/namenode/bin>
> Actual result:
>  hdfs ec -removePolicy does not work for built-in/system EC policies, whereas 
> the usage description says "Remove an erasure coding policy". It throws the 
> exception "RemoteException: System erasure coding policy RS-3-2-1024k cannot 
> be removed".
> Expected output: Either the EC "removePolicy" option should be applicable to 
> all types of EC policies, or the usage must state that EC "removePolicy" 
> applies only to user-defined EC policies, not to system EC policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13785) EC: "removePolicy" is not working for built-in/system Erasure Code policies

2018-08-02 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567549#comment-16567549
 ] 

Xiao Chen commented on HDFS-13785:
--

Thanks [~SouryakantaDwivedy] for the report and [~jojochuang] for the comment.

Yes, add and remove should apply only to custom EC policies. We should do a 
documentation / message update here.

> EC: "removePolicy" is not working for built-in/system Erasure Code policies
> ---
>
> Key: HDFS-13785
> URL: https://issues.apache.org/jira/browse/HDFS-13785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SUSE Linux Cluster
>Reporter: Souryakanta Dwivedy
>Priority: Minor
>
> EC: "removePolicy" is not working for built-in/system Erasure Code policies
> - Check the existing built-in EC policies with the command "hdfs ec -listPolicies"
> - Try to remove any of the built-in EC policies; it will throw an error message: 
> "RemoteException: System erasure coding policy RS-3-2-1024k cannot be removed"
> - Add user-defined EC policies
> - Try to remove any user-defined policy; it will be removed successfully
> - But the help option specifies:
>  vm1:/opt/client/install/hadoop/namenode/bin> ./hdfs ec -help removePolicy
> [-removePolicy -policy <policy>]
> Remove an erasure coding policy.
> <policy>  The name of the erasure coding policy
> vm1:/opt/client/install/hadoop/namenode/bin>
> Actual result:
>  hdfs ec -removePolicy does not work for built-in/system EC policies, whereas 
> the usage description says "Remove an erasure coding policy". It throws the 
> exception "RemoteException: System erasure coding policy RS-3-2-1024k cannot 
> be removed".
> Expected output: Either the EC "removePolicy" option should be applicable to 
> all types of EC policies, or the usage must state that EC "removePolicy" 
> applies only to user-defined EC policies, not to system EC policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13697) DFSClient should instantiate and cache KMSClientProvider using UGI at creation time for consistent UGI handling

2018-08-02 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567537#comment-16567537
 ] 

Xiao Chen commented on HDFS-13697:
--

Hi [~zvenczel], thanks for revving. I tried to explore a few options this week 
but have still only done a partial review. Posting here, and looking forward to 
discussing this with you offline later when you're back from vacation.

- Ideally we want to do the same as DFSClient, where a UGI from 
{{UGI#getCurrentUser}} is simply cached at construction time and used for later 
auths. I tried that, but it caused test failures in TestKMS with the 
{{doWebHDFSProxyUserTest}} tests and {{testTGTRenewal}} - for the sake of 
compatibility I think we can do something like this to allow the tests to pass.
{code}
// in KMSCP ctor: prefer the real (proxy-initiating) user when present
UserGroupInformation current = UserGroupInformation.getCurrentUser();
ugi = current.getRealUser() == null ? current : current.getRealUser();
{code}
[~daryn] [~xyao] [~jnp] what do you think?

Other smaller review comments:
- We don't need {{cachedProxyUgi}}; {{getDoAsUser}} can figure things out 
from the cached ugi if we do the above
- {{ugiToUse}} doesn't seem necessary
- Could you explain why the {{setLoginUser}} lines were removed in TestKMS? I'd 
like to make sure existing tests pass as-is, if possible. 

DFSClient:
- thanks for the explanation above! Good to learn about the guava Suppliers. I 
think your previous patch was fine. I was hoping we don't have to cache the 
Supplier object in my last comment, but it simplifies the code so let's go with 
it.
- the new com.google imports should be placed next to other existing imports of 
that module.
- I would not call the KeyProvider variable {{testKeyProvider}} - it's used for 
all purposes. Just the {{VisibleForTesting}} annotation on {{setKeyProvider}} 
would be enough, which you already have.
- The new patch's {{KeyProviderSupplier#isKeyProviderCreated}} doesn't seem 
necessary. We can't prevent the caller calling {{getKeyProvider}} after calling 
{{close}} here from that check. (We probably can add a guard in DFSClient to 
prevent all API calls after {{close}}, but that's separate from this jira.)
- Although callers seem to check the provider for null, if DFSClient failed 
to create a key provider it's preferred to throw immediately. 
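For readers less familiar with the proxy-user pattern at issue here, a small illustration (the user name and the surrounding class are made up; this is not the patch itself):

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUgiExample {
  public static void main(String[] args) throws Exception {
    // A client constructed inside doAs should capture the current UGI at
    // creation time, so later KMS calls (e.g. EDEK decryption) keep running
    // as the proxied user instead of falling back to the login user.
    UserGroupInformation login = UserGroupInformation.getLoginUser();
    UserGroupInformation proxy =
        UserGroupInformation.createProxyUser("example_user", login);
    proxy.doAs((PrivilegedExceptionAction<Void>) () -> {
      // create the DFSClient / FileSystem here; it should cache this UGI
      return null;
    });
  }
}
{code}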

> DFSClient should instantiate and cache KMSClientProvider using UGI at 
> creation time for consistent UGI handling
> ---
>
> Key: HDFS-13697
> URL: https://issues.apache.org/jira/browse/HDFS-13697
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13697.01.patch, HDFS-13697.02.patch, 
> HDFS-13697.03.patch, HDFS-13697.04.patch, HDFS-13697.05.patch, 
> HDFS-13697.06.patch
>
>
> While calling KeyProviderCryptoExtension decryptEncryptedKey, the call stack 
> might not have a doAs privileged execution call (in the DFSClient, for 
> example). This results in losing the proxy user from UGI, as 
> UGI.getCurrentUser finds no AccessControllerContext and does a re-login for 
> the login user only.
> This can cause the following for example: if we have set up the oozie user to 
> be entitled to perform actions on behalf of example_user but oozie is 
> forbidden to decrypt any EDEK (for security reasons), due to the above issue, 
> example_user entitlements are lost from UGI and the following error is 
> reported:
> {code}
> [0] 
> SERVER[xxx] USER[example_user] GROUP[-] TOKEN[] APP[Test_EAR] 
> JOB[0020905-180313191552532-oozie-oozi-W] 
> ACTION[0020905-180313191552532-oozie-oozi-W@polling_dir_path] Error starting 
> action [polling_dir_path]. ErrorType [ERROR], ErrorCode [FS014], Message 
> [FS014: User [oozie] is not authorized to perform [DECRYPT_EEK] on key with 
> ACL name [encrypted_key]!!]
> org.apache.oozie.action.ActionExecutorException: FS014: User [oozie] is not 
> authorized to perform [DECRYPT_EEK] on key with ACL name [encrypted_key]!!
>  at 
> org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:463)
>  at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:441)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.touchz(FsActionExecutor.java:523)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.doOperations(FsActionExecutor.java:199)
>  at 
> org.apache.oozie.action.hadoop.FsActionExecutor.start(FsActionExecutor.java:563)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:232)
>  at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
>  at org.apache.oozie.command.XCommand.call(XCommand.java:286)
>  at 
> 

[jira] [Commented] (HDFS-13767) Add msync server implementation.

2018-08-02 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567534#comment-16567534
 ] 

genericqa commented on HDFS-13767:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  5m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
57s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
24s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
17s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 35s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}253m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 |
| JIRA Issue | HDFS-13767 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934160/HDFS-13767-HDFS-12943.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 

[jira] [Commented] (HDFS-13789) Reduce logging frequency of QuorumJournalManager#selectInputStreams

2018-08-02 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567496#comment-16567496
 ] 

Chao Sun commented on HDFS-13789:
-

Thanks [~xkrogen]. I'm wondering if we should reduce log frequency for some 
other messages too, such as the ones from {{FsImage}} and 
{{EditLogInputStream}}?

> Reduce logging frequency of QuorumJournalManager#selectInputStreams
> ---
>
> Key: HDFS-13789
> URL: https://issues.apache.org/jira/browse/HDFS-13789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, qjm
>Affects Versions: HDFS-12943
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
> Attachments: HDFS-13789-HDFS-12943.000.patch
>
>
> As part of HDFS-13150, a logging statement was added to indicate whenever an 
> edit tail is performed via the RPC mechanism. To enable low-latency tailing, 
> the tail frequency must be set very low, so this log statement gets printed 
> much too frequently at an INFO level. We should decrease it to DEBUG. Note 
> that if there are actually edits available to tail, other log messages will 
> get printed; this is just targeting the case when it attempts to tail and 
> there are no new edits.
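A minimal sketch of the kind of change described, assuming SLF4J-style logging (illustrative, not the exact patch):

{code:java}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class TailLogging {
  private static final Logger LOG = LoggerFactory.getLogger(TailLogging.class);

  // When a tail attempt finds no new edits, log at DEBUG so a very low tail
  // interval does not flood the log at INFO.
  static void logTailResult(List<?> streams, long fromTxnId) {
    if (streams.isEmpty()) {
      LOG.debug("selectInputStreams({}): no new edits available", fromTxnId);
    } else {
      LOG.info("Selected {} input stream(s) starting at txid {}",
          streams.size(), fromTxnId);
    }
  }
}
{code}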



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Implement a new client protocol method to get NameNode state

2018-08-02 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Attachment: HDFS-13749-HDFS-12943.000.patch

> Implement a new client protocol method to get NameNode state
> 
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> Currently {{HAServiceProtocol#getServiceStatus}} requires super user 
> privilege. Therefore, as a temporary solution, in HDFS-12976 we discover 
> NameNode state by calling {{reportBadBlocks}}. Here, we'll properly implement 
> this by adding a new method in client protocol to get the NameNode state.
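A hedged sketch of the shape such a method could take (the actual name and return type are whatever the attached patch defines; {{getHAServiceState}} here is an assumption):

{code:java}
import java.io.IOException;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

// Illustrative addition to the client protocol: let any client query the
// NameNode's HA state without requiring superuser privilege.
interface ClientProtocolSketch {
  HAServiceState getHAServiceState() throws IOException;
}
{code}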



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13749) Implement a new client protocol method to get NameNode state

2018-08-02 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-13749:

Status: Patch Available  (was: Open)

> Implement a new client protocol method to get NameNode state
> 
>
> Key: HDFS-13749
> URL: https://issues.apache.org/jira/browse/HDFS-13749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13749-HDFS-12943.000.patch
>
>
> Currently {{HAServiceProtocol#getServiceStatus}} requires super user 
> privilege. Therefore, as a temporary solution, in HDFS-12976 we discover 
> NameNode state by calling {{reportBadBlocks}}. Here, we'll properly implement 
> this by adding a new method in client protocol to get the NameNode state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567481#comment-16567481
 ] 

genericqa commented on HDDS-312:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-312 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934170/HDDS-312.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 724b7a85ecef 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 889df6f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/686/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/686/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: 

[jira] [Comment Edited] (HDFS-13515) NetUtils#connect should log remote address for NoRouteToHostException

2018-08-02 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471170#comment-16471170
 ] 

Ted Yu edited comment on HDFS-13515 at 8/2/18 8:27 PM:
---

Can you log the remote address in case of exception?

Thanks


was (Author: yuzhih...@gmail.com):
Can you log the remote address in case of exception ?

Thanks

> NetUtils#connect should log remote address for NoRouteToHostException
> -
>
> Key: HDFS-13515
> URL: https://issues.apache.org/jira/browse/HDFS-13515
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> hdfs.BlockReaderFactory: I/O error constructing remote block reader.
> java.net.NoRouteToHostException: No route to host
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:2884)
> {code}
> In the above stack trace, the remote host was not logged.
> This makes troubleshooting a bit hard.
> NetUtils#connect should log the remote address for NoRouteToHostException.
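A small hedged sketch of the improvement (illustrative; not the actual NetUtils code):

{code:java}
import java.io.IOException;
import java.net.NoRouteToHostException;
import java.net.Socket;
import java.net.SocketAddress;

class ConnectWithContext {
  // Rethrow with the remote endpoint in the message so the unreachable host
  // shows up in traces like the one quoted above.
  static void connect(Socket socket, SocketAddress endpoint, int timeout)
      throws IOException {
    try {
      socket.connect(endpoint, timeout);
    } catch (NoRouteToHostException e) {
      NoRouteToHostException wrapped =
          new NoRouteToHostException("No route to host: " + endpoint);
      wrapped.initCause(e);
      throw wrapped;
    }
  }
}
{code}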



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-230) ContainerStateMachine should provide readStateMachineData api to read data from Containers if required during replication

2018-08-02 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567443#comment-16567443
 ] 

Tsz Wo Nicholas Sze commented on HDDS-230:
--

Hi [~msingh], it seems that readStateMachineData should return a 
CompletableFuture so that applications like Ozone don't need to deal with the 
logic of adding stateMachineData back to the LogEntryProto. What do you think?

> ContainerStateMachine should provide readStateMachineData api to read data 
> from Containers if required during replication
> -
>
> Key: HDDS-230
> URL: https://issues.apache.org/jira/browse/HDDS-230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-230.001.patch, HDDS-230.002.patch, 
> HDDS-230.003.patch
>
>
> Ozone datanode exits during data write with the following exception.
> {code}
> 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: 
> Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index 
> to:4565
> 2018-07-05 14:10:01,607 ERROR 
> org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit 
> status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the 
> StateMachineUpdater hits Throwable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> This might be the result of a ratis transaction that was not written 
> through the "writeStateMachineData" phase but was nevertheless added to the 
> raft log. This implies that the stateMachineUpdater now applies a transaction 
> without the corresponding entry having been added to the stateMachine.
> I am raising this jira to track the issue and will also raise a Ratis jira if 
> required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567418#comment-16567418
 ] 

Bharat Viswanadham commented on HDDS-312:
-

Fixed a few checkstyle issues found in the same code file in this patch.

> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-312.00.patch
>
>
> This Jira adds the newly introduced blockIterator to Container and its 
> implementing class KeyValueContainer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-312:

Status: Patch Available  (was: In Progress)

> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-312.00.patch
>
>
> This Jira is to add the newly added blockIterator to Container and its 
> implementing class KeyValueContainer.






[jira] [Updated] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-312:

Attachment: HDDS-312.00.patch

> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.2.1
>
> Attachments: HDDS-312.00.patch
>
>
> This Jira is to add the newly added blockIterator to Container and its 
> implementing class KeyValueContainer.






[jira] [Commented] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers

2018-08-02 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567416#comment-16567416
 ] 

Nanda kumar commented on HDDS-298:
--

Thanks [~ajayydv] for working on this. The patch looks pretty good to me, and 
thanks for creating HDDS-311. One minor comment: we should throw an exception 
explicitly instead of throwing IllegalArgumentException through 
{{Preconditions.checkArgument(dnWithReplicas.size() > 0, "No replicas found for 
given container.")}}. We should probably propagate this to the client; that can 
be done in a separate jira.
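
A minimal sketch of that suggestion; the exception type below is an assumption, not necessarily what the patch will end up using:
{code:java}
import java.io.IOException;
import java.util.List;

final class ReplicaChecks {
  private ReplicaChecks() { }

  // Explicit, client-propagatable failure instead of the
  // IllegalArgumentException thrown by Preconditions.checkArgument.
  static void verifyReplicasExist(List<?> dnWithReplicas) throws IOException {
    if (dnWithReplicas.isEmpty()) {
      throw new IOException("No replicas found for given container.");
    }
  }
}
{code}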

> Implement SCMClientProtocolServer.getContainerWithPipeline for closed 
> containers
> 
>
> Key: HDDS-298
> URL: https://issues.apache.org/jira/browse/HDDS-298
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Ajay Kumar
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-298.00.patch, HDDS-298.01.patch, HDDS-298.02.patch, 
> HDDS-298.03.patch
>
>
> As [~ljain] mentioned during the review of HDDS-245, 
> SCMClientProtocolServer.getContainerWithPipeline doesn't return good 
> data for closed containers. For closed containers we maintain the 
> datanodes for a containerId in ContainerStateMap.contReplicaMap. We need 
> to create a fake Pipeline object on request and return it so the client can 
> locate the right datanodes to download data. 
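
A self-contained sketch of that idea; the types below stand in for the real SCM Pipeline/DatanodeDetails classes and are not the actual API:
{code:java}
import java.util.List;

// Stand-in for the real Pipeline class, carrying just enough for lookup.
record FakePipeline(long containerId, List<String> datanodeUuids) { }

final class ClosedContainerPipelines {
  private ClosedContainerPipelines() { }

  // A closed container has no live Raft pipeline, so we synthesize one
  // from the replica map purely so the client can locate the datanodes.
  static FakePipeline forClosedContainer(long containerId,
      List<String> replicaUuids) {
    return new FakePipeline(containerId, List.copyOf(replicaUuids));
  }
}
{code}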






[jira] [Updated] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-312:

Priority: Minor  (was: Major)

> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 0.2.1
>
>
> This Jira is to add the newly added blockIterator to Container and its 
> implementing class KeyValueContainer.






[jira] [Commented] (HDFS-13789) Reduce logging frequency of QuorumJournalManager#selectInputStreams

2018-08-02 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567412#comment-16567412
 ] 

genericqa commented on HDFS-13789:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
39s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 |
| JIRA Issue | HDFS-13789 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934101/HDFS-13789-HDFS-12943.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2bbfba541c3a 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 2dad24f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24686/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24686/testReport/ |
| Max. process+thread count | 3407 (vs. ulimit of 1) |
| modules | C: 

[jira] [Work started] (HDDS-312) Add blockIterator to Container

2018-08-02 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-312 started by Bharat Viswanadham.
---
> Add blockIterator to Container
> --
>
> Key: HDDS-312
> URL: https://issues.apache.org/jira/browse/HDDS-312
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
>
> This Jira is to add the newly added blockIterator to Container and its 
> implementing class KeyValueContainer.






[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF

2018-08-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567349#comment-16567349
 ] 

Íñigo Goiri commented on HDFS-13655:


All the tests I mentioned were done through the dfsadmin.
For the safemode everything works as expected.
For saveNamespace and rollingUpgrade it looks like they did the job, but we 
don't use them day to day, so I'm not sure about corner cases.
To give you an example, we had to do HDFS-13490 to fix safe mode.

> RBF: Add missing ClientProtocol APIs to RBF
> ---
>
> Key: HDFS-13655
> URL: https://issues.apache.org/jira/browse/HDFS-13655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> As 
> [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975]
>  with [~elgoiri], there are some HDFS methods that do not take a path as a 
> parameter. We should support these to work with federation.
> The ones missing are:
>  * Snapshots
>  * Storage policies
>  * Encryption zones
>  * Cache pools
> One way to reasonably have them work with federation is to 'list' each 
> nameservice and concat the results. This can be done pretty much the same as 
> {{refreshNodes()}}, and it would be a matter of querying all the subclusters 
> and aggregating the output (e.g., {{getDatanodeReport()}}.)
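
A minimal sketch of this fan-out pattern; {{NsCall}} and {{invokeOnNs}} are illustrative stand-ins for the Router's RPC machinery, not its real method names:
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

interface NsCall<T> {
  List<T> invokeOnNs(String nameservice) throws IOException;
}

final class FanOut {
  private FanOut() { }

  // Query each subcluster and concatenate the results, as is already
  // done for getDatanodeReport().
  static <T> List<T> listAcrossNameservices(List<String> nameservices,
      NsCall<T> call) throws IOException {
    List<T> results = new ArrayList<>();
    for (String ns : nameservices) {
      results.addAll(call.invokeOnNs(ns));
    }
    return results;
  }
}
{code}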






[jira] [Assigned] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-02 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun reassigned HDFS-13790:
---

Assignee: Chao Sun

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.






[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-02 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567345#comment-16567345
 ] 

Chao Sun commented on HDFS-13790:
-

I'll take this :)

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Chao Sun
>Priority: Major
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.






[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF

2018-08-02 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567343#comment-16567343
 ] 

Xiao Chen commented on HDFS-13655:
--

Thanks very much [~goiri], good to know these all work! It'd be good if we 
could do a pass on the dfsadmin subcommands and confirm they're all covered. :)

> RBF: Add missing ClientProtocol APIs to RBF
> ---
>
> Key: HDFS-13655
> URL: https://issues.apache.org/jira/browse/HDFS-13655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> As 
> [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975]
>  with [~elgoiri], there are some HDFS methods that do not take a path as a 
> parameter. We should support these to work with federation.
> The ones missing are:
>  * Snapshots
>  * Storage policies
>  * Encryption zones
>  * Cache pools
> One way to reasonably have them work with federation is to 'list' each 
> nameservice and concat the results. This can be done pretty much the same as 
> {{refreshNodes()}}, and it would be a matter of querying all the subclusters 
> and aggregating the output (e.g., {{getDatanodeReport()}}.)






[jira] [Commented] (HDFS-13767) Add msync server implementation.

2018-08-02 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567299#comment-16567299
 ] 

Chen Liang commented on HDFS-13767:
---

Thanks [~shv] and [~xkrogen] for the review and the comments! Post v002 patch.

bq. It seems it would be better if there was a way for the AlignmentContext, a 
generic/pluggable component, to make the decision about how to handle the call
I agree that this is very HDFS-specific. An issue here is that while {{Server}} 
has an AlignmentContext instance, AlignmentContext has both server-side and 
client-side implementations, and it cannot be cast to GlobalStateIdContext 
because it is in a different package. So adding {{shouldDeferProcessing(Server.Call 
call)}}, or any method that checks whether to defer a call, means it would need 
to be added to the client alignment context as well, which makes no sense to me. 
The more I look at AlignmentContext, the more I think it would be better to 
separate the server and client AlignmentContext in some way. Then we could 
introduce this defer check on the server side only. For now, I would prefer to 
leave it like this.

bq. Why did you move setUpCluster() out of TestObserverNode#setUp() 
When it was in {{setUp()}}, it was still effectively called for each test. 
Then, for tests such as testMultiObserver, it got called again in the test 
method with a different parameter, so for those tests setUpCluster() was being 
called twice. That was a bug. I moved it into each test to fix this, since 
{{setUpCluster}} takes a parameter that can differ per test.

The other comments are addressed. I had thought about using fast-tailing but 
was too lazy to figure out how to enable it. Thanks a lot for sharing, Erik :) 
It does make the tests a lot faster.
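
A minimal sketch of the split being contemplated here, with hypothetical interface names rather than anything from a committed patch:
{code:java}
// Server-side view: owns the defer decision, invisible to clients.
interface ServerAlignmentContext {
  long getLastSeenStateId();

  // True if the call's required state id is ahead of this node,
  // i.e. processing should wait until edits catch up.
  boolean shouldDeferProcessing(long clientStateId);
}

// Client-side view: only tracks the state id seen in responses.
interface ClientAlignmentContext {
  void receiveResponseState(long serverStateId);
}
{code}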

> Add msync server implementation.
> 
>
> Key: HDFS-13767
> URL: https://issues.apache.org/jira/browse/HDFS-13767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13767-HDFS-12943.001.patch, 
> HDFS-13767-HDFS-12943.002.patch, HDFS-13767.WIP.001.patch, 
> HDFS-13767.WIP.002.patch, HDFS-13767.WIP.003.patch, HDFS-13767.WIP.004.patch
>
>
> This is a followup on HDFS-13688, where msync API is introduced to 
> {{ClientProtocol}} but the server side implementation is missing. This is 
> Jira is to implement the server side logic.






[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF

2018-08-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567288#comment-16567288
 ] 

Íñigo Goiri commented on HDFS-13655:


[~xiaochen], I already implemented those three methods.
However, the state is different for each; let me go over them:
* safemode: this one is working and I tested it. We use it in production and so 
far have had no issues.
* saveNamespace: this one is implemented and I tested it by hand; it seems to 
trigger the fsimage saving. We don't use it that much internally, so it could 
have some bugs.
* rollingUpgrade: same as saveNamespace; it is implemented, but as we don't use 
it that much, I'm not sure it covers all the cases.

> RBF: Add missing ClientProtocol APIs to RBF
> ---
>
> Key: HDFS-13655
> URL: https://issues.apache.org/jira/browse/HDFS-13655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> As 
> [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975]
>  with [~elgoiri], there are some HDFS methods that does not take path as a 
> parameter. We should support these to work with federation.
> The ones missing are:
>  * Snapshots
>  * Storage policies
>  * Encryption zones
>  * Cache pools
> One way to reasonably have them to work with federation is to 'list' each 
> nameservice and concat the results. This can be done pretty much the same as 
> {{refreshNodes()}} and it would be a matter of querying all the subclusters 
> and aggregate the output (e.g., {{getDatanodeReport()}}.)






[jira] [Updated] (HDFS-13767) Add msync server implementation.

2018-08-02 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13767:
--
Attachment: HDFS-13767-HDFS-12943.002.patch

> Add msync server implementation.
> 
>
> Key: HDFS-13767
> URL: https://issues.apache.org/jira/browse/HDFS-13767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13767-HDFS-12943.001.patch, 
> HDFS-13767-HDFS-12943.002.patch, HDFS-13767.WIP.001.patch, 
> HDFS-13767.WIP.002.patch, HDFS-13767.WIP.003.patch, HDFS-13767.WIP.004.patch
>
>
> This is a followup on HDFS-13688, where msync API is introduced to 
> {{ClientProtocol}} but the server side implementation is missing. This is 
> Jira is to implement the server side logic.






[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF

2018-08-02 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567135#comment-16567135
 ] 

Xiao Chen commented on HDFS-13655:
--

Thanks all for the work here!

My apologies if this is already done and I missed it - I have only done a very 
preliminary search - but there are also some dfsadmin commands that need 
integration, for example -safemode / -saveNamespace / -rollingUpgrade. 
These will also be a little different from the listing commands, in that they 
are probably targeted at one of the nameservices. 

> RBF: Add missing ClientProtocol APIs to RBF
> ---
>
> Key: HDFS-13655
> URL: https://issues.apache.org/jira/browse/HDFS-13655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> As 
> [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975]
>  with [~elgoiri], there are some HDFS methods that does not take path as a 
> parameter. We should support these to work with federation.
> The ones missing are:
>  * Snapshots
>  * Storage policies
>  * Encryption zones
>  * Cache pools
> One way to reasonably have them to work with federation is to 'list' each 
> nameservice and concat the results. This can be done pretty much the same as 
> {{refreshNodes()}} and it would be a matter of querying all the subclusters 
> and aggregate the output (e.g., {{getDatanodeReport()}}.)






[jira] [Commented] (HDFS-13785) EC: "removePolicy" is not working for built-in/system Erasure Code policies

2018-08-02 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567132#comment-16567132
 ] 

Wei-Chiu Chuang commented on HDFS-13785:


I don't quite recall the design here (although I believe I reviewed the patch), 
but IIRC we decided not to allow built-in system EC policies to be removed.

We should probably expand the EC doc and the command-line message to cover this.
[~Sammi] [~xiaochen] 
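
A minimal sketch of the guard being described; the method shape is an assumption, though the message matches the server response observed below:
{code:java}
import java.io.IOException;

final class EcPolicyGuard {
  private EcPolicyGuard() { }

  static void checkRemovable(String policyName, boolean isSystemPolicy)
      throws IOException {
    if (isSystemPolicy) {
      // Built-in policies are not removable by design.
      throw new IOException("System erasure coding policy "
          + policyName + " cannot be removed");
    }
  }
}
{code}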

> EC: "removePolicy" is not working for built-in/system Erasure Code policies
> ---
>
> Key: HDFS-13785
> URL: https://issues.apache.org/jira/browse/HDFS-13785
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SUSE Linux Cluster
>Reporter: Souryakanta Dwivedy
>Priority: Minor
>
> EC: "removePolicy" is not working for built-in/system Erasure Code policies
> - Check the existing built-in EC policies with command "hdfs ec -listPolicies"
> - try to remove any of the EC policies,it will throw error message as 
> "RemoteException: System erasure coding policy RS-3-2-1024k cannot be removed"
> - add user-defined EC policies 
> - Try to remove any user-defined policy,it will be removed successfully
> - But in help option it is specified as :
>  vm1:/opt/client/install/hadoop/namenode/bin> ./hdfs ec -help removePolicy
> [-removePolicy -policy ]
> Remove an erasure coding policy.
>  The name of the erasure coding policy
> vm1:/opt/client/install/hadoop/namenode/bin>
> Actual result :-
>  hdfs ec -removePolicy not working for built-in/system EC policies ,where as 
> usage description 
>  provided as "Remove an erasure coding policy".throwing exception as : 
> "RemoteException: System erasure coding policy RS-3-2-1024k cannot be removed"
> Expected output : Either EC "removePolicy" option should be applicable for 
> all type of EC policies 
>  Or it has to be specified in usage that EC "removePolicy" will be applicable 
> to remove
>  only user-defined EC policies, not applicable for system EC coding policies.






[jira] [Created] (HDDS-319) Add a test for node catchup through readStateMachineData api

2018-08-02 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-319:
--

 Summary: Add a test for node catchup through readStateMachineData 
api
 Key: HDDS-319
 URL: https://issues.apache.org/jira/browse/HDDS-319
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Mukul Kumar Singh


This jira proposes to add a new test for node catch-up on a slow/failed node 
using the readStateMachineData api.






[jira] [Commented] (HDDS-230) ContainerStateMachine should provide readStateMachineData api to read data if Containers with required during replication

2018-08-02 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567129#comment-16567129
 ] 

Mukul Kumar Singh commented on HDDS-230:


Thanks for the review [~shashikant]. I have addressed all the comments in the 
v3 patch.

1) For comment 2, please note that statemachine data is appended for two 
commands, create container and write chunk. Hence the check covers write 
commands.

2) I will raise a followup jira to add a test for this. Currently I have tested 
this manually on a 3-node cluster.


> ContainerStateMachine should provide readStateMachineData api to read data if 
> Containers with required during replication
> -
>
> Key: HDDS-230
> URL: https://issues.apache.org/jira/browse/HDDS-230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-230.001.patch, HDDS-230.002.patch, 
> HDDS-230.003.patch
>
>
> Ozone datanode exits during data write with the following exception.
> {code}
> 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: 
> Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index 
> to:4565
> 2018-07-05 14:10:01,607 ERROR 
> org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit 
> status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the 
> StateMachineUpdater hits Throwable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> This might be the result of a ratis transaction which was not written 
> through the "writeStateMachineData" phase but was nevertheless added to the 
> raft log. This implies that the stateMachineUpdater applies a transaction 
> without the corresponding entry having been added to the stateMachine.
> I am raising this jira to track the issue and will also raise a Ratis jira if 
> required.






[jira] [Updated] (HDDS-230) ContainerStateMachine should provide readStateMachineData api to read data if Containers with required during replication

2018-08-02 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-230:
---
Attachment: HDDS-230.003.patch

> ContainerStateMachine should provide readStateMachineData api to read data if 
> Containers with required during replication
> -
>
> Key: HDDS-230
> URL: https://issues.apache.org/jira/browse/HDDS-230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-230.001.patch, HDDS-230.002.patch, 
> HDDS-230.003.patch
>
>
> Ozone datanode exits during data write with the following exception.
> {code}
> 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: 
> Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index 
> to:4565
> 2018-07-05 14:10:01,607 ERROR 
> org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit 
> status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the 
> StateMachineUpdater hits Throwable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> This might be the result of a ratis transaction which was not written 
> through the "writeStateMachineData" phase but was nevertheless added to the 
> raft log. This implies that the stateMachineUpdater applies a transaction 
> without the corresponding entry having been added to the stateMachine.
> I am raising this jira to track the issue and will also raise a Ratis jira if 
> required.






[jira] [Commented] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567096#comment-16567096
 ] 

Íñigo Goiri commented on HDFS-13790:


This is mostly a refactoring of {{RouterRpcServer}}; if anybody is interested, 
please feel free to assign this JIRA to yourself.
If nobody volunteers, I can take it over eventually.

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.






[jira] [Commented] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF

2018-08-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567093#comment-16567093
 ] 

Íñigo Goiri commented on HDFS-13655:


Let's start the work on the new patches but let's hold on committing anything.
Likely we will branch this out.

> RBF: Add missing ClientProtocol APIs to RBF
> ---
>
> Key: HDFS-13655
> URL: https://issues.apache.org/jira/browse/HDFS-13655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> As 
> [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975]
>  with [~elgoiri], there are some HDFS methods that do not take a path as a 
> parameter. We should support these to work with federation.
> The ones missing are:
>  * Snapshots
>  * Storage policies
>  * Encryption zones
>  * Cache pools
> One way to reasonably have them work with federation is to 'list' each 
> nameservice and concat the results. This can be done pretty much the same as 
> {{refreshNodes()}}, and it would be a matter of querying all the subclusters 
> and aggregating the output (e.g., {{getDatanodeReport()}}.)






[jira] [Commented] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2018-08-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567091#comment-16567091
 ] 

Íñigo Goiri commented on HDFS-13787:


As I mentioned in HDFS-13790, RouterRpcServer is very long right now.
Let's do what we did for Erasure Coding in HDFS-12919.
Probably call the new module {{RouterSnapshot}} or something similar.

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot , SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.






[jira] [Commented] (HDFS-13776) RBF: Add Storage policies related ClientProtocol APIs

2018-08-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567089#comment-16567089
 ] 

Íñigo Goiri commented on HDFS-13776:


As I mentioned in HDFS-13790, RouterRpcServer is very long right now.
Let's do what we did for Erasure Coding in HDFS-12919.
Probably call the new module {{RouterStoragePolicy}}.

> RBF: Add Storage policies related ClientProtocol APIs
> -
>
> Key: HDFS-13776
> URL: https://issues.apache.org/jira/browse/HDFS-13776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> Currently unsetStoragePolicy and getStoragePolicy are not implemented in 
> RouterRpcServer.






[jira] [Updated] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13790:
---
Description: {{RouterRpcServer}} is getting pretty long. 
{{RouterNamenodeProtocol}} isolates the {{NamenodeProtocol}} in its own module. 
{{ClientProtocol}} should have its own {{RouterClientProtocol}}.

> RBF: Move ClientProtocol APIs to its own module
> ---
>
> Key: HDFS-13790
> URL: https://issues.apache.org/jira/browse/HDFS-13790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>
> {{RouterRpcServer}} is getting pretty long. {{RouterNamenodeProtocol}} 
> isolates the {{NamenodeProtocol}} in its own module. {{ClientProtocol}} 
> should have its own {{RouterClientProtocol}}.






[jira] [Created] (HDFS-13790) RBF: Move ClientProtocol APIs to its own module

2018-08-02 Thread JIRA
Íñigo Goiri created HDFS-13790:
--

 Summary: RBF: Move ClientProtocol APIs to its own module
 Key: HDFS-13790
 URL: https://issues.apache.org/jira/browse/HDFS-13790
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri









[jira] [Assigned] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-02 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-317:
---

Assignee: Junjie Chen

> Use new StorageSize API for reading ozone.scm.container.size.gb
> ---
>
> Key: HDDS-317
> URL: https://issues.apache.org/jira/browse/HDDS-317
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
>
> Container size is configured using the property {{ozone.scm.container.size.gb}}. 
> This can be renamed to {{ozone.scm.container.size}}, and the new StorageSize 
> API can be used to read the value.
> The property is defined in
>  1. ozone-default.xml
>  2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB
> The default value is defined in
>  1. ozone-default.xml
>  2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}
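
A minimal sketch of reading the renamed key with {{Configuration#getStorageSize}}; the "5GB" default below is illustrative, not the actual SCM default:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.StorageUnit;

final class ContainerSizeConf {
  private ContainerSizeConf() { }

  static long containerSizeBytes(Configuration conf) {
    // Accepts unit-suffixed values such as "5GB" or "5120MB".
    double size = conf.getStorageSize("ozone.scm.container.size",
        "5GB", StorageUnit.BYTES);
    return (long) size;
  }
}
{code}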






[jira] [Updated] (HDFS-13776) RBF: Add Storage policies related ClientProtocol APIs

2018-08-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13776:
---
Summary: RBF: Add Storage policies related ClientProtocol APIs  (was: Add 
Storage policies related ClientProtocol methods)

> RBF: Add Storage policies related ClientProtocol APIs
> -
>
> Key: HDFS-13776
> URL: https://issues.apache.org/jira/browse/HDFS-13776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Dibyendu Karmakar
>Priority: Major
>
> Currently unsetStoragePolicy and getStoragePolicy are not implemented in 
> RouterRpcServer.






[jira] [Updated] (HDFS-13655) RBF: Add missing ClientProtocol APIs to RBF

2018-08-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13655:
---
Summary: RBF: Add missing ClientProtocol APIs to RBF  (was: RBF: Adding 
missing ClientProtocol methods to RBF)

> RBF: Add missing ClientProtocol APIs to RBF
> ---
>
> Key: HDFS-13655
> URL: https://issues.apache.org/jira/browse/HDFS-13655
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Priority: Major
>
> As 
> [discussed|https://issues.apache.org/jira/browse/HDFS-12858?focusedCommentId=16500975=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16500975|#comment-16500975]
>  with [~elgoiri], there are some HDFS methods that do not take a path as a 
> parameter. We should support these to work with federation.
> The ones missing are:
>  * Snapshots
>  * Storage policies
>  * Encryption zones
>  * Cache pools
> One way to reasonably have them work with federation is to 'list' each 
> nameservice and concat the results. This can be done pretty much the same as 
> {{refreshNodes()}}, and it would be a matter of querying all the subclusters 
> and aggregating the output (e.g., {{getDatanodeReport()}}.)






[jira] [Updated] (HDFS-13787) RBF: Add Snapshot related ClientProtocol APIs

2018-08-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13787:
---
Summary: RBF: Add Snapshot related ClientProtocol APIs  (was: Add Snapshot 
related APIs)

> RBF: Add Snapshot related ClientProtocol APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot , SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.






[jira] [Updated] (HDDS-318) ratis INFO logs should not shown during ozoneFs command-line execution

2018-08-02 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-318:

Labels: newbie  (was: )

> ratis INFO logs should not shown during ozoneFs command-line execution
> --
>
> Key: HDDS-318
> URL: https://issues.apache.org/jira/browse/HDDS-318
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Nilotpal Nandi
>Priority: Minor
>  Labels: newbie
> Fix For: 0.2.1
>
>
> Ratis INFO messages should not be shown during ozoneFS CLI execution.
> Please find a snippet from one such execution:
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone fs -put /etc/passwd /p2
> 2018-08-02 12:17:18 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 12:17:20 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 12:17:20 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> ..
> ..
> ..
>  
> {noformat}
>  
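
One hedged way to quiet these from the CLI entry point; the logger name is inferred from the output above, and a log4j.properties override would work equally well:
{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

final class QuietRatis {
  private QuietRatis() { }

  static void quietRatisLogs() {
    // Raise the threshold for the whole Ratis logger hierarchy, which
    // covers ConfUtils and the other INFO sources shown above.
    Logger.getLogger("org.apache.ratis").setLevel(Level.WARN);
  }
}
{code}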






[jira] [Updated] (HDFS-13789) Reduce logging frequency of QuorumJournalManager#selectInputStreams

2018-08-02 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13789:
---
Status: Patch Available  (was: In Progress)

> Reduce logging frequency of QuorumJournalManager#selectInputStreams
> ---
>
> Key: HDFS-13789
> URL: https://issues.apache.org/jira/browse/HDFS-13789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, qjm
>Affects Versions: HDFS-12943
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
> Attachments: HDFS-13789-HDFS-12943.000.patch
>
>
> As part of HDFS-13150, a logging statement was added to indicate whenever an 
> edit tail is performed via the RPC mechanism. To enable low latency tailing, 
> the tail frequency must be set very low, so this log statement gets printed 
> much too frequently at an INFO level. We should decrease to DEBUG. Note that 
> if there are actually edits available to tail, other log messages will get 
> printed; this is just targeting the case when it attempts to tail and there 
> are no new edits.
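
A minimal sketch of the demotion being proposed; the message text is illustrative, not the exact statement added in HDFS-13150:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class TailLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(TailLogging.class);

  void onRpcTail(int loggerCount, long txId) {
    // DEBUG instead of INFO so frequent no-op tail attempts stay quiet
    // at the default log level.
    LOG.debug("Tailed edits via RPC from {} loggers starting at txid {}",
        loggerCount, txId);
  }
}
{code}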






[jira] [Updated] (HDFS-13789) Reduce logging frequency of QuorumJournalManager#selectInputStreams

2018-08-02 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13789:
---
Attachment: HDFS-13789-HDFS-12943.000.patch

> Reduce logging frequency of QuorumJournalManager#selectInputStreams
> ---
>
> Key: HDFS-13789
> URL: https://issues.apache.org/jira/browse/HDFS-13789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, qjm
>Affects Versions: HDFS-12943
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
> Attachments: HDFS-13789-HDFS-12943.000.patch
>
>
> As part of HDFS-13150, a logging statement was added to indicate whenever an 
> edit tail is performed via the RPC mechanism. To enable low latency tailing, 
> the tail frequency must be set very low, so this log statement gets printed 
> much too frequently at an INFO level. We should decrease to DEBUG. Note that 
> if there are actually edits available to tail, other log messages will get 
> printed; this is just targeting the case when it attempts to tail and there 
> are no new edits.






[jira] [Work started] (HDFS-13789) Reduce logging frequency of QuorumJournalManager#selectInputStreams

2018-08-02 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13789 started by Erik Krogen.
--
> Reduce logging frequency of QuorumJournalManager#selectInputStreams
> ---
>
> Key: HDFS-13789
> URL: https://issues.apache.org/jira/browse/HDFS-13789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, qjm
>Affects Versions: HDFS-12943
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
>
> As part of HDFS-13150, a logging statement was added to indicate whenever an 
> edit tail is performed via the RPC mechanism. To enable low latency tailing, 
> the tail frequency must be set very low, so this log statement gets printed 
> much too frequently at an INFO level. We should decrease to DEBUG. Note that 
> if there are actually edits available to tail, other log messages will get 
> printed; this is just targeting the case when it attempts to tail and there 
> are no new edits.






[jira] [Updated] (HDFS-13789) Reduce logging frequency of QuorumJournalManager#selectInputStreams

2018-08-02 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13789:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-12943

> Reduce logging frequency of QuorumJournalManager#selectInputStreams
> ---
>
> Key: HDFS-13789
> URL: https://issues.apache.org/jira/browse/HDFS-13789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, qjm
>Affects Versions: HDFS-12943
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Trivial
>
> As part of HDFS-13150, a logging statement was added to indicate whenever an 
> edit tail is performed via the RPC mechanism. To enable low latency tailing, 
> the tail frequency must be set very low, so this log statement gets printed 
> much too frequently at an INFO level. We should decrease to DEBUG. Note that 
> if there are actually edits available to tail, other log messages will get 
> printed; this is just targeting the case when it attempts to tail and there 
> are no new edits.






[jira] [Created] (HDFS-13789) Reduce logging frequency of QuorumJournalManager#selectInputStreams

2018-08-02 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-13789:
--

 Summary: Reduce logging frequency of 
QuorumJournalManager#selectInputStreams
 Key: HDFS-13789
 URL: https://issues.apache.org/jira/browse/HDFS-13789
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, qjm
Affects Versions: HDFS-12943
Reporter: Erik Krogen
Assignee: Erik Krogen


As part of HDFS-13150, a logging statement was added to indicate whenever an 
edit tail is performed via the RPC mechanism. To enable low latency tailing, 
the tail frequency must be set very low, so this log statement gets printed 
much too frequently at an INFO level. We should decrease to DEBUG. Note that if 
there are actually edits available to tail, other log messages will get 
printed; this is just targeting the case when it attempts to tail and there are 
no new edits.






[jira] [Commented] (HDFS-13767) Add msync server implementation.

2018-08-02 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567060#comment-16567060
 ] 

Erik Krogen commented on HDFS-13767:


Cool, thanks for incorporating the feedback Chen! I agree with Konstantin's 
comments and have a few more to add:

* The comment at the top of {{GlobalStateIdContext#receiveRequestState()}} is a 
little confusing. I agree that we don't need to care about the client state in 
this case, but shouldn't we still return {{header.getStateId()}} to meet the 
interface contract? Is there any disadvantage to doing so?
* I'm not a fan of the changes within {{Server#run()}}; they are extremely 
HDFS-specific in a Hadoop-general class. It seems it would be better if there 
was a way for the {{AlignmentContext}}, a generic/pluggable component, to make 
the decision about how to handle the call. Something like a new method 
{{boolean shouldDeferProcessing(Server.Call call)}}
* GlobalStateIdContext L81/82: you shouldn't use a string comparison to compare 
an enum; you can use {{FSNamesystem#getState()}} to retrieve the enum directly.
* In {{TestObserverNode}}, you shouldn't use a star-import for {{Assert}}. 
Also, this is nitpicky, but the {{AtomicBoolean}} import should probably be 
grouped with the other {{java.}} imports.
* Why did you move {{setUpCluster()}} out of {{TestObserverNode#setUp()}} and 
into the individual test methods? It seems like code duplication without any 
advantage. Let me know if I'm missing something.
* You can make the tests much faster by enabling in-progress edit log tailing 
which will use the new fast-path from HDFS-13150 :) Just use configs like:
{code}
conf.setBoolean(DFSConfigKeys.DFS_HA_TAILEDITS_INPROGRESS_KEY, true);
conf.setTimeDuration(DFS_HA_TAILEDITS_PERIOD_KEY, 100, TimeUnit.MILLISECONDS);
{code}
Note that you should use {{setTimeDuration}} for time-related configs these 
days; otherwise you get messages like:
{code}INFO  Configuration.deprecation (Configuration.java:logDeprecation(1395)) 
- No unit for dfs.ha.log-roll.period(60) assuming SECONDS{code}

> Add msync server implementation.
> 
>
> Key: HDFS-13767
> URL: https://issues.apache.org/jira/browse/HDFS-13767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13767-HDFS-12943.001.patch, 
> HDFS-13767.WIP.001.patch, HDFS-13767.WIP.002.patch, HDFS-13767.WIP.003.patch, 
> HDFS-13767.WIP.004.patch
>
>
> This is a followup on HDFS-13688, where msync API is introduced to 
> {{ClientProtocol}} but the server side implementation is missing. This is 
> Jira is to implement the server side logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13788) Update EC documentation about rack fault tolerance

2018-08-02 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13788:


 Summary: Update EC documentation about rack fault tolerance
 Key: HDFS-13788
 URL: https://issues.apache.org/jira/browse/HDFS-13788
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation, erasure-coding
Affects Versions: 3.0.0
Reporter: Xiao Chen
Assignee: Kitti Nanasi


From 
http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html:
{quote}
For rack fault-tolerance, it is also important to have at least as many racks 
as the configured EC stripe width. For EC policy RS (6,3), this means minimally 
9 racks, and ideally 10 or 11 to handle planned and unplanned outages. For 
clusters with fewer racks than the stripe width, HDFS cannot maintain rack 
fault-tolerance, but will still attempt to spread a striped file across 
multiple nodes to preserve node-level fault-tolerance.
{quote}
The theoretical minimum is 3 racks, ideally 9 or more, so the document should 
be updated.

(I didn't check timestamps, but this is probably because 
{{BlockPlacementPolicyRackFaultTolerant}} wasn't completely done when HDFS-9088 
introduced this doc. Later, examples were also added in 
{{TestErasureCodingMultipleRacks}} to test this explicitly.)
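
A small worked check of the "3 racks" claim, under the assumption that a stripe stays recoverable as long as no single rack holds more blocks than the parity count:
{code:java}
final class EcRackMath {
  private EcRackMath() { }

  // Minimum racks so that losing one rack loses at most `parity` blocks
  // of a stripe: ceil((data + parity) / parity).
  static int minRacks(int data, int parity) {
    int width = data + parity;
    return (width + parity - 1) / parity;
  }

  public static void main(String[] args) {
    // RS(6,3): ceil(9 / 3) = 3 racks, matching the theoretical minimum.
    System.out.println(minRacks(6, 3));
  }
}
{code}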






[jira] [Commented] (HDDS-230) ContainerStateMachine should provide readStateMachineData api to read data if Containers with required during replication

2018-08-02 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567050#comment-16567050
 ] 

Shashikant Banerjee commented on HDDS-230:
--

Thanks [~msingh] for reporting and working on this. The patch looks good to me 
overall. Some minor comments inline:

1. The comments for the "readStateMachineData" API need to be updated.

2. ContainerStateMachine.java:274. The precondition here
{code:java}
Preconditions.checkArgument(HddsUtils.isReadOnly(requestProto))
{code}
applies to all read-only commands. Should the check be specific to 
"ReadChunk" requests only?

3. ContainerStateMachine.java:305:
{code:java}
// Preconditions.checkArgument(responseProto.getData() != null);{code}
can be replaced with 
{code:java}
// Preconditions.checkNotNull(responseProto.getData())
{code}
4. dispatchReadStateMachineCommand and runCommand seem to be functionally very 
similar except for the precondition check. Can we have a single function and 
move the precondition check to the caller of dispatchReadStateMachineCommand? (A 
rough sketch follows after comment 5.)

5. Can we add a test for it to verify the behaviour?
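
For comment 4, a rough sketch of the shape I have in mind; the 
{{dispatchCommand}} helper and the exact signatures are assumptions for 
illustration, not the actual patch:
{code:java}
// Single shared execution path; callers enforce their own preconditions.
private Message runCommand(ContainerCommandRequestProto requestProto) {
  return dispatchCommand(requestProto); // assumed common dispatch helper
}

private CompletableFuture<Message> dispatchReadStateMachineCommand(
    ContainerCommandRequestProto requestProto) {
  // Precondition now lives at the read-only call site.
  Preconditions.checkArgument(HddsUtils.isReadOnly(requestProto));
  return CompletableFuture.supplyAsync(() -> runCommand(requestProto));
}
{code}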

> ContainerStateMachine should provide readStateMachineData api to read data from 
> Containers if required during replication
> -
>
> Key: HDDS-230
> URL: https://issues.apache.org/jira/browse/HDDS-230
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-230.001.patch, HDDS-230.002.patch
>
>
> Ozone datanode exits during data write with the following exception.
> {code}
> 2018-07-05 14:10:01,605 INFO org.apache.ratis.server.storage.RaftLogWorker: 
> Rolling segment:40356aa1-741f-499c-aad1-b500f2620a3d_9858-RaftLogWorker index 
> to:4565
> 2018-07-05 14:10:01,607 ERROR 
> org.apache.ratis.server.impl.StateMachineUpdater: Terminating with exit 
> status 2: StateMachineUpdater-40356aa1-741f-499c-aad1-b500f2620a3d_9858: the 
> StateMachineUpdater hits Throwable
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.applyTransaction(ContainerStateMachine.java:272)
> at 
> org.apache.ratis.server.impl.RaftServerImpl.applyLogToStateMachine(RaftServerImpl.java:1058)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:154)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> This might be the result of a Ratis transaction that was not written through 
> the "writeStateMachineData" phase but was nevertheless added to the Raft log. 
> This implies that the StateMachineUpdater now applies a transaction without the 
> corresponding entry having been added to the state machine.
> I am raising this jira to track the issue and will also raise a Ratis jira if 
> required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers

2018-08-02 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16567035#comment-16567035
 ] 

genericqa commented on HDDS-298:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-298 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934094/HDDS-298.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6b08a2a84d69 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5033d7d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/684/artifact/out/branch-findbugs-hadoop-hdds_server-scm-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/684/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/684/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement 

[jira] [Commented] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers

2018-08-02 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566941#comment-16566941
 ] 

Ajay Kumar commented on HDDS-298:
-

[~msingh] patch v3 to address both comments.
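
For reference, the rough shape of the closed-container path being discussed; the 
type and method names below are assumptions based on the issue description, not 
the actual patch:
{code:java}
// Sketch: build an ad-hoc ("fake") Pipeline from ContainerStateMap.contReplicaMap
// so getContainerWithPipeline can serve reads for a CLOSED container.
private Pipeline createPipelineForClosedContainer(ContainerID containerID)
    throws IOException {
  Set<DatanodeDetails> replicas =
      containerStateManager.getContainerReplicas(containerID); // assumed accessor
  if (replicas == null || replicas.isEmpty()) {
    throw new IOException("No replicas found for container " + containerID);
  }
  // A closed container has no live Ratis pipeline, so any replica can
  // serve reads in STAND_ALONE mode.
  DatanodeDetails leader = replicas.iterator().next();
  Pipeline pipeline = new Pipeline(leader.getUuidString(),
      HddsProtos.LifeCycleState.CLOSED,
      HddsProtos.ReplicationType.STAND_ALONE,
      HddsProtos.ReplicationFactor.ONE,
      "pipeline-" + containerID.getId());
  replicas.forEach(pipeline::addMember);
  return pipeline;
}
{code}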

> Implement SCMClientProtocolServer.getContainerWithPipeline for closed 
> containers
> 
>
> Key: HDDS-298
> URL: https://issues.apache.org/jira/browse/HDDS-298
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Ajay Kumar
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-298.00.patch, HDDS-298.01.patch, HDDS-298.02.patch, 
> HDDS-298.03.patch
>
>
> As [~ljain] mentioned during the review of HDDS-245, 
> SCMClientProtocolServer.getContainerWithPipeline doesn't return good data for 
> closed containers. For closed containers we maintain the datanodes for a 
> containerId in ContainerStateMap.contReplicaMap. We need to create a fake 
> Pipeline object on request and return it so that the client can locate the 
> right datanodes to download data from.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-298) Implement SCMClientProtocolServer.getContainerWithPipeline for closed containers

2018-08-02 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-298:

Attachment: HDDS-298.03.patch

> Implement SCMClientProtocolServer.getContainerWithPipeline for closed 
> containers
> 
>
> Key: HDDS-298
> URL: https://issues.apache.org/jira/browse/HDDS-298
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Ajay Kumar
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-298.00.patch, HDDS-298.01.patch, HDDS-298.02.patch, 
> HDDS-298.03.patch
>
>
> As [~ljain] mentioned during the review of HDDS-245, 
> SCMClientProtocolServer.getContainerWithPipeline doesn't return good data for 
> closed containers. For closed containers we maintain the datanodes for a 
> containerId in ContainerStateMap.contReplicaMap. We need to create a fake 
> Pipeline object on request and return it so that the client can locate the 
> right datanodes to download data from.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-247) Handle CLOSED_CONTAINER_IO exception in ozoneClient

2018-08-02 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566901#comment-16566901
 ] 

Shashikant Banerjee commented on HDDS-247:
--

Uploaded patch v1. There are still some pending issues that need to be addressed:

The Ozone output stream flushes data to the datanodes once it reaches the 
chunkSize limit. Consider the case where the client's data has been partially 
flushed to the Datanode while the remaining data resides in the stream buffer, 
and meanwhile the container gets closed. Further writes/flush/close then need to 
allocate new blocks, copy the remaining data from the stream buffer, and write 
it out again.
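
A sketch of the client-side handling described above; {{allocateNewBlock}} and 
{{writeChunkToContainer}} are hypothetical helpers used only to illustrate the 
flow, not the actual patch:
{code:java}
// On CLOSED_CONTAINER_IO: allocate a new block and replay the buffered bytes.
private void handleCloseContainerException(ByteBuffer streamBuffer)
    throws IOException {
  // Bytes already flushed are committed against the old block; only the
  // bytes still sitting in the stream buffer need to be replayed.
  BlockID newBlock = allocateNewBlock();          // hypothetical: ask KSM/OM
  ByteBuffer replay = streamBuffer.duplicate();
  replay.flip();                                  // switch to read mode
  while (replay.hasRemaining()) {
    writeChunkToContainer(newBlock, replay);      // hypothetical: re-write data
  }
}
{code}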

> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---
>
> Key: HDDS-247
> URL: https://issues.apache.org/jira/browse/HDDS-247
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-247.00.patch, HDDS-247.01.patch
>
>
> In case of ongoing writes by an Ozone client to a container, the container might 
> get closed on the Datanodes because of node loss, out-of-space issues, etc. The 
> operation will then fail with a CLOSED_CONTAINER_IO exception. In such cases, 
> the ozone client should try to get the committed length of the block from the 
> Datanodes and update the KSM. This Jira aims to address this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-247) Handle CLOSED_CONTAINER_IO exception in ozoneClient

2018-08-02 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-247 started by Shashikant Banerjee.

> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---
>
> Key: HDDS-247
> URL: https://issues.apache.org/jira/browse/HDDS-247
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-247.00.patch, HDDS-247.01.patch
>
>
> In case of ongoing writes by an Ozone client to a container, the container might 
> get closed on the Datanodes because of node loss, out-of-space issues, etc. The 
> operation will then fail with a CLOSED_CONTAINER_IO exception. In such cases, 
> the ozone client should try to get the committed length of the block from the 
> Datanodes and update the KSM. This Jira aims to address this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-247) Handle CLOSED_CONTAINER_IO exception in ozoneClient

2018-08-02 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-247:
-
Attachment: HDDS-247.01.patch

> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---
>
> Key: HDDS-247
> URL: https://issues.apache.org/jira/browse/HDDS-247
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-247.00.patch, HDDS-247.01.patch
>
>
> In case of ongoing writes by an Ozone client to a container, the container might 
> get closed on the Datanodes because of node loss, out-of-space issues, etc. The 
> operation will then fail with a CLOSED_CONTAINER_IO exception. In such cases, 
> the ozone client should try to get the committed length of the block from the 
> Datanodes and update the KSM. This Jira aims to address this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-290) putKey is failing with KEY_ALLOCATION_ERROR

2018-08-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566844#comment-16566844
 ] 

Hudson commented on HDDS-290:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14696 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14696/])
HDDS-290. putKey is failing with KEY_ALLOCATION_ERROR. Contributed by (nanda: 
rev e83719c830dd4927c8eef26062c56c0d62b2f04f)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
* (edit) hadoop-dist/src/main/compose/ozone/docker-config
* (add) 
hadoop-ozone/acceptance-test/src/test/acceptance/ozonefs/ozonesinglenode.robot
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/PutKeyHandler.java


> putKey is failing with KEY_ALLOCATION_ERROR
> ---
>
> Key: HDDS-290
> URL: https://issues.apache.org/jira/browse/HDDS-290
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-290.00.patch
>
>
> 1. List the buckets in Volume /namit
> {code}
> hadoop@288c0999be17:~$ ozone oz -listBucket /namit
> 2018-07-24 18:53:26 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>   "volumeName" : "namit",
>   "bucketName" : "abc",
>   "createdOn" : "Fri, 29 Jul +50529 22:02:39 GMT",
>   "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
>   }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
>   } ],
>   "versioning" : "DISABLED",
>   "storageType" : "DISK"
> }, {
>   "volumeName" : "namit",
>   "bucketName" : "hjk",
>   "createdOn" : "Sat, 30 Jul +50529 10:37:24 GMT",
>   "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
>   }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
>   } ],
>   "versioning" : "DISABLED",
>   "storageType" : "DISK"
> } ]
> {code}
> 2. Now list the keys in bucket /namit/abc
> {code}
> hadoop@288c0999be17:~$ ozone oz -listKey /namit/abc
> 2018-07-24 18:53:56 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ ]
> {code}
> 3. Now try to put a key to the bucket. It fails as below:
> {code}
> hadoop@288c0999be17:~$ cat aa
> hgfhjljkjhf
> hadoop@288c0999be17:~$ ozone oz -putKey /namit/abc/aa -file aa
> 2018-07-24 18:54:19 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Create key failed, error:KEY_ALLOCATION_ERROR
> hadoop@288c0999be17:~$
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-290) putKey is failing with KEY_ALLOCATION_ERROR

2018-08-02 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-290:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> putKey is failing with KEY_ALLOCATION_ERROR
> ---
>
> Key: HDDS-290
> URL: https://issues.apache.org/jira/browse/HDDS-290
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-290.00.patch
>
>
> 1. List the buckets in Volume /namit
> {code}
> hadoop@288c0999be17:~$ ozone oz -listBucket /namit
> 2018-07-24 18:53:26 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>   "volumeName" : "namit",
>   "bucketName" : "abc",
>   "createdOn" : "Fri, 29 Jul +50529 22:02:39 GMT",
>   "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
>   }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
>   } ],
>   "versioning" : "DISABLED",
>   "storageType" : "DISK"
> }, {
>   "volumeName" : "namit",
>   "bucketName" : "hjk",
>   "createdOn" : "Sat, 30 Jul +50529 10:37:24 GMT",
>   "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
>   }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
>   } ],
>   "versioning" : "DISABLED",
>   "storageType" : "DISK"
> } ]
> {code}
> 2. Now list the keys in bucket /namit/abc
> {code}
> hadoop@288c0999be17:~$ ozone oz -listKey /namit/abc
> 2018-07-24 18:53:56 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ ]
> {code}
> 3. Now try to put a key to the bucket. It fails as below:
> {code}
> hadoop@288c0999be17:~$ cat aa
> hgfhjljkjhf
> hadoop@288c0999be17:~$ ozone oz -putKey /namit/abc/aa -file aa
> 2018-07-24 18:54:19 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Create key failed, error:KEY_ALLOCATION_ERROR
> hadoop@288c0999be17:~$
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-290) putKey is failing with KEY_ALLOCATION_ERROR

2018-08-02 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566795#comment-16566795
 ] 

Nanda kumar commented on HDDS-290:
--

Thanks [~xyao] for the contribution and [~nmaheshwari] for reporting this 
issue. I have committed it to trunk.

> putKey is failing with KEY_ALLOCATION_ERROR
> ---
>
> Key: HDDS-290
> URL: https://issues.apache.org/jira/browse/HDDS-290
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-290.00.patch
>
>
> 1. List the buckets in Volume /namit
> {code}
> hadoop@288c0999be17:~$ ozone oz -listBucket /namit
> 2018-07-24 18:53:26 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>   "volumeName" : "namit",
>   "bucketName" : "abc",
>   "createdOn" : "Fri, 29 Jul +50529 22:02:39 GMT",
>   "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
>   }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
>   } ],
>   "versioning" : "DISABLED",
>   "storageType" : "DISK"
> }, {
>   "volumeName" : "namit",
>   "bucketName" : "hjk",
>   "createdOn" : "Sat, 30 Jul +50529 10:37:24 GMT",
>   "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
>   }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
>   } ],
>   "versioning" : "DISABLED",
>   "storageType" : "DISK"
> } ]
> {code}
> 2. Now list the keys in bucket /namit/abc
> {code}
> hadoop@288c0999be17:~$ ozone oz -listKey /namit/abc
> 2018-07-24 18:53:56 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ ]
> {code}
> 3. Now try to put a key to the bucket. It fails as below:
> {code}
> hadoop@288c0999be17:~$ cat aa
> hgfhjljkjhf
> hadoop@288c0999be17:~$ ozone oz -putKey /namit/abc/aa -file aa
> 2018-07-24 18:54:19 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Create key failed, error:KEY_ALLOCATION_ERROR
> hadoop@288c0999be17:~$
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-290) putKey is failing with KEY_ALLOCATION_ERROR

2018-08-02 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566792#comment-16566792
 ] 

Nanda kumar commented on HDDS-290:
--

+1, LGTM. I will commit this shortly.

> putKey is failing with KEY_ALLOCATION_ERROR
> ---
>
> Key: HDDS-290
> URL: https://issues.apache.org/jira/browse/HDDS-290
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-290.00.patch
>
>
> 1. List the buckets in Volume /namit
> {code}
> hadoop@288c0999be17:~$ ozone oz -listBucket /namit
> 2018-07-24 18:53:26 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ {
>   "volumeName" : "namit",
>   "bucketName" : "abc",
>   "createdOn" : "Fri, 29 Jul +50529 22:02:39 GMT",
>   "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
>   }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
>   } ],
>   "versioning" : "DISABLED",
>   "storageType" : "DISK"
> }, {
>   "volumeName" : "namit",
>   "bucketName" : "hjk",
>   "createdOn" : "Sat, 30 Jul +50529 10:37:24 GMT",
>   "acls" : [ {
> "type" : "USER",
> "name" : "hadoop",
> "rights" : "READ_WRITE"
>   }, {
> "type" : "GROUP",
> "name" : "users",
> "rights" : "READ_WRITE"
>   } ],
>   "versioning" : "DISABLED",
>   "storageType" : "DISK"
> } ]
> {code}
> 2. Now list the keys in bucket /namit/abc
> {code}
> hadoop@288c0999be17:~$ ozone oz -listKey /namit/abc
> 2018-07-24 18:53:56 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> [ ]
> {code}
> 3. Now try to put a key to the bucket. It fails as below:
> {code}
> hadoop@288c0999be17:~$ cat aa
> hgfhjljkjhf
> hadoop@288c0999be17:~$ ozone oz -putKey /namit/abc/aa -file aa
> 2018-07-24 18:54:19 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Command Failed : Create key failed, error:KEY_ALLOCATION_ERROR
> hadoop@288c0999be17:~$
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13787) Add Snapshot related APIs

2018-08-02 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566741#comment-16566741
 ] 

Ranith Sardar commented on HDFS-13787:
--

I will upload the patch shortly.
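
For context, the existing RouterRpcServer methods delegate to the namenodes via 
{{RemoteMethod}}; a minimal sketch of what one of the snapshot calls could look 
like under that pattern (a sketch under assumed details, not the actual patch):
{code:java}
// Delegate allowSnapshot to the namenode(s) owning the path, following the
// RemoteMethod/invokeSequential pattern used elsewhere in RouterRpcServer.
@Override
public void allowSnapshot(String snapshotRoot) throws IOException {
  checkOperation(OperationCategory.WRITE);
  final List<RemoteLocation> locations = getLocationsForPath(snapshotRoot, true);
  RemoteMethod method = new RemoteMethod("allowSnapshot",
      new Class<?>[] {String.class}, new RemoteParam());
  rpcClient.invokeSequential(locations, method);
}
{code}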

> Add Snapshot related APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13787) Add Snapshot related APIs

2018-08-02 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar reassigned HDFS-13787:


Assignee: Ranith Sardar

> Add Snapshot related APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13787) Add Snapshot related APIs

2018-08-02 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-13787:
-
Description: Currently, allowSnapshot, disallowSnapshot, renameSnapshot, 
createSnapshot, deleteSnapshot, SnapshottableDirectoryStatus, 
getSnapshotDiffReport and getSnapshotDiffReportListing are not implemented in 
RouterRpcServer.

> Add Snapshot related APIs
> -
>
> Key: HDFS-13787
> URL: https://issues.apache.org/jira/browse/HDFS-13787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ranith Sardar
>Priority: Major
>
> Currently, allowSnapshot, disallowSnapshot, renameSnapshot, createSnapshot, 
> deleteSnapshot, SnapshottableDirectoryStatus, getSnapshotDiffReport and 
> getSnapshotDiffReportListing are not implemented in RouterRpcServer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13787) Add Snapshot related APIs

2018-08-02 Thread Ranith Sardar (JIRA)
Ranith Sardar created HDFS-13787:


 Summary: Add Snapshot related APIs
 Key: HDFS-13787
 URL: https://issues.apache.org/jira/browse/HDFS-13787
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ranith Sardar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-304) Process ContainerAction from datanode heartbeat in SCM

2018-08-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566704#comment-16566704
 ] 

Hudson commented on HDDS-304:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14695 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14695/])
HDDS-304. Process ContainerAction from datanode heartbeat in SCM. (msingh: rev 
7c368575a319f5ba98019418166524bac982086f)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerActionsHandler.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerActionsHandler.java
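
For readers following along, a rough sketch of the shape of such a handler wired 
into the SCM event queue (the class name is taken from the file list above; the 
body is an assumption, not the committed code):
{code:java}
// Translate ContainerActions received in datanode heartbeats into SCM events.
public class ContainerActionsHandler
    implements EventHandler<ContainerActionsFromDatanode> {

  @Override
  public void onMessage(ContainerActionsFromDatanode actionsFromDatanode,
      EventPublisher publisher) {
    for (ContainerAction action :
        actionsFromDatanode.getReport().getContainerActionsList()) {
      if (action.getAction() == ContainerAction.Action.CLOSE) {
        // Ask SCM to close the container the datanode complained about.
        publisher.fireEvent(SCMEvents.CLOSE_CONTAINER,
            ContainerID.valueof(action.getContainerID()));
      }
    }
  }
}
{code}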


> Process ContainerAction from datanode heartbeat in SCM
> --
>
> Key: HDDS-304
> URL: https://issues.apache.org/jira/browse/HDDS-304
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-304.000.patch
>
>
> Datanodes send ContainerActions as part of the heartbeat; we must add logic in 
> SCM to process those ContainerActions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-318) ratis INFO logs should not be shown during ozoneFs command-line execution

2018-08-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-318:
---

 Summary: ratis INFO logs should not be shown during ozoneFs 
command-line execution
 Key: HDDS-318
 URL: https://issues.apache.org/jira/browse/HDDS-318
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


Ratis INFO logs should not be shown during ozoneFS CLI execution.

Please find a snippet from one of the executions:

 
{noformat}
hadoop@08315aa4b367:~/bin$ ./ozone fs -put /etc/passwd /p2
2018-08-02 12:17:18 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
(custom)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 ms 
(default)
2018-08-02 12:17:19 INFO ConfUtils:41 - 
raft.client.async.outstanding-requests.max = 100 (default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 3 
(default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
(=1048576) (default)
2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
(custom)
2018-08-02 12:17:20 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 
ms (default)
Aug 02, 2018 12:17:20 PM 
org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
..
..
..
 
{noformat}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-304) Process ContainerAction from datanode heartbeat in SCM

2018-08-02 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-304:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Process ContainerAction from datanode heartbeat in SCM
> --
>
> Key: HDDS-304
> URL: https://issues.apache.org/jira/browse/HDDS-304
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-304.000.patch
>
>
> Datanodes send ContainerActions as part of the heartbeat; we must add logic in 
> SCM to process those ContainerActions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-317) Use new StorageSize API for reading ozone.scm.container.size.gb

2018-08-02 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-317:


 Summary: Use new StorageSize API for reading 
ozone.scm.container.size.gb
 Key: HDDS-317
 URL: https://issues.apache.org/jira/browse/HDDS-317
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Nanda kumar


Container size is configured using the property {{ozone.scm.container.size.gb}}. 
This can be renamed to {{ozone.scm.container.size}} and read using the new 
StorageSize API.

The property is defined in
 1. ozone-default.xml
 2. ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_GB

The default value is defined in
 1. ozone-default.xml
 2. {{ScmConfigKeys#OZONE_SCM_CONTAINER_SIZE_DEFAULT}}
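
A minimal sketch of the read side with the new API; the key and default below 
mirror the proposed rename and are assumptions, not committed code:
{code:java}
// Configuration#getStorageSize parses values like "5GB" into an explicit unit.
final long containerSizeBytes = (long) conf.getStorageSize(
    "ozone.scm.container.size",   // proposed key name
    "5GB",                        // assumed default value
    StorageUnit.BYTES);
{code}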



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-304) Process ContainerAction from datanode heartbeat in SCM

2018-08-02 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566676#comment-16566676
 ] 

Mukul Kumar Singh commented on HDDS-304:


Thanks for the contribution [~nandakumar131]. I have committed this to trunk.

> Process ContainerAction from datanode heartbeat in SCM
> --
>
> Key: HDDS-304
> URL: https://issues.apache.org/jira/browse/HDDS-304
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-304.000.patch
>
>
> Datanodes send ContainerActions as part of the heartbeat; we must add logic in 
> SCM to process those ContainerActions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-316) End to end testcase to test container lifecycle

2018-08-02 Thread Nanda kumar (JIRA)
Nanda kumar created HDDS-316:


 Summary: End to end testcase to test container lifecycle
 Key: HDDS-316
 URL: https://issues.apache.org/jira/browse/HDDS-316
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode, SCM
Reporter: Nanda kumar


This jira aims to add end-to-end test-cases to test the transitions of the 
container lifecycle in HDDS.

Container lifecycle:
{noformat}
[ALLOCATED]--->[CREATING]--->[OPEN]--->[CLOSING]--->[CLOSED]
    (CREATE)       |(CREATED)  (FINALIZE)   (CLOSE)     |
                   |                                    |
                   |(TIMEOUT)                  (DELETE) |
                   |                                    |
                   +---------->[DELETING]<--------------+
                                   |
                                   | (CLEANUP)
                                   v
                               [DELETED]
{noformat}
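
A rough sketch of the shape such an end-to-end case could take; API names like 
{{allocateContainer}} and {{updateContainerState}} are assumptions based on the 
current SCM container manager, not a spec for the test:
{code:java}
// Drive a container through CREATE -> CREATED -> FINALIZE -> CLOSE and
// assert the lifecycle state after each event.
ContainerWithPipeline container = containerManager.allocateContainer(
    ReplicationType.STAND_ALONE, ReplicationFactor.ONE, "test");   // ALLOCATED
ContainerID id = container.getContainerInfo().containerID();
Assert.assertEquals(LifeCycleState.CREATING,
    containerManager.updateContainerState(id, LifeCycleEvent.CREATE));
Assert.assertEquals(LifeCycleState.OPEN,
    containerManager.updateContainerState(id, LifeCycleEvent.CREATED));
Assert.assertEquals(LifeCycleState.CLOSING,
    containerManager.updateContainerState(id, LifeCycleEvent.FINALIZE));
Assert.assertEquals(LifeCycleState.CLOSED,
    containerManager.updateContainerState(id, LifeCycleEvent.CLOSE));
{code}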



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-316) End to end testcase to test container lifecycle

2018-08-02 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-316:
-
Description: 
This jira aims to add end-to-end test-cases to test the transitions of the 
container lifecycle in HDDS.

Container lifecycle:
{noformat}
[ALLOCATED]--->[CREATING]--->[OPEN]--->[CLOSING]--->[CLOSED]
    (CREATE)       |(CREATED)  (FINALIZE)   (CLOSE)     |
                   |                                    |
                   |(TIMEOUT)                  (DELETE) |
                   |                                    |
                   +---------->[DELETING]<--------------+
                                   |
                                   | (CLEANUP)
                                   v
                               [DELETED]
{noformat}

  was:
This jira aims to add end-to-end test-cases to test the transitions of the 
container lifecycle in HDDS.

Container lifecycle:
{noformat}
[ALLOCATED]--->[CREATING]--->[OPEN]--->[CLOSING]--->[CLOSED]
    (CREATE)       |(CREATED)  (FINALIZE)   (CLOSE)     |
                   |                                    |
                   |(TIMEOUT)                  (DELETE) |
                   |                                    |
                   +---------->[DELETING]<--------------+
                                   |
                                   | (CLEANUP)
                                   v
                               [DELETED]
{noformat}


> End to end testcase to test container lifecycle
> ---
>
> Key: HDDS-316
> URL: https://issues.apache.org/jira/browse/HDDS-316
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Priority: Major
>
> This jira aims to add end-to-end test-cases to test the transitions of the 
> container lifecycle in HDDS.
> Container lifecycle:
> {noformat}
> [ALLOCATED]--->[CREATING]--->[OPEN]--->[CLOSING]--->[CLOSED]
>     (CREATE)       |(CREATED)  (FINALIZE)   (CLOSE)     |
>                    |                                    |
>                    |(TIMEOUT)                  (DELETE) |
>                    |                                    |
>                    +---------->[DELETING]<--------------+
>                                    |
>                                    | (CLEANUP)
>                                    v
>                                [DELETED]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-304) Process ContainerAction from datanode heartbeat in SCM

2018-08-02 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566670#comment-16566670
 ] 

Nanda kumar commented on HDDS-304:
--

Created HDDS-316 for adding an end-to-end test.

> Process ContainerAction from datanode heartbeat in SCM
> --
>
> Key: HDDS-304
> URL: https://issues.apache.org/jira/browse/HDDS-304
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-304.000.patch
>
>
> Datanodes send ContainerActions as part of the heartbeat; we must add logic in 
> SCM to process those ContainerActions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-304) Process ContainerAction from datanode heartbeat in SCM

2018-08-02 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1657#comment-1657
 ] 

Mukul Kumar Singh commented on HDDS-304:


Thanks for working on this [~nandakumar131].
+1, the patch looks good to me. I will commit this shortly.

Can we add an end-to-end test for this change, by writing data to a container 
from the client?


> Process ContainerAction from datanode heartbeat in SCM
> --
>
> Key: HDDS-304
> URL: https://issues.apache.org/jira/browse/HDDS-304
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-304.000.patch
>
>
> Datanodes send ContainerActions as part of the heartbeat; we must add logic in 
> SCM to process those ContainerActions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-315) ozoneShell infoKey does not work for directories created as key and throws 'KEY_NOT_FOUND' error

2018-08-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-315:
---

 Summary: ozoneShell infoKey does not work for directories created 
as key and throws 'KEY_NOT_FOUND' error
 Key: HDDS-315
 URL: https://issues.apache.org/jira/browse/HDDS-315
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


infoKey for directories created using ozoneFs does not work and throws a 
'KEY_NOT_FOUND' error. However, the directory shows up in the 'listKey' command.

In the example here, 'dir1' was created using ozoneFS; infoKey for the directory 
throws an error.

 

 
{noformat}
hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1
2018-08-02 11:34:06 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Command Failed : Lookup key failed, error:KEY_NOT_FOUND
hadoop@08315aa4b367:~/bin$ ./ozone oz -infoKey /root-volume/root-bucket/dir1/
2018-08-02 11:34:16 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Command Failed : Lookup key failed, error:KEY_NOT_FOUND
hadoop@08315aa4b367:~/bin$ ./ozone oz -listKey /root-volume/root-bucket/
2018-08-02 11:34:21 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
[ {
 "version" : 0,
 "md5hash" : null,
 "createdOn" : "Wed, 07 May +50555 12:44:16 GMT",
 "modifiedOn" : "Wed, 07 May +50555 12:44:30 GMT",
 "size" : 0,
 "keyName" : "dir1/"
}, {
 "version" : 0,
 "md5hash" : null,
 "createdOn" : "Wed, 07 May +50555 14:14:06 GMT",
 "modifiedOn" : "Wed, 07 May +50555 14:14:19 GMT",
 "size" : 0,
 "keyName" : "dir2/"
}, {
 "version" : 0,
 "md5hash" : null,
 "createdOn" : "Thu, 08 May +50555 21:40:55 GMT",
 "modifiedOn" : "Thu, 08 May +50555 21:40:59 GMT",
 "size" : 0,
 "keyName" : "dir2/b1/"{noformat}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13772) Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding policies which are already enabled/disabled

2018-08-02 Thread Souryakanta Dwivedy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Souryakanta Dwivedy updated HDFS-13772:
---
Priority: Trivial  (was: Minor)

> Erasure coding: Unnecessary NameNode Logs displaying for Enabling/Disabling 
> Erasure coding policies which are already enabled/disabled
> --
>
> Key: HDFS-13772
> URL: https://issues.apache.org/jira/browse/HDFS-13772
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SuSE Linux cluster 
>Reporter: Souryakanta Dwivedy
>Priority: Trivial
> Attachments: EC_capture1.PNG
>
>
> Unnecessary NameNode Logs displaying for Enabling/Disabling Erasure coding 
> policies which are already enabled/disabled
> - Enable any Erasure coding policy like "RS-LEGACY-6-3-1024k"
> - Check the console log display as "Erasure coding policy RS-LEGACY-6-3-1024k 
> is enabled"
> - Again try to enable the same policy multiple times with "hdfs ec -enablePolicy 
> -policy RS-LEGACY-6-3-1024k"
>  Instead of throwing an error message such as "policy already enabled", it 
> displays the same message, "Erasure coding policy RS-LEGACY-6-3-1024k is 
> enabled"
> - Also in NameNode log policy enabled logs are displaying multiple times 
> unnecessarily even though the policy is already enabled.
>  like this : 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Disable 
> the erasure coding policy RS-10-4-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> 2018-07-27 18:50:35,084 INFO 
> org.apache.hadoop.hdfs.server.namenode.ErasureCodingPolicyManager: Enable the 
> erasure coding policy RS-LEGACY-6-3-1024k
> - While executing the erasure coding policy disable command, the same type of 
> log message appears multiple times even though the policy is already disabled. 
> It should throw an error message such as "policy is already disabled" for an 
> already disabled policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-286) Fix NodeReportPublisher.getReport NPE

2018-08-02 Thread Junjie Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566588#comment-16566588
 ] 

Junjie Chen commented on HDDS-286:
--

Hi Xiaoyu,

I can't reproduce this on the latest trunk with the command "mvn test 
-Dtest=TestKeys -Phdds". Please see the logs below:

[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.ozone.web.client.TestKeys
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.371 
s - in org.apache.hadoop.ozone.web.client.TestKeys
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0

Could you please elaborate?


> Fix NodeReportPublisher.getReport NPE
> -
>
> Key: HDDS-286
> URL: https://issues.apache.org/jira/browse/HDDS-286
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> This can be reproed with TestKeys#testPutKey
> {code}
> 2018-07-23 21:33:55,598 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 0: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:350)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:260)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-314) ozoneShell putKey command overwrites the existing key having same name

2018-08-02 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi reassigned HDDS-314:
---

Assignee: Nilotpal Nandi

> ozoneShell putKey command overwrites the existing key having same name
> --
>
> Key: HDDS-314
> URL: https://issues.apache.org/jira/browse/HDDS-314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Fix For: 0.2.1
>
>
> steps taken : 
> 1) created a volume root-volume and a bucket root-bucket.
> 2)  Ran following command to put a key with name 'passwd'
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/services -v
> 2018-08-02 09:20:17 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : 567c100888518c1163b3462993de7d47
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:18 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
>  
> {noformat}
> 3) Ran following command to put a key with name 'passwd' again.
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/passwd -v
> 2018-08-02 09:20:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : b056233571cc80d6879212911cb8e500
> 2018-08-02 09:20:41 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:42 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl 
> detectProxy{noformat}
>  
> key 'passwd' was overwritten with new content and it did not throw any error 
> saying that the key is already present.
> Expectation :
> ---
> Overwriting an existing key with the same name should not be allowed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13786) EC: Display erasure coding policy for sub-directories is not working

2018-08-02 Thread Souryakanta Dwivedy (JIRA)
Souryakanta Dwivedy created HDFS-13786:
--

 Summary: EC: Display erasure coding policy for sub-directories is 
not working
 Key: HDFS-13786
 URL: https://issues.apache.org/jira/browse/HDFS-13786
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0
 Environment: 3 Node SUSE Linux Cluster
Reporter: Souryakanta Dwivedy
 Attachments: Display_EC_Policy_Missing_Sub_Dir.png

EC: Display erasure coding policy for sub-directories is not working

- Create a Directory 
 - Set EC policy for the Directory
 - Create a file in-side that Directory 
 - Create a sub-directory inside the parent directory
 - Check the EC policy set for the files and sub-folders of the parent 
directory with the command 
 "hadoop fs -ls -e /ecdir" 
 The EC policy will be displayed only for files and will be missing for 
sub-directories, which is wrong behavior
 - But if you check the EC policy of a sub-directory with "hdfs ec -getPolicy 
", it will show 
 the EC policy
 
 Actual output :-
 
 Display erasure coding policy for sub-directories is not working with command 
"hadoop fs -ls -e "

Expected output :-

It should display erasure coding policy for sub-directories also with command 
"hadoop fs -ls -e "



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13785) EC: "removePolicy" is not working for built-in/system Erasure Code policies

2018-08-02 Thread Souryakanta Dwivedy (JIRA)
Souryakanta Dwivedy created HDFS-13785:
--

 Summary: EC: "removePolicy" is not working for built-in/system 
Erasure Code policies
 Key: HDFS-13785
 URL: https://issues.apache.org/jira/browse/HDFS-13785
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0
 Environment: 3 Node SUSE Linux Cluster
Reporter: Souryakanta Dwivedy


EC: "removePolicy" is not working for built-in/system Erasure Code policies

- Check the existing built-in EC policies with command "hdfs ec -listPolicies"
- Try to remove any of the built-in EC policies; it will throw an error message: 
"RemoteException: System erasure coding policy RS-3-2-1024k cannot be removed"
- Add user-defined EC policies 
- Try to remove any user-defined policy; it will be removed successfully
- But in help option it is specified as :
 vm1:/opt/client/install/hadoop/namenode/bin> ./hdfs ec -help removePolicy
[-removePolicy -policy ]

Remove an erasure coding policy.
 The name of the erasure coding policy
vm1:/opt/client/install/hadoop/namenode/bin>

Actual result :-
 hdfs ec -removePolicy does not work for built-in/system EC policies, whereas the 
usage description says "Remove an erasure coding policy". It throws an exception: 
"RemoteException: System erasure coding policy RS-3-2-1024k cannot be removed"

Expected output : Either the EC "removePolicy" option should be applicable to all 
types of EC policies, or the usage should specify that EC "removePolicy" removes 
only user-defined EC policies and is not applicable to system EC policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-314) ozoneShell putKey command overwrites the existing key having same name

2018-08-02 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-314:
---

 Summary: ozoneShell putKey command overwrites the existing key 
having same name
 Key: HDDS-314
 URL: https://issues.apache.org/jira/browse/HDDS-314
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Nilotpal Nandi
 Fix For: 0.2.1


steps taken : 

1) created a volume root-volume and a bucket root-bucket.

2)  Ran following command to put a key with name 'passwd'

 
{noformat}
hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
-file /etc/services -v
2018-08-02 09:20:17 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Volume Name : root-volume
Bucket Name : root-bucket
Key Name : passwd
File Hash : 567c100888518c1163b3462993de7d47
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
(custom)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 ms 
(default)
2018-08-02 09:20:18 INFO ConfUtils:41 - 
raft.client.async.outstanding-requests.max = 100 (default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 3 
(default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
(=1048576) (default)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
(custom)
2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 
ms (default)
Aug 02, 2018 9:20:18 AM 
org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
 
{noformat}
3) Ran the following command to put a key named 'passwd' again:
{noformat}
hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
-file /etc/passwd -v
2018-08-02 09:20:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Volume Name : root-volume
Bucket Name : root-bucket
Key Name : passwd
File Hash : b056233571cc80d6879212911cb8e500
2018-08-02 09:20:41 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
(custom)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 ms 
(default)
2018-08-02 09:20:42 INFO ConfUtils:41 - 
raft.client.async.outstanding-requests.max = 100 (default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 3 
(default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
(=1048576) (default)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
(custom)
2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 
ms (default)
Aug 02, 2018 9:20:42 AM 
org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy{noformat}
 

The key 'passwd' was overwritten with the new content, and no error was thrown
saying that the key was already present.

Expectation:

---

Overwriting an existing key with the same name should not be allowed.
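
A minimal client-side sketch of the desired check-before-write behavior, using
the Ozone Java client. Method names such as OzoneClientFactory.getClient,
ObjectStore.getVolume, and OzoneBucket.getKey/createKey follow later Ozone
releases and should be treated as assumptions for 0.2.1:

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.ObjectStore;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;

public class PutKeyNoOverwrite {
  public static void main(String[] args) throws IOException {
    byte[] payload = "example contents".getBytes(StandardCharsets.UTF_8);
    try (OzoneClient client =
        OzoneClientFactory.getClient(new OzoneConfiguration())) {
      ObjectStore store = client.getObjectStore();
      OzoneBucket bucket =
          store.getVolume("root-volume").getBucket("root-bucket");
      // Guard: getKey() throws when the key is absent, which is the
      // "safe to write" case here.
      boolean exists;
      try {
        bucket.getKey("passwd");
        exists = true;
      } catch (IOException keyNotFound) {
        exists = false;
      }
      if (exists) {
        throw new IOException("Key 'passwd' already exists; refusing to overwrite");
      }
      try (OzoneOutputStream out = bucket.createKey("passwd", payload.length)) {
        out.write(payload);
      }
    }
  }
}
{code}

Note that a check-then-write guard in the client is racy; a real fix belongs in
the Ozone Manager (for example an atomic create-if-absent flag on the key-create
request), which is presumably what this issue asks for.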



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-310) VolumeSet shutdown hook fails on datanode restart

2018-08-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566404#comment-16566404
 ] 

Hudson commented on HDDS-310:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14692 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14692/])
HDDS-310. VolumeSet shutdown hook fails on datanode restart. Contributed by
Bharat Viswanadham. (nanda: rev 41da2050bdec14709a86fa8a5cf7da82415fd989)
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java


> VolumeSet shutdown hook fails on datanode restart
> -
>
> Key: HDDS-310
> URL: https://issues.apache.org/jira/browse/HDDS-310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-310.00.patch, HDDS-310.01.patch
>
>
> {code}
> 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
> 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, 
> java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
> Shutdown in progress, cannot remove a shutdownHook
> java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
> Shutdown in progress, cannot remove a shutdownHook
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68)
> Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot 
> remove a shutdownHook
> at 
> org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247)
> at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317)
> at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
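
The committed change edits VolumeSet.java; below is a minimal sketch of the kind
of guard that avoids the failure above. It uses the real
org.apache.hadoop.util.ShutdownHookManager API, but the class itself is
illustrative rather than a copy of the patch:

{code}
import org.apache.hadoop.util.ShutdownHookManager;

/** Illustrative shutdown handling for a VolumeSet-like component. */
class SafeShutdown {
  private static final int HOOK_PRIORITY = 10; // illustrative priority
  private final Runnable shutdownHook = this::shutdown;

  SafeShutdown() {
    ShutdownHookManager.get().addShutdownHook(shutdownHook, HOOK_PRIORITY);
  }

  void shutdown() {
    // ... release volume resources here ...

    // Only deregister the hook when the JVM is NOT already shutting down;
    // removeShutdownHook() throws IllegalStateException otherwise, which is
    // exactly the failure in the stack trace above.
    if (!ShutdownHookManager.get().isShutdownInProgress()) {
      ShutdownHookManager.get().removeShutdownHook(shutdownHook);
    }
  }
}
{code}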



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-310) VolumeSet shutdown hook fails on datanode restart

2018-08-02 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-310:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> VolumeSet shutdown hook fails on datanode restart
> -
>
> Key: HDDS-310
> URL: https://issues.apache.org/jira/browse/HDDS-310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-310.00.patch, HDDS-310.01.patch
>
>
> {code}
> 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
> 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, 
> java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
> Shutdown in progress, cannot remove a shutdownHook
> java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
> Shutdown in progress, cannot remove a shutdownHook
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68)
> Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot 
> remove a shutdownHook
> at 
> org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247)
> at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317)
> at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-310) VolumeSet shutdown hook fails on datanode restart

2018-08-02 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566396#comment-16566396
 ] 

Nanda kumar commented on HDDS-310:
--

Thanks [~bharatviswa] for the contribution. I have committed this to trunk.

> VolumeSet shutdown hook fails on datanode restart
> -
>
> Key: HDDS-310
> URL: https://issues.apache.org/jira/browse/HDDS-310
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-310.00.patch, HDDS-310.01.patch
>
>
> {code}
> 2018-08-01 11:01:57,204 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Thread 
> Interrupted waiting to refresh disk information: sleep interrupted
> 2018-08-01 11:01:57,204 WARN org.apache.hadoop.util.ShutdownHookManager: 
> ShutdownHook 'VolumeSet$$Lambda$13/360062456' failed, 
> java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
> Shutdown in progress, cannot remove a shutdownHook
> java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
> Shutdown in progress, cannot remove a shutdownHook
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:68)
> Caused by: java.lang.IllegalStateException: Shutdown in progress, cannot 
> remove a shutdownHook
> at 
> org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:247)
> at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.shutdown(VolumeSet.java:317)
> at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.lambda$initializeVolumeSet$0(VolumeSet.java:170)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-310) VolumeSet shutdown hook fails on datanode restart

2018-08-02 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566389#comment-16566389
 ] 

genericqa commented on HDDS-310:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-310 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934010/HDDS-310.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7db642152a43 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 735b492 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/683/testReport/ |
| Max. process+thread count | 329 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/683/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> VolumeSet shutdown hook fails on datanode restart
>