[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-06 Thread Lokesh Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606756#comment-16606756
 ] 

Lokesh Jain commented on HDDS-325:
--

Uploaded rebased v9 patch. I have enabled TestBlockDeletion in this patch.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, HDDS-325.009.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> RPC call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.
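The watcher pattern the description refers to can be sketched as follows. The names and structure here are illustrative, not the actual HDDS EventWatcher API: SCM tracks each issued deleteBlocks command, clears it when the matching completion event arrives (e.g. via heartbeat), and resends commands that time out, instead of relying on a dedicated acknowledgement RPC.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an event watcher for deleteBlocks commands.
class DeleteBlocksWatcher {

  // cmdId -> time the command was sent
  private final Map<Long, Long> pending = new ConcurrentHashMap<>();

  void commandSent(long cmdId, long nowMillis) {
    pending.put(cmdId, nowMillis);
  }

  // Invoked when the datanode's heartbeat reports the command as executed.
  void completionEventReceived(long cmdId) {
    pending.remove(cmdId);
  }

  // Commands still pending after the timeout are candidates for resend.
  List<Long> timedOut(long nowMillis, long timeoutMillis) {
    List<Long> expired = new ArrayList<>();
    pending.forEach((id, sentAt) -> {
      if (nowMillis - sentAt >= timeoutMillis) {
        expired.add(id);
      }
    });
    return expired;
  }
}
```

The key point is that completion is observed asynchronously as an event rather than waited for as an RPC response.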



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-06 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-325:
-
Attachment: HDDS-325.009.patch

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch, HDDS-325.009.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> RPC call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.






[jira] [Updated] (HDFS-13884) Improve the description of the setting dfs.image.compress

2018-09-06 Thread Ryan Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Wu updated HDFS-13884:
---
Attachment: HDFS-13884.001.patch

> Improve the description of the setting dfs.image.compress
> -
>
> Key: HDFS-13884
> URL: https://issues.apache.org/jira/browse/HDFS-13884
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.1.1
>Reporter: Yiqun Lin
>Assignee: Ryan Wu
>Priority: Minor
> Attachments: HDFS-13884.001.patch
>
>
> In HDFS-1435, we introduced a new option to store the fsimage compressed. This 
> avoids consuming a lot of network bandwidth when the SBN uploads a new fsimage 
> to the ANN; heavy bandwidth use hurts the ANN's ability to handle normal RPC 
> requests and to sync edit logs.
> This is a very useful setting when the fsimage file is very large. However, the 
> current description of this setting is too brief; we should document it more 
> thoroughly.
> {noformat}
> 
>   dfs.image.compress
>   false
>   Should the dfs image be compressed?
>   
> 
> {noformat}
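A possible expanded entry could look like the following. The wording is illustrative, not the committed patch text; `dfs.image.compression.codec` is the companion setting that selects the codec:

```xml
<property>
  <name>dfs.image.compress</name>
  <value>false</value>
  <description>
    Whether the fsimage should be stored compressed. Compression reduces
    the fsimage size on disk and the network bandwidth needed when the
    Standby NameNode uploads a new fsimage to the Active NameNode, at the
    cost of extra CPU time during fsimage save and load. The compression
    codec is configured by dfs.image.compression.codec.
  </description>
</property>
```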






[jira] [Commented] (HDDS-233) Update ozone to latest ratis snapshot build

2018-09-06 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606755#comment-16606755
 ] 

Shashikant Banerjee commented on HDDS-233:
--

Thanks [~anu]. I would prefer to have 
https://issues.apache.org/jira/browse/RATIS-310 included; it's quite close to 
resolution.

I would like to update the Ratis snapshot version accordingly.

> Update ozone to latest ratis snapshot build
> ---
>
> Key: HDDS-233
> URL: https://issues.apache.org/jira/browse/HDDS-233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
>
> This jira proposes to update Ozone to the latest Ratis snapshot build. It will 
> also add configuration to set the append-entry timeout and to control the 
> number of entries in the retry cache.
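Assuming the Ratis dependency is managed through a single version property in the HDDS parent pom (an assumption here; the property name is illustrative and the version value is a placeholder), the bump itself is a one-line change:

```xml
<!-- Illustrative: property name and version are placeholders, not the
     actual values from the patch. -->
<properties>
  <ratis.version>X.Y.Z-NNNNNNN-SNAPSHOT</ratis.version>
</properties>
```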






[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606732#comment-16606732
 ] 

Hadoop QA commented on HDDS-351:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  1s{color} | {color:orange} root: The patch generated 5 new + 14 unchanged - 
0 fixed = 19 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
7s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
47s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:y

[jira] [Updated] (HDDS-190) Improve shell error message for unrecognized option

2018-09-06 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-190:
--
Priority: Blocker  (was: Major)

> Improve shell error message for unrecognized option
> ---
>
> Key: HDDS-190
> URL: https://issues.apache.org/jira/browse/HDDS-190
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
>
> The error message with an unrecognized option is unfriendly. E.g.
> {code}
> $ ozone oz -badOption
> Unrecognized option: -badOptionERROR: null
> {code}
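A friendlier behavior can be sketched as follows. This is illustrative Java, not the actual Ozone shell code, and the option names are made up; the point is to replace the bare "ERROR: null" with a clear message and a usage hint:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical option validation for a shell front end.
class OptionCheck {

  // Illustrative set of supported options (not the real Ozone option list).
  static final Set<String> KNOWN =
      new HashSet<>(Arrays.asList("-createVolume", "-listVolume", "-putKey"));

  // Returns an error message for an unknown option, or null if acceptable.
  static String check(String arg) {
    if (arg.startsWith("-") && !KNOWN.contains(arg)) {
      return "Unrecognized option: " + arg
          + ". Run 'ozone oz --help' to list the supported options.";
    }
    return null; // recognized option or a positional argument
  }
}
```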






[jira] [Updated] (HDDS-174) Shell error messages are often cryptic

2018-09-06 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-174:
--
Priority: Blocker  (was: Critical)

> Shell error messages are often cryptic
> --
>
> Key: HDDS-174
> URL: https://issues.apache.org/jira/browse/HDDS-174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Nanda kumar
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
>
> Error messages in the Ozone shell are often too cryptic. e.g.
> {code}
> $ ozone oz -putKey /vol1/bucket1/key1 -file foo.txt
> Command Failed : Create key failed, error:INTERNAL_ERROR
> {code}






[jira] [Updated] (HDDS-283) Need an option to list all volumes created in the cluster

2018-09-06 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-283:
--
Priority: Blocker  (was: Major)

> Need an option to list all volumes created in the cluster
> -
>
> Key: HDDS-283
> URL: https://issues.apache.org/jira/browse/HDDS-283
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-283.001.patch
>
>
> Currently, the listVolume command gives either:
> 1) all the volumes created by a particular user, using the -user argument, or
> 2) all the volumes created by the logged-in user, if no -user argument is 
> provided.
>  
> We need an option to list all the volumes created in the cluster.
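The requested selection logic could look like the following sketch. The names and flag semantics are hypothetical, not the attached patch: an explicit "all" flag bypasses the owner filter, otherwise the -user argument or the logged-in user is used.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical volume selection logic for listVolume.
class VolumeLister {

  // Each volume is represented as a {name, owner} pair for simplicity.
  static List<String> list(List<String[]> volumes, String userArg,
                           String loginUser, boolean listAll) {
    // Fall back to the logged-in user when no -user argument is given.
    String owner = (userArg != null) ? userArg : loginUser;
    List<String> result = new ArrayList<>();
    for (String[] v : volumes) {
      if (listAll || v[1].equals(owner)) {
        result.add(v[0]);
      }
    }
    return result;
  }
}
```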






[jira] [Updated] (HDDS-318) ratis INFO logs should not shown during ozoneFs command-line execution

2018-09-06 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-318:
--
Priority: Blocker  (was: Minor)

> ratis INFO logs should not shown during ozoneFs command-line execution
> --
>
> Key: HDDS-318
> URL: https://issues.apache.org/jira/browse/HDDS-318
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Nilotpal Nandi
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
>
> ratis INFO logs should not be shown during ozoneFS CLI execution.
> Please find a snippet from one of the executions:
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone fs -put /etc/passwd /p2
> 2018-08-02 12:17:18 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 12:17:19 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 12:17:20 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 12:17:20 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> ..
> ..
> ..
>  
> {noformat}
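One way to silence this at the CLI level is a log4j override. The logger names below are assumptions based on the packages visible in the snippet and may differ by Ratis version:

```properties
# Raise Ratis and shaded-gRPC chatter above INFO for CLI runs
# (logger names are assumptions, check against your Ratis version).
log4j.logger.org.apache.ratis=WARN
log4j.logger.org.apache.ratis.conf.ConfUtils=WARN
log4j.logger.org.apache.ratis.shaded.io.grpc=ERROR
```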
>  






[jira] [Updated] (HDDS-321) ozoneFS put/copyFromLocal command does not work for a directory when the directory contains file(s) as well as subdirectories

2018-09-06 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-321:
--
Priority: Blocker  (was: Major)

> ozoneFS put/copyFromLocal command does not work for a directory when the 
> directory contains file(s) as well as subdirectories
> -
>
> Key: HDDS-321
> URL: https://issues.apache.org/jira/browse/HDDS-321
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Priority: Blocker
> Fix For: 0.2.1
>
>
> Steps taken:
> -
>  # Created a local directory 'TEST_DIR1' which contains a directory "SUB_DIR1" 
> and a file "test_file1".
>  # Ran "./ozone fs -put TEST_DIR1/ /". The command kept on running, throwing 
> errors on the console.
> Stack trace of the error thrown on the console:
> {noformat}
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 12:55:46 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_3.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$4.get(ManagedChannelImpl.java:403)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl.start(ClientCallImpl.java:238)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1.start(CensusTracingModule.java:386)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1.start(CensusStatsModule.java:679)
>  at 
> org.apache.ratis.shaded.io.grpc.stub.ClientCalls.startCall(ClientCalls.java:293)
>  at 
> org.apache.ratis.shaded.io.grpc.stub.ClientCalls.asyncStreamingRequestCall(ClientCalls.java:283)
>  at 
> org.apache.ratis.shaded.io.grpc.stub.ClientCalls.asyncBidiStreamingCall(ClientCalls.java:92)
>  at 
> org.apache.ratis.shaded.proto.grpc.RaftClientProtocolServiceGrpc$RaftClientProtocolServiceStub.append(RaftClientProtocolServiceGrpc.java:208)
>  at 
> org.apache.ratis.grpc.client.RaftClientProtocolClient.appendWithTimeout(RaftClientProtocolClient.java:139)
>  at 
> org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:109)
>  at 
> org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:88)
>  at 
> org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:302)
>  at 
> org.apache.ratis.client.impl.RaftClientImpl.sendRequestWithRetry(RaftClientImpl.java:256)
>  at org.apache.ratis.client.impl.RaftClientImpl.send(RaftClientImpl.java:192)
>  at org.apache.ratis.client.impl.RaftClientImpl.send(RaftClientImpl.java:173)
>  at org.apache.ratis.client.RaftClient.send(RaftClient.java:80)
>  at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequest(XceiverClientRatis.java:218)
>  at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommand(XceiverClientRatis.java:235)
>  at 
> org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunk(ContainerProtocolCalls.java:219)
>  at 
> org.apache.hadoop.hdds.scm.storage.ChunkOutputStream.writeChu
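The root cause in the trace above is reproducible in isolation: java.net.URI rejects '_' in a hostname when server-based authority parsing is required, which is what gRPC's proxy detection triggers for Docker Compose hostnames like ozone_datanode_3.ozone_default:

```java
import java.net.URI;
import java.net.URISyntaxException;

// Demonstrates why underscore hostnames fail inside the gRPC proxy lookup.
class UnderscoreHost {

  static boolean isParseableHost(String host) {
    try {
      // The multi-argument constructor forces server-based authority
      // parsing, so hostnames are validated strictly (alphanumerics,
      // '-' and '.' only).
      new URI("https", null, host, 9858, null, null, null);
      return true;
    } catch (URISyntaxException e) {
      return false;
    }
  }
}
```

Renaming the compose services to use hyphens instead of underscores is one way to sidestep the parse failure.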

[jira] [Created] (HDFS-13903) write data into hdfs is very slow while the hdfs has mounted to local file system via nfs3

2018-09-06 Thread Liao Chunbo (JIRA)
Liao Chunbo created HDFS-13903:
--

 Summary: write data into hdfs is very slow while the hdfs has 
mounted to local file system via nfs3
 Key: HDFS-13903
 URL: https://issues.apache.org/jira/browse/HDFS-13903
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.1
Reporter: Liao Chunbo


I installed Hadoop 3.1.1 and mounted HDFS to the local filesystem via the 
command "mount -t nfs -o vers=3,nolock,rw,async,wsize=32768,rsize=32768 
hdfsserver:/ /mydir".
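For comparison, the HDFS NFS Gateway documentation recommends a sync mount; the async option used above lets the kernel reorder writes, which the gateway has to buffer and which can make writes very slow (exact option set may vary by release):

```shell
# Mount recommended by the HDFS NFS Gateway docs; note 'sync' in place
# of 'async' to reduce out-of-order writes reaching the gateway.
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync hdfsserver:/ /mydir
```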

 

When I use the cp command to copy files to the mounted directory (/mydir), the 
speed is very slow, and exceptions such as the following occurred:

 

2018-09-07 11:04:51,952 INFO security.ShellBasedIdMapping: Update cache now

2018-09-07 11:08:48,305 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:08:48,806 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:08:48,816 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:08:48,829 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:08:49,001 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:10:24,975 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:11:19,747 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:11:19,756 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:11:19,768 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:12:17,220 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:12:17,233 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:12:35,109 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:12:36,398 ERROR nfs3.RpcProgramNfs3: Setting file size is not 
supported when setattr, fileId: 27204

2018-09-07 11:12:44,424 ERROR nfs3.RpcProgramNfs3: Setting file size is not 
supported when setattr, fileId: 27204

2018-09-07 11:12:53,382 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:12:53,394 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:12:59,658 ERROR nfs3.RpcProgramNfs3: Setting file size is not 
supported when setattr, fileId: 27204

2018-09-07 11:13:17,519 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:13:17,533 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:13:18,602 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:13:18,613 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:13:18,933 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:13:42,596 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:13:42,608 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:18:30,308 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:19:23,438 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:19:23,446 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:19:23,666 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:19:31,833 INFO security.ShellBasedIdMapping: Can't map group 
supergroup. Use its string hashcode:-1710818332

2018-09-07 11:19:51,967 INFO security.ShellBasedIdMapping: Update cache now

2018-09-07 11:20:02,884 WARN hdfs.DataStreamer: Exception for 
BP-1952000504-10.56.233.182-1536129789677:blk_1073751181_10452

java.io.EOFException: Unexpected EOF while trying to read response from server

    at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:549)

    at 
org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)

    at 
org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.

[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-06 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606703#comment-16606703
 ] 

Chao Sun commented on HDFS-13791:
-

Thanks [~xkrogen]! The v3 patch looks great. The idea of a primary log makes 
sense to me. Could you attach the trunk part of this patch to HDFS-15726? I'd 
be happy to review it there too.

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch, HDFS-13791-HDFS-12943.002.patch, 
> HDFS-13791-HDFS-12943.003.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailing only on the order of 
> every tens of seconds, this is fine. With the work in HDFS-13150, however, 
> edits may be tailed every few milliseconds, which can flood the logs with 
> tailing-related statements. We should throttle it to limit it to printing at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.
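The throttling described above can be sketched as a small helper. This is a simplified stand-in, not Hadoop's actual implementation: log at most once per period, and report how many statements were suppressed since the last emitted line.

```java
// Hypothetical log throttle: suppress repeats within a fixed period.
class LogThrottle {

  private final long periodMillis;
  private long lastLogMillis = Long.MIN_VALUE;
  private long suppressed = 0;

  LogThrottle(long periodMillis) {
    this.periodMillis = periodMillis;
  }

  // Returns -1 if the caller should skip logging; otherwise the number
  // of messages suppressed since the last emitted log line.
  long record(long nowMillis) {
    if (lastLogMillis != Long.MIN_VALUE
        && nowMillis - lastLogMillis < periodMillis) {
      suppressed++;
      return -1;
    }
    long skipped = suppressed;
    suppressed = 0;
    lastLogMillis = nowMillis;
    return skipped;
  }
}
```

Because the tailing log statements are spread across several classes, a shared helper like this would be passed to (or held by) each call site rather than duplicated.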






[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606690#comment-16606690
 ] 

Hadoop QA commented on HDFS-13862:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  6s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13862 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938750/HDFS-13862-03.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e5c25d818896 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 396ce7b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24995/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24995/testReport/ |
| Max. process+thread count | 1046 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-projec

[jira] [Comment Edited] (HDDS-351) Add chill mode state to SCM

2018-09-06 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606688#comment-16606688
 ] 

Ajay Kumar edited comment on HDDS-351 at 9/7/18 4:02 AM:
-

[~anu] The SCMChillModeManager constructor has a Preconditions check for a null 
container list. The latest patch removes it and handles the null case in 
ContainerChillModeRule. For the second point regarding the race condition, I think 
it is already handled by the if condition in the last patch.
{code}
if (containerMap.containsKey(c.getContainerID())) {
  if (containerMap.remove(c.getContainerID()) != null) {
    containerWithMinReplicas.getAndAdd(1);
  }
}
{code}


was (Author: ajayydv):
[~anu] SCMChillModeManager constructor has a preconditions check for null 
container list. Latest patch removes it and handles it in 
ContainerChillModeRule. For second point regarding race condition i think it is 
handled with new if condition. 
{code}
if (containerMap.containsKey(c.getContainerID())) {
  if (containerMap.remove(c.getContainerID()) != null) {
    containerWithMinReplicas.getAndAdd(1);
  }
}
{code}

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch, 
> HDDS-351.07.patch, HDDS-351.08.patch, HDDS-351.09.patch, HDDS-351.10.patch, 
> HDDS-351.11.patch, HDDS-351.12.patch, HDDS-351.13.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-06 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606688#comment-16606688
 ] 

Ajay Kumar commented on HDDS-351:
-

[~anu] The SCMChillModeManager constructor has a Preconditions check for a null 
container list. The latest patch removes it and handles the null case in 
ContainerChillModeRule. For the second point regarding the race condition, I think 
it is handled by the new if condition.
{code}
if (containerMap.containsKey(c.getContainerID())) {
  if (containerMap.remove(c.getContainerID()) != null) {
    containerWithMinReplicas.getAndAdd(1);
  }
}
{code}
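The atomicity argument can be sketched outside the SCM code. In the sketch below, the class and field names are illustrative, not the actual SCMChillModeManager types: ConcurrentHashMap.remove() returns a non-null old value for exactly one caller, so even if two handlers process a report for the same container concurrently, the counter advances once. The outer containsKey() check in the patch is then only a shortcut; correctness comes from remove().

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only, not the actual SCMChillModeManager code:
// remove() returns non-null for exactly one caller, so concurrent
// handlers cannot double-count the same container.
public class RemoveOnceCounter {
  private final ConcurrentHashMap<Long, Boolean> containerMap =
      new ConcurrentHashMap<>();
  private final AtomicLong containerWithMinReplicas = new AtomicLong();

  public RemoveOnceCounter(long... containerIds) {
    for (long id : containerIds) {
      containerMap.put(id, Boolean.TRUE);
    }
  }

  // Safe without external locking: only the thread whose remove()
  // succeeds increments the counter.
  public void process(long containerId) {
    if (containerMap.remove(containerId) != null) {
      containerWithMinReplicas.getAndAdd(1);
    }
  }

  public long count() {
    return containerWithMinReplicas.get();
  }

  public static void main(String[] args) throws InterruptedException {
    RemoveOnceCounter counter = new RemoveOnceCounter(1L);
    // Two threads report the same container concurrently.
    Thread t1 = new Thread(() -> counter.process(1L));
    Thread t2 = new Thread(() -> counter.process(1L));
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    System.out.println(counter.count()); // the container is counted once
  }
}
```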

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch, 
> HDDS-351.07.patch, HDDS-351.08.patch, HDDS-351.09.patch, HDDS-351.10.patch, 
> HDDS-351.11.patch, HDDS-351.12.patch, HDDS-351.13.patch
>
>
> Add chill mode state to SCM






[jira] [Updated] (HDDS-351) Add chill mode state to SCM

2018-09-06 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-351:

Attachment: HDDS-351.13.patch

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch, 
> HDDS-351.07.patch, HDDS-351.08.patch, HDDS-351.09.patch, HDDS-351.10.patch, 
> HDDS-351.11.patch, HDDS-351.12.patch, HDDS-351.13.patch
>
>
> Add chill mode state to SCM






[jira] [Commented] (HDFS-13899) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606681#comment-16606681
 ] 

Hadoop QA commented on HDFS-13899:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13899 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938745/HDFS-13899.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ad1f8cac0369 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 396ce7b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24994/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24994/artifact/out/patch

[jira] [Commented] (HDFS-13902) Add jmx conf and stacks menus to the datanode page

2018-09-06 Thread maobaolong (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606667#comment-16606667
 ] 

maobaolong commented on HDFS-13902:
---

[~fengchuang] Looks good, we can easily open the stacks, jmx, and conf page 
links. Thank you.

>  Add jmx conf and stacks menus to the datanode page
> ---
>
> Key: HDFS-13902
> URL: https://issues.apache.org/jira/browse/HDFS-13902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.3
>Reporter: fengchuang
>Priority: Minor
> Attachments: HDFS-13902.001.patch
>
>







[jira] [Commented] (HDFS-13806) EC: No error message for unsetting EC policy of the directory inherits the erasure coding policy from an ancestor directory

2018-09-06 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1660#comment-1660
 ] 

Xiao Chen commented on HDFS-13806:
--

+1 on patch 6 from me. 

> EC: No error message for unsetting EC policy of the directory inherits the 
> erasure coding policy from an ancestor directory
> ---
>
> Key: HDFS-13806
> URL: https://issues.apache.org/jira/browse/HDFS-13806
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
> Environment: 3 Node SUSE Linux cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Attachments: HDFS-13806-01.patch, HDFS-13806-02.patch, 
> HDFS-13806-03.patch, HDFS-13806-04.patch, HDFS-13806-05.patch, 
> HDFS-13806-06.patch, No_error_unset_ec_policy.png
>
>
> No error message is thrown when unsetting the EC policy of a directory that 
> inherits the erasure coding policy from an ancestor directory.
> Steps:
> --
>  - Create a directory
>  - Set an EC policy for the directory
>  - Create a file inside that directory
>  - Create a sub-directory inside the parent directory
>  - Check that both the file and the sub-directory inherit the EC policy from 
> the parent directory
>  - Try to unset the EC policy for the file and check that it throws an error: 
> [Cannot unset an erasure coding policy on a file]
>  - Try to unset the EC policy for the sub-directory and check that it shows a 
> success message [Unset erasure coding policy from ] instead of throwing an 
> error message, which is incorrect behavior
> Actual output:
> No proper error message is thrown when unsetting the EC policy of a directory 
> that inherits the erasure coding policy from an ancestor directory; a success 
> message is displayed instead of an error message.
> Expected output:
> A proper error message should be thrown when trying to unset the EC policy of 
> a directory that inherits the erasure coding policy from an ancestor 
> directory, like the error message thrown when unsetting the EC policy of a 
> file that inherits the erasure coding policy from an ancestor directory.






[jira] [Comment Edited] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-06 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606665#comment-16606665
 ] 

Xiao Chen edited comment on HDFS-13820 at 9/7/18 3:25 AM:
--

Thanks Hrishikesh for revving. By 'pre-commit fix' I meant the checkstyle 
warning from pre-commit. :)
+1 pending


was (Author: xiaochen):
Thanks Hrishikesh for revving. By pre-commit fix I meant checkstyle. :)
+1 pending

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch, HDFS-13820-002.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.






[jira] [Commented] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-06 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606665#comment-16606665
 ] 

Xiao Chen commented on HDFS-13820:
--

Thanks Hrishikesh for revving. By pre-commit fix I meant checkstyle. :)
+1 pending

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch, HDFS-13820-002.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.
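The add/remove gating requested above can be sketched as follows. All names are hypothetical, not the actual CacheReplicationMonitor API: the rescan thread would be started when the first cached path is added and stopped when the last one is removed, instead of waking up every 30 seconds regardless.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the proposed gating (names are hypothetical,
// not the real CacheReplicationMonitor): the monitor runs only while
// at least one cache directive exists.
public class GatedCacheMonitor {
  private final Set<String> cachedPaths = new HashSet<>();
  private boolean monitorRunning = false;

  public synchronized void addDirective(String path) {
    cachedPaths.add(path);
    if (!monitorRunning) {
      monitorRunning = true;   // first path added: start the rescan thread
    }
  }

  public synchronized void removeDirective(String path) {
    cachedPaths.remove(path);
    if (cachedPaths.isEmpty() && monitorRunning) {
      monitorRunning = false;  // last path removed: stop the rescan thread
    }
  }

  public synchronized boolean isMonitorRunning() {
    return monitorRunning;
  }
}
```

The methods are synchronized so that a concurrent add and remove cannot leave the running flag inconsistent with the directive set.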






[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606664#comment-16606664
 ] 

Anu Engineer commented on HDDS-351:
---

{quote} Addressed all your comments in patch v11.
{quote}
[~ajayydv] Thanks for the quick response on the comments. Are you sure you have 
attached the latest version of the patch?

Somehow I see that these two issues are not addressed in patch v12.
{code:java}
1. SCMChillModeManager.java:ContainerChillModeRule - The function crashes with 
NullPointerException if the container is null. Check whether the container is 
null before accessing the variable. It is easy to reproduce by passing null 
instead of the container in TestSCMChillModeManager: Line 66.

2. SCMChillModeManager.java#ContainerChillModeRule#process - Most probably a 
no-op comment. You might want to check whether the EventQueue has a 
serialization guarantee, that is, whether it delivers only one event at a time. 
If there is no such guarantee in the EventQueue class, we have a race condition 
in the process function.
{code}
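The first review point amounts to validating input inside the rule rather than relying on a constructor Preconditions check. A minimal sketch, with a hypothetical container type and rule (these names are illustrative, not the real ContainerChillModeRule API):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical container type; not the real Ozone Container class.
class Container {
  final long id;
  Container(long id) { this.id = id; }
}

public class NullSafeRule {
  // Counts reported containers, skipping null entries instead of
  // crashing with NullPointerException mid-iteration.
  public static int countValid(List<Container> containers) {
    int count = 0;
    for (Container c : containers) {
      if (c == null) {
        continue; // guard: a null entry must not crash the rule
      }
      count++;
    }
    return count;
  }

  public static void main(String[] args) {
    List<Container> reported =
        Arrays.asList(new Container(1L), null, new Container(2L));
    System.out.println(countValid(reported)); // null entry skipped, not fatal
  }
}
```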
 

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch, 
> HDDS-351.07.patch, HDDS-351.08.patch, HDDS-351.09.patch, HDDS-351.10.patch, 
> HDDS-351.11.patch, HDDS-351.12.patch
>
>
> Add chill mode state to SCM






[jira] [Updated] (HDFS-13902) Add jmx conf and stacks menus to the datanode page

2018-09-06 Thread fengchuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchuang updated HDFS-13902:
--
Attachment: HDFS-13902.001.patch

>  Add jmx conf and stacks menus to the datanode page
> ---
>
> Key: HDFS-13902
> URL: https://issues.apache.org/jira/browse/HDFS-13902
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.0.3
>Reporter: fengchuang
>Priority: Minor
> Attachments: HDFS-13902.001.patch
>
>







[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-06 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606647#comment-16606647
 ] 

Ayush Saxena commented on HDFS-13862:
-

Thanks [~elgoiri] for the comment.
I have made the changes as per your suggestions.
Please review.

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch, HDFS-13862-02.patch, 
> HDFS-13862-03.patch
>
>
> Test Steps:
> The commands below are not being captured in the Router logs.
>  # The destination entry name in the add command. The log only says "Added new 
> mount point /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable






[jira] [Updated] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-06 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13862:

Attachment: HDFS-13862-03.patch

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch, HDFS-13862-02.patch, 
> HDFS-13862-03.patch
>
>
> Test Steps:
> The commands below are not being captured in the Router logs.
>  # The destination entry name in the add command. The log only says "Added new 
> mount point /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable






[jira] [Commented] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-09-06 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606645#comment-16606645
 ] 

Siyao Meng commented on HDFS-13838:
---

+1. The Jenkins test failures are unrelated.

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch, 
> HDFS-13838.003.patch, HDFS-13838.004.patch
>
>
> The "snapshot enabled" status was added in HDFS-12455 by [~ajaykumar].
> However, [~jojochuang] found that WebHdfsFileSystem.getFileStatus() does not 
> return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. This will be 
> addressed in a separate jira HDFS-13886.
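The missing check described above amounts to reading one more boolean out of the JSON map when building the file status. A minimal sketch follows; the JSON key and method names are assumptions for illustration, not the actual JsonUtilClient code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the kind of fix described above (key and method
// names are assumptions, not the real JsonUtilClient API): read the
// snapshot-enabled flag from the JSON map instead of dropping it.
public class FileStatusJson {
  public static boolean isSnapshotEnabled(Map<String, Object> json) {
    // An absent flag means "not snapshot-enabled", matching the default.
    Object flag = json.get("snapshotEnabled");
    return flag instanceof Boolean && (Boolean) flag;
  }

  public static void main(String[] args) {
    Map<String, Object> json = new HashMap<>();
    json.put("pathSuffix", "bar");
    System.out.println(isSnapshotEnabled(json)); // flag absent: false
    json.put("snapshotEnabled", true);
    System.out.println(isSnapshotEnabled(json)); // flag present: true
  }
}
```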






[jira] [Created] (HDFS-13902) Add jmx conf and stacks menus to the datanode page

2018-09-06 Thread fengchuang (JIRA)
fengchuang created HDFS-13902:
-

 Summary:  Add jmx conf and stacks menus to the datanode page
 Key: HDFS-13902
 URL: https://issues.apache.org/jira/browse/HDFS-13902
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.3
Reporter: fengchuang









[jira] [Commented] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606621#comment-16606621
 ] 

Hadoop QA commented on HDDS-400:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 27s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 57s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.freon.TestFreon |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.TestMiniOzoneCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-400 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938664/HDDS-400.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | 

[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606615#comment-16606615
 ] 

Hadoop QA commented on HDFS-13791:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 7s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
37s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
38s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
52s{color} | {color:red} hadoop-hdfs in HDFS-12943 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
45s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs generated 0 new + 5 
unchanged - 2 fixed = 5 total (was 7) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 |
| JIRA Issue | HDFS-13791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938732/HDFS-13791-HDFS-12943.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9f14a5fd74e1 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
|

[jira] [Updated] (HDFS-13899) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2018-09-06 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HDFS-13899:
--
Status: Open  (was: Patch Available)

> Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by 
> DatanodeWebHdfsMethods
> ---
>
> Key: HDFS-13899
> URL: https://issues.apache.org/jira/browse/HDFS-13899
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: sunlisheng
>Priority: Major
> Attachments: HDFS-13899.000.patch
>
>
> Unexpected message type: PooledUnsafeDirectByteBuf when getting datanode info 
> via DatanodeWebHdfsMethods.
> Because there is no HttpRequestDecoder in Netty's inbound handler pipeline, an 
> unexpected message type appears when the message is read.
>  
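The failure mode above can be modeled in plain Java (a hedged sketch with hypothetical names, not the actual Netty or WebHDFS code): a handler that expects a decoded HTTP request rejects the raw byte payload unless a decoder stage runs first in the pipeline.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class PipelineSketch {
    // Stand-ins for Netty's ByteBuf and HttpRequest message types.
    record RawBytes(byte[] data) {}
    record HttpRequest(String line) {}

    // A decoder stage analogous to HttpRequestDecoder: turns raw bytes
    // into a typed HTTP request before downstream handlers see it.
    static final Function<Object, Object> HTTP_DECODER = msg ->
        msg instanceof RawBytes raw
            ? new HttpRequest(new String(raw.data()))
            : msg;

    // Downstream handler that only understands decoded requests,
    // mirroring the "unexpected message type" error in the report.
    static String handle(List<Function<Object, Object>> pipeline, Object msg) {
        for (Function<Object, Object> stage : pipeline) {
            msg = stage.apply(msg);
        }
        if (msg instanceof HttpRequest req) {
            return "handled: " + req.line();
        }
        throw new IllegalStateException(
            "Unexpected message type: " + msg.getClass().getSimpleName());
    }

    public static void main(String[] args) {
        RawBytes wire = new RawBytes("GET /datanode HTTP/1.1".getBytes());

        // Without the decoder stage the handler fails, as in the bug report.
        try {
            handle(new ArrayList<>(), wire);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }

        // With the decoder installed first, the message is handled.
        System.out.println(handle(List.of(HTTP_DECODER), wire));
    }
}
```

The attached patch would correspond to installing the decoder stage ahead of the handler, so the handler never sees raw bytes.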



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13899) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2018-09-06 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HDFS-13899:
--
Attachment: HDFS-13899.000.patch
Status: Patch Available  (was: Open)

> Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by 
> DatanodeWebHdfsMethods
> ---
>
> Key: HDFS-13899
> URL: https://issues.apache.org/jira/browse/HDFS-13899
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: sunlisheng
>Priority: Major
> Attachments: HDFS-13899.000.patch
>
>
> Unexpected message type: PooledUnsafeDirectByteBuf when getting datanode info 
> via DatanodeWebHdfsMethods.
> Because there is no HttpRequestDecoder in Netty's inbound handler pipeline, an 
> unexpected message type appears when the message is read.
>  






[jira] [Updated] (HDFS-13899) Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by DatanodeWebHdfsMethods

2018-09-06 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HDFS-13899:
--
Attachment: (was: HDFS-13899.000.patch)

> Unexpected message type: PooledUnsafeDirectByteBuf when get datanode info by 
> DatanodeWebHdfsMethods
> ---
>
> Key: HDFS-13899
> URL: https://issues.apache.org/jira/browse/HDFS-13899
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: sunlisheng
>Priority: Major
>
> Unexpected message type: PooledUnsafeDirectByteBuf when getting datanode info 
> via DatanodeWebHdfsMethods.
> Because there is no HttpRequestDecoder in Netty's inbound handler pipeline, an 
> unexpected message type appears when the message is read.
>  






[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606606#comment-16606606
 ] 

Hadoop QA commented on HDDS-351:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 34s{color} | {color:orange} root: The patch generated 2 new + 14 unchanged - 
0 fixed = 16 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 23s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
46s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDDS-397) Handle deletion for keys with no blocks

2018-09-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606580#comment-16606580
 ] 

Hudson commented on HDDS-397:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14893 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14893/])
HDDS-397. Handle deletion for keys with no blocks. Contributed by Lokesh (xyao: 
rev 396ce7b884d0bf3f85664b2e0e2321203314bb24)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestKeyDeletingService.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java


> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.
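The shortcut described above can be sketched as follows (a sketch under assumed names, not the actual KeyManagerImpl code): only keys that reference blocks need to pass through the deleted-keys table for later block reclamation, so block-less keys can be dropped immediately.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyDeletionSketch {
    record KeyInfo(List<String> blocks) {}

    // Keys with no blocks are removed from the key table immediately;
    // keys with blocks are parked in the deleted-keys table so the
    // deleting service can reclaim their blocks later.
    static void deleteKey(String name,
                          Map<String, KeyInfo> keyTable,
                          Map<String, KeyInfo> deletedTable) {
        KeyInfo info = keyTable.remove(name);
        if (info != null && !info.blocks().isEmpty()) {
            deletedTable.put(name, info);
        }
    }

    public static void main(String[] args) {
        Map<String, KeyInfo> keys = new HashMap<>();
        Map<String, KeyInfo> deleted = new HashMap<>();
        keys.put("empty", new KeyInfo(List.of()));
        keys.put("full", new KeyInfo(List.of("block-1")));

        deleteKey("empty", keys, deleted);
        deleteKey("full", keys, deleted);
        System.out.println(deleted.keySet()); // only the key with blocks is parked
    }
}
```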






[jira] [Commented] (HDFS-13836) RBF: Handle mount table znode with null value

2018-09-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606561#comment-16606561
 ] 

Hudson commented on HDFS-13836:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14892 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14892/])
HDFS-13836. RBF: Handle mount table znode with null value. Contributed 
(inigoiri: rev 527288ef891dc26019d003bd85ddfd50eb4f3b7b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/curator/TestZKCuratorManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreZK.java


> RBF: Handle mount table znode with null value
> -
>
> Key: HDFS-13836
> URL: https://issues.apache.org/jira/browse/HDFS-13836
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.9.0, 2.10.0, 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13836.001.patch, HDFS-13836.002.patch, 
> HDFS-13836.003.patch, HDFS-13836.004.patch, HDFS-13836.005.patch
>
>
> When we were adding a mount table entry, the router server was terminated. 
> Error messages like the following show up in the log:
>  2018-08-20 14:18:32,404 ERROR 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl:
>  Cannot get data for 0SLASH0testzk: null. 
> The reason is that the router server had created the znode but had not yet 
> set its data before being terminated. The method zkManager.getStringData(path, 
> stat) in StateStoreZooKeeperImpl throws an NPE if the znode has a null value, 
> causing both adding the same mount table entry and deleting the existing 
> znode to fail.
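A minimal sketch of the guard described above (hypothetical method shape; the real fix lives in ZKCuratorManager): treat a znode whose data was never set as absent instead of dereferencing null.

```java
import java.nio.charset.StandardCharsets;

public class ZnodeDataSketch {
    // Returns null for a znode that was created but whose data was never
    // set (the crash-during-add case), instead of throwing an NPE when
    // the raw bytes are decoded.
    static String getStringData(byte[] znodeData) {
        if (znodeData == null) {
            return null;
        }
        return new String(znodeData, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(getStringData(null));
        System.out.println(getStringData("0SLASH0testzk".getBytes(StandardCharsets.UTF_8)));
    }
}
```

With this guard the caller can detect the half-created znode and delete or overwrite it rather than failing the whole add operation.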






[jira] [Assigned] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-06 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDDS-400:
---

Assignee: Elek, Marton  (was: Hanisha Koneru)

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch, 
> HDDS-400.004.patch, HDDS-400.005.patch
>
>
> The current container replication handler compares the reported containers 
> with the previous report and handles over- and under-replicated states.
> But there is no logic to check the cluster-wide replication count: if a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so that all reported containers have 
> enough replicas. 
> We can check the performance implications with genesis, but as a first 
> implementation I think it could be good enough. 
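The cluster-wide check described above could look roughly like this (a sketch under assumed names and data shapes, not the actual ContainerReportHandler API): count replicas per container across all datanode reports and flag every container whose count falls below the replication factor.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ReplicationCheckSketch {
    // For each container id, count on how many datanodes it was reported,
    // and return the ids whose replica count is below the target factor.
    static Set<Long> underReplicated(Map<String, List<Long>> reportsByNode,
                                     int replicationFactor) {
        Map<Long, Integer> replicaCount = new HashMap<>();
        for (List<Long> containers : reportsByNode.values()) {
            for (long id : containers) {
                replicaCount.merge(id, 1, Integer::sum);
            }
        }
        Set<Long> result = new HashSet<>();
        replicaCount.forEach((id, count) -> {
            if (count < replicationFactor) {
                result.add(id);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<Long>> reports = Map.of(
            "dn1", List.of(1L, 2L),
            "dn2", List.of(1L, 2L),
            "dn3", List.of(1L));   // container 2 lost its third replica
        System.out.println(underReplicated(reports, 3));
    }
}
```

A scan like this over every report is the simple-but-potentially-costly part the comment proposes to benchmark with genesis.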






[jira] [Updated] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-06 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-400:

Status: Patch Available  (was: In Progress)

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Hanisha Koneru
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch, 
> HDDS-400.004.patch, HDDS-400.005.patch
>
>
> The current container replication handler compares the reported containers 
> with the previous report and handles over- and under-replicated states.
> But there is no logic to check the cluster-wide replication count: if a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so that all reported containers have 
> enough replicas. 
> We can check the performance implications with genesis, but as a first 
> implementation I think it could be good enough. 






[jira] [Assigned] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-06 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDDS-400:
---

Assignee: Hanisha Koneru  (was: Elek, Marton)

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Hanisha Koneru
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch, 
> HDDS-400.004.patch, HDDS-400.005.patch
>
>
> The current container replication handler compares the reported containers 
> with the previous report and handles over- and under-replicated states.
> But there is no logic to check the cluster-wide replication count: if a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so that all reported containers have 
> enough replicas. 
> We can check the performance implications with genesis, but as a first 
> implementation I think it could be good enough. 






[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606551#comment-16606551
 ] 

Hadoop QA commented on HDFS-12284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDFS-12284 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927689/HDFS-12284.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24993/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.






[jira] [Assigned] (HDDS-408) Read (getKey) operation is very slow

2018-09-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-408:
---

Assignee: Xiaoyu Yao

> Read (getKey) operation is very slow
> 
>
> Key: HDDS-408
> URL: https://issues.apache.org/jira/browse/HDDS-408
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Xiaoyu Yao
>Priority: Blocker
> Fix For: 0.2.1
>
>
> The putKey operation for a 2GB file took around 42 secs:
> real 0m41.955s
> user 0m19.367s
> sys 0m7.480s
>  
> For the same key, the getKey operation took around 8 min 49 secs:
> real 0m41.955s
> user 0m19.367s
> sys 0m7.480s
>  
> The operation was repeated multiple times; results were similar.
> For the read operation, the client waits a long time. In ozone.log, there is 
> a long trail of these messages on all datanodes:
>  
> {noformat}
> 2018-09-06 14:10:52,288 [BlockDeletingService#2] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 14:11:02,413 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3350
> 2018-09-06 14:11:32,414 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3351
> 2018-09-06 14:11:41,797 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 14:11:52,288 [BlockDeletingService#2] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 14:12:02,414 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3352
> 2018-09-06 14:12:32,416 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3353
> 2018-09-06 14:12:48,723 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 14:12:52,289 [BlockDeletingService#2] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 14:13:02,416 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3354
> 2018-09-06 14:13:32,416 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3355
> 2018-09-06 14:13:52,289 [BlockDeletingService#2] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 14:14:02,416 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3356
> 2018-09-06 14:14:31,725 [Datanode ReportManager Thread - 2] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 14:14:32,416 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3357
> 2018-09-06 14:14:52,290 [BlockDeletingService#2] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 14:15:02,417 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3358
> 2018-09-06 14:15:32,418 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3359
> 2018-09-06 14:15:52,290 [BlockDeletingService#2] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 14:16:02,419 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3360
> 2018-09-06 14:16:26,897 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 14:16:32,420 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3361
> 2018-09-06 14:16:52,290 [BlockDeletingService#2] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 14:17:02,420 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3362
> 2018-09-06 14:17:32,420 [Datanode 

[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks

2018-09-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-397:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ljain] for the contribution. I've committed the patch to trunk. 

> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.






[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks

2018-09-06 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-397:

Fix Version/s: (was: 0.3.0)

> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.






[jira] [Commented] (HDDS-397) Handle deletion for keys with no blocks

2018-09-06 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606548#comment-16606548
 ] 

Xiaoyu Yao commented on HDDS-397:
-

Thanks [~ljain] for working on this. Patch LGTM, +1. I will commit it shortly.

> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.






[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-09-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606546#comment-16606546
 ] 

Íñigo Goiri commented on HDFS-12284:


We will discuss the delegation token part in the meeting tomorrow, but I think 
this is ready for review.

From the discussion, the only controversial part is why we need to use 
{{doAs()}} in {{FederationUtil}} to check the JMX.
This part is only needed to get the details of a secured Namenode, so the 
cluster can operate without the change for now.
Given that, I would remove the related changes from 
[^HDFS-12284.003.patch] and open a new JIRA to discuss them.
This will allow us to move forward in the branch and have a working framework 
for the DT side.

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch, 
> HDFS-12284.002.patch, HDFS-12284.003.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.






[jira] [Commented] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606545#comment-16606545
 ] 

Hadoop QA commented on HDFS-13838:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13838 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938710/HDFS-13838.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2f1cf6ef5a7f 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/p

[jira] [Commented] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-06 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606541#comment-16606541
 ] 

Hanisha Koneru commented on HDDS-400:
-

[~elek], Patch v05 LGTM. Can you update the Jira description please.

+1 pending Jenkins.

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch, 
> HDDS-400.004.patch, HDDS-400.005.patch
>
>
> The current container replication handler compares the reported containers 
> with the previous report. It handles over- and under-replicated states.
> But there is no logic to check the cluster-wide replica count: if a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so that all the reported containers are 
> verified to have enough replicas. 
> We can check the performance implications with genesis, but as a first 
> implementation I think it could be good enough. 
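The cluster-wide check proposed above can be sketched as follows. This is an illustrative standalone class, not the actual SCM ContainerReportHandler code; all class, method, and parameter names here are assumptions:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Minimal sketch of a cluster-wide replica check in the spirit of the
 * proposal above: given the datanodes currently holding each container,
 * find containers whose replica count differs from the expected factor.
 */
public class ReplicaCheck {

  /**
   * Returns misreplicated containers mapped to the delta
   * (expected - actual): positive means under-replicated,
   * negative means over-replicated.
   */
  public static Map<Long, Integer> findMisreplicated(
      Map<Long, Set<String>> replicasByContainer, int expectedFactor) {
    Map<Long, Integer> delta = new HashMap<>();
    for (Map.Entry<Long, Set<String>> e : replicasByContainer.entrySet()) {
      int actual = e.getValue().size();
      if (actual != expectedFactor) {
        delta.put(e.getKey(), expectedFactor - actual);
      }
    }
    return delta;
  }
}
```

If a datanode goes down, its replicas disappear from the per-container sets, and the affected containers show up with a positive delta even though no new report mentions them.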






[jira] [Updated] (HDFS-13836) RBF: Handle mount table znode with null value

2018-09-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13836:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 3.1.0)
   (was: 3.0.0)
   3.1.2
   3.0.4
   2.10.0
   Status: Resolved  (was: Patch Available)

Thanks [~hfyang20071] for the fix.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> RBF: Handle mount table znode with null value
> -
>
> Key: HDFS-13836
> URL: https://issues.apache.org/jira/browse/HDFS-13836
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.4, 3.1.2, 2.9.0
>
> Attachments: HDFS-13836.001.patch, HDFS-13836.002.patch, 
> HDFS-13836.003.patch, HDFS-13836.004.patch, HDFS-13836.005.patch
>
>
> While we were adding a mount table entry, the router server was terminated. 
> Error messages like the following show up in the log:
>  2018-08-20 14:18:32,404 ERROR 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl:
>  Cannot get data for 0SLASH0testzk: null. 
> The reason is that the router server had created the znode but was 
> terminated before setting its data. The method zkManager.getStringData(path, 
> stat) in StateStoreZooKeeperImpl throws an NPE if the znode holds a null 
> value, so adding the same mount table entry again fails and the existing 
> znode cannot be deleted.
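The NPE described above can be avoided by treating a data-less znode as a legitimate state. A minimal sketch of the null-safe decoding idea (the class and method below are hypothetical illustrations, not the actual StateStoreZooKeeperImpl fix):

```java
import java.nio.charset.StandardCharsets;

/**
 * Illustrative null-safe decoding of znode payloads. A znode that was
 * created but never had its data set yields a null byte array, which
 * must not be dereferenced when converting to a String.
 */
public class ZnodeData {

  /** Returns null instead of throwing NPE when the znode has no data. */
  public static String toStringOrNull(byte[] data) {
    return data == null ? null : new String(data, StandardCharsets.UTF_8);
  }
}
```

A caller that sees null can then treat the record as corrupt and delete or overwrite the znode instead of failing.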






[jira] [Updated] (HDFS-13836) RBF: Handle mount table znode with null value

2018-09-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13836:
---
Summary: RBF: Handle mount table znode with null value  (was: RBF: Handle 
the exception when the mount table znode has null value)

> RBF: Handle mount table znode with null value
> -
>
> Key: HDFS-13836
> URL: https://issues.apache.org/jira/browse/HDFS-13836
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.9.0, 3.0.0, 3.1.0, 3.2.0
>
> Attachments: HDFS-13836.001.patch, HDFS-13836.002.patch, 
> HDFS-13836.003.patch, HDFS-13836.004.patch, HDFS-13836.005.patch
>
>
> While we were adding a mount table entry, the router server was terminated. 
> Error messages like the following show up in the log:
>  2018-08-20 14:18:32,404 ERROR 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl:
>  Cannot get data for 0SLASH0testzk: null. 
> The reason is that the router server had created the znode but was 
> terminated before setting its data. The method zkManager.getStringData(path, 
> stat) in StateStoreZooKeeperImpl throws an NPE if the znode holds a null 
> value, so adding the same mount table entry again fails and the existing 
> znode cannot be deleted.






[jira] [Updated] (HDFS-13836) RBF: Handle the exception when the mount table znode has null value

2018-09-06 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13836:
---
Summary: RBF: Handle the exception when the mount table znode has null 
value  (was: RBF: To handle the exception when the mounttable znode have null 
value.)

> RBF: Handle the exception when the mount table znode has null value
> ---
>
> Key: HDFS-13836
> URL: https://issues.apache.org/jira/browse/HDFS-13836
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, hdfs
>Affects Versions: 3.1.0
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.9.0, 3.0.0, 3.1.0, 3.2.0
>
> Attachments: HDFS-13836.001.patch, HDFS-13836.002.patch, 
> HDFS-13836.003.patch, HDFS-13836.004.patch, HDFS-13836.005.patch
>
>
> While we were adding a mount table entry, the router server was terminated. 
> Error messages like the following show up in the log:
>  2018-08-20 14:18:32,404 ERROR 
> org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreZooKeeperImpl:
>  Cannot get data for 0SLASH0testzk: null. 
> The reason is that the router server had created the znode but was 
> terminated before setting its data. The method zkManager.getStringData(path, 
> stat) in StateStoreZooKeeperImpl throws an NPE if the znode holds a null 
> value, so adding the same mount table entry again fails and the existing 
> znode cannot be deleted.






[jira] [Commented] (HDFS-13900) NameNode: Unable to trigger a roll of the active NN

2018-09-06 Thread Rong Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606531#comment-16606531
 ] 

Rong Tang commented on HDFS-13900:
--

Hi [~liuhongtong], this looks like the same issue as HADOOP-15684, which 
provides a patch fix; please check.

> NameNode: Unable to trigger a roll of the active NN
> ---
>
> Key: HDFS-13900
> URL: https://issues.apache.org/jira/browse/HDFS-13900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuhongtong
>Priority: Critical
>
> I have backported multi-standby NNs to our own HDFS version and found an 
> issue with EditLog rolling.
> h2. Reproducible Steps:
> h3. 1.original state
> nn1 active
> nn2 standby
> nn3 standby
> h3. 2. stop nn1
> h3. 3. new state
> nn1 stopped
> nn2 active
> nn3 standby
> h3. 4. nn3 unable to trigger a roll of the active NN
> [2018-08-22T10:33:38.025+08:00] [WARN] 
> namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java 307) [Edit 
> log tailer] : Unable to trigger a roll of the active NN
> java.net.ConnectException: Call From  to  failed 
> on connection exception: java.net.ConnectException: Connection refused; For 
> more details see:[http://wiki.apache.org/hadoop/ConnectionRefused]
> at sun.reflect.GeneratedConstructorAccessor17.newInstance(Unknown Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:782)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:722)
> at org.apache.hadoop.ipc.Client.call(Client.java:1536)
> at org.apache.hadoop.ipc.Client.call(Client.java:1463)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:237)
> at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:301)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:414)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:304)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$800(EditLogTailer.java:69)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:346)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:315)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:332)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:328)
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:521)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:485)
> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:658)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:756)
> at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:419)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1585)
> at org.apache.hadoop.ipc.Client.call(Client.java:1502)
> ... 14 more






[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606532#comment-16606532
 ] 

Íñigo Goiri commented on HDFS-13862:


I guess the negative cases should be logged as errors, right?

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch, HDFS-13862-02.patch
>
>
> Test Steps :
> Below commands are not getting captured in the Router logs.
>  # Destination entry name in the add command. Log says "Added new mount point 
> /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable






[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-06 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606533#comment-16606533
 ] 

Erik Krogen commented on HDFS-13791:


Attached v003 fixing the broken {{FSNamesystemLock}} test, adding one 
additional test in {{TestLimitedFrequencyLogHelper}} to catch that behavior, 
and fixing checkstyle / license warnings.

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch, HDFS-13791-HDFS-12943.002.patch, 
> HDFS-13791-HDFS-12943.003.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailing only on the order of 
> every tens of seconds, this is fine. With the work in HDFS-13150, however, 
> edits may be tailed every few milliseconds, which can flood the logs with 
> tailing-related statements. We should throttle it to limit it to printing at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.
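The once-per-interval throttling described above can be sketched with a tiny helper. This is an illustrative stand-in under assumed names, not the helper class from the patch:

```java
/**
 * Minimal sketch of suppress-by-interval logging in the spirit of the
 * change above: callers ask shouldLog() before emitting a statement, so
 * a message is printed at most once per configured interval.
 */
public class LogThrottle {
  private final long intervalMs;
  private long lastLogTimeMs = Long.MIN_VALUE;

  public LogThrottle(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  /** Returns true at most once per interval; callers log only on true. */
  public synchronized boolean shouldLog(long nowMs) {
    if (lastLogTimeMs == Long.MIN_VALUE || nowMs - lastLogTimeMs >= intervalMs) {
      lastLogTimeMs = nowMs;
      return true;
    }
    return false;
  }
}
```

With a 5-second interval, edit tailing every few milliseconds would produce at most one tailing-related line per 5 seconds per log site.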






[jira] [Updated] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-06 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13791:
---
Attachment: HDFS-13791-HDFS-12943.003.patch

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch, HDFS-13791-HDFS-12943.002.patch, 
> HDFS-13791-HDFS-12943.003.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailing only on the order of 
> every tens of seconds, this is fine. With the work in HDFS-13150, however, 
> edits may be tailed every few milliseconds, which can flood the logs with 
> tailing-related statements. We should throttle it to limit it to printing at 
> most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly more tricky since the log statements are distributed across a few 
> classes.






[jira] [Commented] (HDFS-13900) NameNode: Unable to trigger a roll of the active NN

2018-09-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606529#comment-16606529
 ] 

Íñigo Goiri commented on HDFS-13900:


I'm guessing the multi-standby means HDFS-6440. Linking it.

> NameNode: Unable to trigger a roll of the active NN
> ---
>
> Key: HDFS-13900
> URL: https://issues.apache.org/jira/browse/HDFS-13900
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuhongtong
>Priority: Critical
>
> I have backported multi-standby NNs to our own HDFS version and found an 
> issue with EditLog rolling.
> h2. Reproducible Steps:
> h3. 1.original state
> nn1 active
> nn2 standby
> nn3 standby
> h3. 2. stop nn1
> h3. 3. new state
> nn1 stopped
> nn2 active
> nn3 standby
> h3. 4. nn3 unable to trigger a roll of the active NN
> [2018-08-22T10:33:38.025+08:00] [WARN] 
> namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java 307) [Edit 
> log tailer] : Unable to trigger a roll of the active NN
> java.net.ConnectException: Call From  to  failed 
> on connection exception: java.net.ConnectException: Connection refused; For 
> more details see:[http://wiki.apache.org/hadoop/ConnectionRefused]
> at sun.reflect.GeneratedConstructorAccessor17.newInstance(Unknown Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:782)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:722)
> at org.apache.hadoop.ipc.Client.call(Client.java:1536)
> at org.apache.hadoop.ipc.Client.call(Client.java:1463)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:237)
> at com.sun.proxy.$Proxy16.rollEditLog(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:301)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:298)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:414)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:304)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$800(EditLogTailer.java:69)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:346)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:315)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:332)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:328)
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:521)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:485)
> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:658)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:756)
> at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:419)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1585)
> at org.apache.hadoop.ipc.Client.call(Client.java:1502)
> ... 14 more






[jira] [Commented] (HDDS-190) Improve shell error message for unrecognized option

2018-09-06 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606528#comment-16606528
 ] 

Jitendra Nath Pandey commented on HDDS-190:
---

[~Sandeep Nemuri], are you planning to provide a patch for this?

> Improve shell error message for unrecognized option
> ---
>
> Key: HDDS-190
> URL: https://issues.apache.org/jira/browse/HDDS-190
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> The error message with an unrecognized option is unfriendly. E.g.
> {code}
> $ ozone oz -badOption
> Unrecognized option: -badOptionERROR: null
> {code}






[jira] [Updated] (HDDS-164) Add unit test for HddsDatanodeService

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-164:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Add unit test for HddsDatanodeService
> -
>
> Key: HDDS-164
> URL: https://issues.apache.org/jira/browse/HDDS-164
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.3.0
>
>
> We have to add a unit test for the {{HddsDatanodeService}} class.






[jira] [Updated] (HDDS-220) Create maven artifacts with the hdds/ozone client proto files

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-220:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Create maven artifacts with the hdds/ozone client proto files
> -
>
> Key: HDDS-220
> URL: https://issues.apache.org/jira/browse/HDDS-220
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
>
> It would be great to upload all the proto files required to connect to an 
> ozone/hdds cluster to the Maven repository as separate artifacts.






[jira] [Comment Edited] (HDDS-2) Chill Mode to consider percentage of container reports

2018-09-06 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-2?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606514#comment-16606514
 ] 

Arpit Agarwal edited comment on HDDS-2 at 9/6/18 11:11 PM:
---

HDDS-351 takes care of this.


was (Author: anu):
HDDS-351

> Chill Mode to consider percentage of container reports
> --
>
> Key: HDDS-2
> URL: https://issues.apache.org/jira/browse/HDDS-2
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: Chill Mode.pdf, HDDS-02.002.patch, HDDS-02.003.patch, 
> HDDS-2.004.patch, HDFS-13500.00.patch, HDFS-13500.01.patch, 
> HDFS-13500.02.patch
>
>
> Currently, SCM comes out of chill mode as soon as one datanode is 
> registered.
> This needs to be changed to consider the percentage of container reports.
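The percentage-based exit criterion can be sketched as a pure predicate. Names here are assumptions for illustration; per the comments above, HDDS-351 carries the actual implementation:

```java
/**
 * Illustrative chill-mode exit check: instead of leaving chill mode on
 * the first datanode registration, wait until the fraction of reported
 * containers reaches a configured threshold.
 */
public class ChillModeRule {

  /** True once reported containers reach the required fraction of total. */
  public static boolean canExitChillMode(long reported, long total,
      double requiredFraction) {
    if (total == 0) {
      return true; // nothing to wait for on an empty cluster
    }
    return (double) reported / total >= requiredFraction;
  }
}
```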






[jira] [Resolved] (HDDS-2) Chill Mode to consider percentage of container reports

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-2?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-2.
-
Resolution: Duplicate

HDDS-351

> Chill Mode to consider percentage of container reports
> --
>
> Key: HDDS-2
> URL: https://issues.apache.org/jira/browse/HDDS-2
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: Chill Mode.pdf, HDDS-02.002.patch, HDDS-02.003.patch, 
> HDDS-2.004.patch, HDFS-13500.00.patch, HDFS-13500.01.patch, 
> HDFS-13500.02.patch
>
>
> Currently, SCM comes out of chill mode as soon as one datanode is 
> registered.
> This needs to be changed to consider the percentage of container reports.






[jira] [Updated] (HDDS-26) Fix Ozone Unit Test Failures

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-26?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-26:
-
Fix Version/s: (was: 0.2.1)
   0.3.0

> Fix Ozone Unit Test Failures
> 
>
> Key: HDDS-26
> URL: https://issues.apache.org/jira/browse/HDDS-26
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.3.0
>
>
> This is an umbrellas JIRA to fix unit test failures related or unrelated 
> HDDS-1.






[jira] [Updated] (HDDS-50) EventQueue: Add a priority based execution model for events in eventqueue.

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-50?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-50:
-
Fix Version/s: (was: 0.2.1)
   0.3.0

> EventQueue: Add a priority based execution model for events in eventqueue.
> --
>
> Key: HDDS-50
> URL: https://issues.apache.org/jira/browse/HDDS-50
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-50.001.patch, HDDS-50.002.patch, HDDS-50.003.patch, 
> HDDS-50.004.patch, HDDS-50.005.patch
>
>
> Currently all the events in SCM are executed with the same priority. This 
> jira will add a priority based execution model where the "niceness" value of 
> an event will determine the priority of the execution of the event.
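The niceness-based ordering can be sketched with a standard priority queue; this is illustrative only, not the actual EventQueue API:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/**
 * Sketch of niceness-ordered event dispatch: lower niceness values are
 * executed first. Class and field names are assumptions.
 */
public class NicedEvent {
  final int niceness;
  final String name;

  NicedEvent(int niceness, String name) {
    this.niceness = niceness;
    this.name = name;
  }

  /** A queue that hands out events in ascending niceness order. */
  public static PriorityQueue<NicedEvent> newQueue() {
    return new PriorityQueue<>(Comparator.comparingInt(e -> e.niceness));
  }
}
```

An executor draining such a queue would naturally run urgent (low-niceness) events ahead of background ones, even when both arrive interleaved.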






[jira] [Updated] (HDDS-64) OzoneException needs to become an IOException

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-64?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-64:
-
Fix Version/s: (was: 0.2.1)
   0.3.0

>  OzoneException needs to become an IOException
> --
>
> Key: HDDS-64
> URL: https://issues.apache.org/jira/browse/HDDS-64
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Anu Engineer
>Assignee: Mukul Kumar Singh
>Priority: Critical
> Fix For: 0.3.0
>
>
> From Review Comments from [~ste...@apache.org]:
> For now, the biggest issue I have is that OzoneException needs to become an 
> IOE, so simplifying exception handling all round, preserving information, not 
> losing stack traces, and generally leading to happy support teams as well as 
> developers. Changing the base class isn't itself traumatic, but it will 
> implicate the client code as there's almost no longer any need to catch & 
> wrap things.






[jira] [Updated] (HDDS-58) Optimize number of allocated block rpc by aggregating multiple block allocation requests

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-58?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-58:
-
Fix Version/s: (was: 0.2.1)
   0.3.0

> Optimize number of allocated block rpc by aggregating multiple block 
> allocation requests
> 
>
> Key: HDDS-58
> URL: https://issues.apache.org/jira/browse/HDDS-58
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDFS-12838-HDFS-7240.001.patch, 
> HDFS-12838-HDFS-7240.002.patch, HDFS-12838-HDFS-7240.003.patch, 
> HDFS-12838-HDFS-7240.004.patch, HDFS-12838-HDFS-7240.005.patch, 
> HDFS-12838-HDFS-7240.006.patch
>
>
> Currently KeySpaceManager allocates multiple blocks by sending multiple block 
> allocation requests over the RPC. This can be optimized by aggregating multiple 
> block allocation requests into one rpc.
> {code}
>   while (requestedSize > 0) {
> long allocateSize = Math.min(scmBlockSize, requestedSize);
> AllocatedBlock allocatedBlock =
> scmBlockClient.allocateBlock(allocateSize, type, factor);
> KsmKeyLocationInfo subKeyInfo = new KsmKeyLocationInfo.Builder()
> .setContainerName(allocatedBlock.getPipeline().getContainerName())
> .setBlockID(allocatedBlock.getKey())
> .setShouldCreateContainer(allocatedBlock.getCreateContainer())
> .setIndex(idx++)
> .setLength(allocateSize)
> .setOffset(0)
> .build();
> locations.add(subKeyInfo);
> requestedSize -= allocateSize;
>   }
> {code}
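Under the aggregation idea above, the client would compute all the block sizes a key needs up front and hand them to a single batched RPC. The sketch below only shows that size computation; a batched `allocateBlocks` call is a hypothetical variant of `scmBlockClient.allocateBlock`, not the real API:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch: compute the per-block sizes a single aggregated block-allocation
 * request would have to cover, mirroring the loop in the description.
 */
public class BatchedAllocation {

  /** Splits requestedSize into scmBlockSize-sized chunks plus a remainder. */
  static List<Long> blockSizes(long scmBlockSize, long requestedSize) {
    List<Long> sizes = new ArrayList<>();
    while (requestedSize > 0) {
      long allocateSize = Math.min(scmBlockSize, requestedSize);
      sizes.add(allocateSize);
      requestedSize -= allocateSize;
    }
    return sizes;
  }

  public static void main(String[] args) {
    // A 600-unit key with 256-unit SCM blocks needs three blocks.
    System.out.println(blockSizes(256, 600)); // [256, 256, 88]
  }
}
```

With the sizes known in advance, one RPC can return all the `AllocatedBlock`s instead of one round trip per block.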






[jira] [Updated] (HDDS-59) Ozone client should update blocksize in OM for sub-block writes

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-59?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-59:
-
Fix Version/s: (was: 0.2.1)
   0.3.0

> Ozone client should update blocksize in OM for sub-block writes
> ---
>
> Key: HDDS-59
> URL: https://issues.apache.org/jira/browse/HDDS-59
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Fix For: 0.3.0
>
> Attachments: HDDS-59.001.patch
>
>
> Currently the ozone client allocates a block of the required length from SCM 
> through KSM.
> However, due to error cases or small writes, the allocated block may not be 
> completely written.
> In these cases, the client should update the KSM with the actual length of the 
> block. This will help in error cases as well as cases where the client does 
> not write the complete block to Ozone.






[jira] [Updated] (HDDS-95) Shade the hadoop-ozone/objectstore-service project

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-95?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-95:
-
Fix Version/s: (was: 0.2.1)
   0.3.0

> Shade the hadoop-ozone/objectstore-service project
> --
>
> Key: HDDS-95
> URL: https://issues.apache.org/jira/browse/HDDS-95
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Priority: Major
> Fix For: 0.3.0
>
>
> Ozone has a datanode plugin (hadoop-hdds/container-service) which is activated 
> as a datanode service plugin 
> (dfs.datanode.plugins=org.apache.hadoop.ozone.HddsDatanodeService).
> The HddsDatanodeService plugin in turn uses the hadoop-ozone/objectstore-service 
> component (configured by 
> hdds.datanode.plugins=org.apache.hadoop.ozone.web.OzoneHddsDatanodeService).
> The goal is to shade all the required classes into one jar file 
> (objectstore-service + all the dependencies). If the jar is added to the 
> classpath of any hadoop cluster (say 3.0 or 3.1), it should start 
> without any class conflicts (ozone uses hadoop trunk, where hadoop 
> common could be newer). 
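A minimal sketch of what such shading could look like with the standard maven-shade-plugin; the relocation pattern and placement below are illustrative assumptions, not the actual objectstore-service pom:

```xml
<!-- Illustrative sketch only: the real plugin configuration and relocation
     list would need to match the hadoop-ozone/objectstore-service module. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Relocate bundled dependencies so they cannot clash with the
               classes already on the host cluster's classpath. -->
          <relocation>
            <pattern>com.google</pattern>
            <shadedPattern>org.apache.hadoop.ozone.shaded.com.google</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Relocation is the key step: without it, dropping the fat jar on a Hadoop 3.0/3.1 classpath could load older and newer copies of the same dependency.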






[jira] [Updated] (HDDS-118) Introduce datanode container command dispatcher to syncronize various datanode commands

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-118:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Introduce datanode container command dispatcher to syncronize various 
> datanode commands
> ---
>
> Key: HDDS-118
> URL: https://issues.apache.org/jira/browse/HDDS-118
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.3.0
>
>
> ContainerStateMachine provides a mechanism to synchronize various container 
> command operations. However, with multiple protocol endpoints (1) Netty, 
> 2) Grpc, 3) Ratis, 4) Heartbeat), it is advisable to synchronize 
> operations across endpoints.
> This jira proposes to introduce a single command executor to which the 
> protocol endpoints enqueue commands for execution. All the 
> synchronization can therefore be enforced by this executor.
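The single-executor idea above can be sketched with a single-threaded executor service: every endpoint submits to the same queue, so commands run serially and no cross-endpoint locking is needed. Class and method names below are illustrative, not the actual dispatcher API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Sketch: one single-threaded executor serializes container commands arriving
 * from multiple endpoints (Netty, Grpc, Ratis, heartbeat). Names are
 * illustrative stand-ins for the proposed dispatcher.
 */
public class ContainerCommandExecutor {
  private final ExecutorService executor = Executors.newSingleThreadExecutor();

  /** Endpoints enqueue commands here; execution is serial and FIFO. */
  public void submit(Runnable command) {
    executor.submit(command);
  }

  /** Enqueue a command and block until it (and everything queued before it) ran. */
  public void submitAndWait(Runnable command) {
    try {
      executor.submit(command).get();
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public void shutdown() {
    executor.shutdown();
  }

  public static void main(String[] args) {
    ContainerCommandExecutor dispatcher = new ContainerCommandExecutor();
    StringBuilder log = new StringBuilder(); // only the executor thread mutates it
    dispatcher.submit(() -> log.append("netty:create;"));
    dispatcher.submit(() -> log.append("ratis:write;"));
    dispatcher.submitAndWait(() -> log.append("heartbeat:report;"));
    System.out.println(log); // netty:create;ratis:write;heartbeat:report;
    dispatcher.shutdown();
  }
}
```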






[jira] [Updated] (HDDS-165) Add unit test for OzoneHddsDatanodeService

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-165:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Add unit test for OzoneHddsDatanodeService
> --
>
> Key: HDDS-165
> URL: https://issues.apache.org/jira/browse/HDDS-165
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Nanda kumar
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.3.0
>
>
> We have to add unit-test for {{OzoneHddsDatanodeService}} class.






[jira] [Commented] (HDDS-406) Enable acceptace test of the putKey for rpc protocol

2018-09-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606510#comment-16606510
 ] 

Hudson commented on HDDS-406:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14891 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14891/])
HDDS-406. Enable acceptace test of the putKey for rpc protocol. (aengineer: rev 
c5bf43a8e8aec595d1a8133cb0656778b252de89)
* (edit) 
hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot


> Enable acceptace test of the putKey for rpc protocol
> 
>
> Key: HDDS-406
> URL: https://issues.apache.org/jira/browse/HDDS-406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.2.1
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-406.001.patch
>
>
> The current acceptance tests do not test the putKey behaviour with the RPC 
> protocol, only with the REST interface (maybe there were some issues at the 
> time the test was created).
> I would like to enable the putKey test for all ozone shell usage 
> (rpc/rest).






[jira] [Updated] (HDDS-351) Add chill mode state to SCM

2018-09-06 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-351:

Attachment: HDDS-351.12.patch

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch, 
> HDDS-351.07.patch, HDDS-351.08.patch, HDDS-351.09.patch, HDDS-351.10.patch, 
> HDDS-351.11.patch, HDDS-351.12.patch
>
>
> Add chill mode state to SCM






[jira] [Updated] (HDDS-196) PipelineManager should choose datanodes based on ContainerPlacementPolicy

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-196:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> PipelineManager should choose datanodes based on ContainerPlacementPolicy 
> --
>
> Key: HDDS-196
> URL: https://issues.apache.org/jira/browse/HDDS-196
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
> Fix For: 0.3.0
>
> Attachments: HDDS-196.001.patch
>
>
> This is somehow no longer connected after refactoring. This ticket is opened 
> to fix it.






[jira] [Updated] (HDDS-188) TestOmMetrcis should not use the deprecated WhiteBox class

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-188:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> TestOmMetrcis should not use the deprecated WhiteBox class
> --
>
> Key: HDDS-188
> URL: https://issues.apache.org/jira/browse/HDDS-188
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
>
> TestOmMetrcis should stop using {{org.apache.hadoop.test.Whitebox}}.






[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-06 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606512#comment-16606512
 ] 

Ajay Kumar commented on HDDS-351:
-

The 2 test failures are related, as the corresponding change in getReplicaMap 
was reverted in the last patch. Uploaded a new patch to address it.

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch, HDDS-351.06.patch, 
> HDDS-351.07.patch, HDDS-351.08.patch, HDDS-351.09.patch, HDDS-351.10.patch, 
> HDDS-351.11.patch
>
>
> Add chill mode state to SCM






[jira] [Updated] (HDDS-216) hadoop-hdds unit tests should use randomized ports

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-216:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> hadoop-hdds unit tests should use randomized ports
> --
>
> Key: HDDS-216
> URL: https://issues.apache.org/jira/browse/HDDS-216
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie, test
> Fix For: 0.3.0
>
> Attachments: HDDS-216.001.patch, HDDS-216.002.patch
>
>
> MiniOzoneCluster should use randomized ports by default, so individual tests 
> don't have to do anything to avoid port conflicts at runtime. e.g. 
> TestStorageContainerManagerHttpServer fails if port 9876 is in use.
> {code}
> [INFO] Running 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.084 
> s <<< FAILURE! - in 
> org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer
> [ERROR] 
> testHttpPolicy[0](org.apache.hadoop.hdds.scm.TestStorageContainerManagerHttpServer)
>   Time elapsed: 0.401 s  <<< ERROR!
> java.net.BindException: Port in use: 0.0.0.0:9876
> {code}
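The usual fix for failures like the one above is to bind to port 0 and let the OS assign a free ephemeral port, so each test run gets a conflict-free port. A minimal sketch, with illustrative names:

```java
import java.io.IOException;
import java.net.ServerSocket;

/**
 * Sketch: ask the OS for a free ephemeral port instead of hard-coding 9876.
 */
public class RandomPort {
  static int pickFreePort() {
    // try-with-resources closes the probe socket, releasing the port
    // for the test's real server to bind.
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    } catch (IOException e) {
      throw new RuntimeException("no free port available", e);
    }
  }

  public static void main(String[] args) {
    System.out.println("test server would bind to port " + pickFreePort());
  }
}
```

Note there is a small race between closing the probe socket and the test server binding the port, which is why configuring the server itself to bind port 0 is preferable where supported.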






[jira] [Updated] (HDDS-221) Create acceptance test to test ./start-all.sh for ozone/hdds

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-221:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Create acceptance test to test ./start-all.sh for ozone/hdds
> 
>
> Key: HDDS-221
> URL: https://issues.apache.org/jira/browse/HDDS-221
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
>
> We usually use the 'ozone' shell command to test our ozone/hdds cluster.
> We need to create different acceptance test compose files to test the 
> ./start-all.sh and ./hadoop-daemon.sh functionality.






[jira] [Updated] (HDDS-223) Create acceptance test for using datanode plugin

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-223:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Create acceptance test for using datanode plugin
> 
>
> Key: HDDS-223
> URL: https://issues.apache.org/jira/browse/HDDS-223
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Sandeep Nemuri
>Priority: Major
>  Labels: newbie
> Fix For: 0.3.0
>
>
> In the current docker-compose files (both in the hadoop-dist and 
> acceptance-test) we use simplified ozone clusters: there is no namenode and 
> we use standalone hdds datanode processes.
> To test ozone/hdds as a datanode plugin we need to create separate 
> acceptance tests which use hadoop:3.1 and hadoop:3.0 + the ozone hdds datanode 
> plugin artifact.






[jira] [Commented] (HDDS-233) Update ozone to latest ratis snapshot build

2018-09-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606508#comment-16606508
 ] 

Anu Engineer commented on HDDS-233:
---

[~shashikant], [~elek] Can you please confirm that we are good with the Ratis 
snapshot version for the Alpha release of Ozone? We *don't* need an official 
version for the Alpha release of Ozone; a snapshot is good enough.

> Update ozone to latest ratis snapshot build
> ---
>
> Key: HDDS-233
> URL: https://issues.apache.org/jira/browse/HDDS-233
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.2.1
>
>
> This jira proposes to update ozone to the latest ratis snapshot build. It 
> will also add config to set the append entry timeout as well as to control 
> the number of entries in the retry cache.






[jira] [Comment Edited] (HDDS-296) OMMetadataManagerLock is hold by getPendingDeletionKeys for a full table scan

2018-09-06 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606503#comment-16606503
 ] 

Arpit Agarwal edited comment on HDDS-296 at 9/6/18 11:04 PM:
-

Fixed via HDDS-355, HDDS-356, HDDS-357, HDDS-358...


was (Author: anu):
Fixed via 355,356,357,358...

> OMMetadataManagerLock is hold by getPendingDeletionKeys for a full table scan
> -
>
> Key: HDDS-296
> URL: https://issues.apache.org/jira/browse/HDDS-296
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Anu Engineer
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: local.png
>
>
> We identified the problem during freon tests on real clusters. I first saw it 
> on a kubernetes-based pseudo cluster (50 datanodes, 1 freon). After a while 
> the rate of key allocation slowed down. (See the attached image.)
> I could also reproduce the problem with a local cluster (I used the 
> hadoop-dist/target/compose/ozoneperf setup). After the first 1 million keys 
> key creation almost stops.
> With the help of [~nandakumar131] we identified that the problem is the lock in 
> the ozone manager. (We profiled the OM with VisualVM and found that the code 
> is locked for an extremely long time; we also checked the rocksdb/rpc metrics 
> from prometheus and everything else worked well.)
> [~nandakumar131] suggested using an instrumented lock in the OMMetadataManager. 
> With a custom build we identified that the problem is that the deletion 
> service holds the OMMetadataManager lock for a full range scan. For 1 million 
> keys it took about 10 seconds (with my local developer machine + ssd).
> {code}
> ozoneManager_1  | 2018-07-25 12:45:03 WARN  OMMetadataManager:143 - Lock held 
> time above threshold: lock identifier: OMMetadataManagerLock 
> lockHeldTimeMs=2648 ms. Suppressed 0 lock warnings. The stack trace is: 
> java.lang.Thread.getStackTrace(Thread.java:1559)
> ozoneManager_1  | 
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
> ozoneManager_1  | 
> org.apache.hadoop.util.InstrumentedLock.logWarning(InstrumentedLock.java:148)
> ozoneManager_1  | 
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:186)
> ozoneManager_1  | 
> org.apache.hadoop.util.InstrumentedReadLock.unlock(InstrumentedReadLock.java:78)
> ozoneManager_1  | 
> org.apache.hadoop.ozone.om.KeyManagerImpl.getPendingDeletionKeys(KeyManagerImpl.java:506)
> ozoneManager_1  | 
> org.apache.hadoop.ozone.om.KeyDeletingService$KeyDeletingTask.call(KeyDeletingService.java:98)
> ozoneManager_1  | 
> org.apache.hadoop.ozone.om.KeyDeletingService$KeyDeletingTask.call(KeyDeletingService.java:85)
> ozoneManager_1  | java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ozoneManager_1  | 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> ozoneManager_1  | java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ozoneManager_1  | 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> ozoneManager_1  | java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ozoneManager_1  | 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> ozoneManager_1  | 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> ozoneManager_1  | 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> ozoneManager_1  | 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ozoneManager_1  | java.lang.Thread.run(Thread.java:748)
> {code}
> I checked it with the DeletionService disabled and it worked well.
> The deletion service should be improved to work without long-term locking.
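One direction for avoiding the long-held lock described above is to scan the pending-deletion keys in small batches, re-acquiring the lock per batch so writers can make progress in between. A sketch, where `KeyStore` is an illustrative stand-in for the OMMetadataManager table API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch: bounded-batch scan instead of holding the lock for a full table
 * scan. KeyStore and its method are illustrative, not the real OM API.
 */
public class BoundedDeletionScan {
  public interface KeyStore {
    /** Up to 'limit' pending-deletion keys strictly after 'from' (null = start). */
    List<String> pendingKeysAfter(String from, int limit);
  }

  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  public List<String> collectPendingKeys(KeyStore store, int batchSize) {
    List<String> all = new ArrayList<>();
    String cursor = null;
    while (true) {
      List<String> batch;
      lock.readLock().lock();        // held only for one small batch
      try {
        batch = store.pendingKeysAfter(cursor, batchSize);
      } finally {
        lock.readLock().unlock();    // writers can proceed between batches
      }
      if (batch.isEmpty()) {
        return all;
      }
      all.addAll(batch);
      cursor = batch.get(batch.size() - 1);  // resume after the last seen key
    }
  }
}
```

The trade-off is that keys created between batches may be missed in one pass; for a periodic deletion service that is usually acceptable, since the next pass picks them up.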






[jira] [Updated] (HDDS-252) Eliminate the datanode ID file

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-252:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Eliminate the datanode ID file
> --
>
> Key: HDDS-252
> URL: https://issues.apache.org/jira/browse/HDDS-252
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-252.00.patch, HDDS-252.01.patch, HDDS-252.02.patch, 
> HDDS-252.03.patch, HDDS-252.04.patch, HDDS-252.05.patch, HDDS-252.06.patch, 
> HDDS-252.07.patch
>
>
> This Jira is to remove the datanodeID file. After the ContainerIO work (HDDS-48 
> branch) is merged, we have a version file in each Volume which stores the 
> datanodeUuid and some additional fields.
> Also, with the current code, if the disk containing the datanodeId path is 
> removed, that DN becomes unusable.






[jira] [Updated] (HDDS-274) Handle overreplication in ReplicationManager

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-274:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Handle overreplication in ReplicationManager
> 
>
> Key: HDDS-274
> URL: https://issues.apache.org/jira/browse/HDDS-274
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.3.0
>
>
> HDDS-199 provides the framework to handle over/under-replicated containers, 
> but it contains an implementation only for under-replicated containers.
> Over-replicated containers should be handled and the excess replicas deleted 
> from the datanodes.






[jira] [Resolved] (HDDS-296) OMMetadataManagerLock is hold by getPendingDeletionKeys for a full table scan

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-296.
---
Resolution: Implemented

Fixed via 355,356,357,358...

> OMMetadataManagerLock is hold by getPendingDeletionKeys for a full table scan
> -
>
> Key: HDDS-296
> URL: https://issues.apache.org/jira/browse/HDDS-296
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Anu Engineer
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: local.png
>
>
> We identified the problem during freon tests on real clusters. I first saw it 
> on a kubernetes-based pseudo cluster (50 datanodes, 1 freon). After a while 
> the rate of key allocation slowed down. (See the attached image.)
> I could also reproduce the problem with a local cluster (I used the 
> hadoop-dist/target/compose/ozoneperf setup). After the first 1 million keys 
> key creation almost stops.
> With the help of [~nandakumar131] we identified that the problem is the lock in 
> the ozone manager. (We profiled the OM with VisualVM and found that the code 
> is locked for an extremely long time; we also checked the rocksdb/rpc metrics 
> from prometheus and everything else worked well.)
> [~nandakumar131] suggested using an instrumented lock in the OMMetadataManager. 
> With a custom build we identified that the problem is that the deletion 
> service holds the OMMetadataManager lock for a full range scan. For 1 million 
> keys it took about 10 seconds (with my local developer machine + ssd).
> {code}
> ozoneManager_1  | 2018-07-25 12:45:03 WARN  OMMetadataManager:143 - Lock held 
> time above threshold: lock identifier: OMMetadataManagerLock 
> lockHeldTimeMs=2648 ms. Suppressed 0 lock warnings. The stack trace is: 
> java.lang.Thread.getStackTrace(Thread.java:1559)
> ozoneManager_1  | 
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
> ozoneManager_1  | 
> org.apache.hadoop.util.InstrumentedLock.logWarning(InstrumentedLock.java:148)
> ozoneManager_1  | 
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:186)
> ozoneManager_1  | 
> org.apache.hadoop.util.InstrumentedReadLock.unlock(InstrumentedReadLock.java:78)
> ozoneManager_1  | 
> org.apache.hadoop.ozone.om.KeyManagerImpl.getPendingDeletionKeys(KeyManagerImpl.java:506)
> ozoneManager_1  | 
> org.apache.hadoop.ozone.om.KeyDeletingService$KeyDeletingTask.call(KeyDeletingService.java:98)
> ozoneManager_1  | 
> org.apache.hadoop.ozone.om.KeyDeletingService$KeyDeletingTask.call(KeyDeletingService.java:85)
> ozoneManager_1  | java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ozoneManager_1  | 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> ozoneManager_1  | java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ozoneManager_1  | 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> ozoneManager_1  | java.util.concurrent.FutureTask.run(FutureTask.java:266)
> ozoneManager_1  | 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> ozoneManager_1  | 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> ozoneManager_1  | 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> ozoneManager_1  | 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ozoneManager_1  | java.lang.Thread.run(Thread.java:748)
> {code}
> I checked it with the DeletionService disabled and it worked well.
> The deletion service should be improved to work without long-term locking.






[jira] [Resolved] (HDDS-314) ozoneShell putKey command overwrites the existing key having same name

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-314.
---
Resolution: Not A Problem

> ozoneShell putKey command overwrites the existing key having same name
> --
>
> Key: HDDS-314
> URL: https://issues.apache.org/jira/browse/HDDS-314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Critical
> Fix For: 0.2.1
>
> Attachments: HDDS-314.001.patch, HDDS-314.002.patch, 
> HDDS-314.003.patch
>
>
> Steps taken:
> 1) Created a volume root-volume and a bucket root-bucket.
> 2) Ran the following command to put a key with name 'passwd':
>  
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/services -v
> 2018-08-02 09:20:17 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : 567c100888518c1163b3462993de7d47
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:18 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:18 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
>  
> {noformat}
> 3) Ran the following command to put a key with name 'passwd' again:
> {noformat}
> hadoop@08315aa4b367:~/bin$ ./ozone oz -putKey /root-volume/root-bucket/passwd 
> -file /etc/passwd -v
> 2018-08-02 09:20:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Volume Name : root-volume
> Bucket Name : root-bucket
> Key Name : passwd
> File Hash : b056233571cc80d6879212911cb8e500
> 2018-08-02 09:20:41 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-02 09:20:42 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 02, 2018 9:20:42 AM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl 
> detectProxy{noformat}
>  
> The key 'passwd' was overwritten with new content and no error was thrown 
> saying that the key is already present.
> Expectation:
> ---
> Overwriting a key with the same name should not be allowed.






[jira] [Updated] (HDDS-325) Add event watcher for delete blocks command

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-325:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> rpc call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.
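The watcher idea can be sketched as tracking in-flight deleteBlocks commands and retrying any that are not acknowledged in time. The actual patch builds on the HDDS EventWatcher framework; every name below is illustrative, not the real API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an ack watcher: commands are remembered when sent
// and forgotten when acknowledged; overdue ones are reported for resend.
public class DeleteBlocksWatcher {
    private final Map<Long, Long> pending = new ConcurrentHashMap<>(); // id -> send time

    public void commandSent(long commandId, long nowMillis) {
        pending.put(commandId, nowMillis);
    }

    /** A datanode ack (e.g. carried on a heartbeat report) completes the command. */
    public void ackReceived(long commandId) {
        pending.remove(commandId);
    }

    /** Returns ids whose ack is overdue and which should be resent. */
    public List<Long> overdue(long nowMillis, long timeoutMillis) {
        List<Long> out = new ArrayList<>();
        pending.forEach((id, sentAt) -> {
            if (nowMillis - sentAt > timeoutMillis) {
                out.add(id);
            }
        });
        return out;
    }
}
```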






[jira] [Commented] (HDDS-325) Add event watcher for delete blocks command

2018-09-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606498#comment-16606498
 ] 

Anu Engineer commented on HDDS-325:
---

[~nandakumar131], [~ljain] Please feel free to commit if you see fit. Please 
change the fix version if this can make it into Acadia.

> Add event watcher for delete blocks command
> ---
>
> Key: HDDS-325
> URL: https://issues.apache.org/jira/browse/HDDS-325
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-325.001.patch, HDDS-325.002.patch, 
> HDDS-325.003.patch, HDDS-325.004.patch, HDDS-325.005.patch, 
> HDDS-325.006.patch, HDDS-325.007.patch, HDDS-325.008.patch
>
>
> This Jira aims to add a watcher for the deleteBlocks command. It removes the 
> rpc call currently required for the datanode to send the acknowledgement for 
> deleteBlocks.






[jira] [Updated] (HDDS-323) Rename Storage Containers

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-323:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Rename Storage Containers
> -
>
> Key: HDDS-323
> URL: https://issues.apache.org/jira/browse/HDDS-323
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
> Fix For: 0.3.0
>
>
> The term container is heavily overloaded and easy to confuse with yarn/Linux 
> containers.
> I propose renaming _*containers*_ to _*bins*_. I am very much open to better 
> suggestions though.
> This also means that SCM (Storage Container Manager) gets renamed to SBM 
> (Storage Bin Manager).






[jira] [Updated] (HDDS-338) ozoneFS allows to create file key and directory key with same keyname

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-338:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> ozoneFS allows to create file key and directory key with same keyname
> -
>
> Key: HDDS-338
> URL: https://issues.apache.org/jira/browse/HDDS-338
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Nilotpal Nandi
>Priority: Major
> Fix For: 0.3.0
>
>
> steps taken :
> --
> 1. created a directory through ozoneFS interface.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -mkdir /temp1/
> 2018-08-08 13:50:26 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -ls /
> 2018-08-08 14:09:59 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> drwxrwxrwx - 0 2018-08-08 13:51 /temp1{noformat}
> 2. create a new key with name 'temp1'  at same bucket.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone oz -putKey root-volume/root-bucket/temp1 
> -file /etc/passwd
> 2018-08-08 14:10:34 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.rpc.type = GRPC (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.rpc.retryInterval = 300 
> ms (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.async.scheduler-threads = 
> 3 (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB 
> (=1048576) (default)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 
> (custom)
> 2018-08-08 14:10:35 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 
> 3000 ms (default)
> Aug 08, 2018 2:10:36 PM 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: 
> https://ozone_datanode_3.ozone_default:9858
>  at java.net.URI$Parser.fail(URI.java:2848)
>  at java.net.URI$Parser.parseHostname(URI.java:3387)
>  at java.net.URI$Parser.parseServer(URI.java:3236)
>  at java.net.URI$Parser.parseAuthority(URI.java:3155)
>  at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>  at java.net.URI$Parser.parse(URI.java:3053)
>  at java.net.URI.<init>(URI.java:673)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
>  at 
> org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$LbHelperImpl.runSerialized(ManagedChannelImpl.java:1000)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl.onAddresses(ManagedChannelImpl.java:1044)
>  at 
> org.apache.ratis.shaded.io.grpc.internal.DnsNameResolver$1.run(DnsNameResolver.java:201)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){noformat}
> Observed that there are multiple entries of 'temp1' when the ozone fs -ls 
> command is run. Also, both the entries are considered as files. The '/temp1' 
> directory is not visible anymore.
> {noformat}
> hadoop@1a1fa8a11332:~/bin$ ./ozone fs -ls /
> 2018-08-08 14:10:41 WARN NativeCodeLoader:60 - Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 2 items
> -

[jira] [Updated] (HDDS-326) Enable and disable ReplicationActivityStatus based on node status

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-326:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Enable and disable ReplicationActivityStatus based on node status
> -
>
> Key: HDDS-326
> URL: https://issues.apache.org/jira/browse/HDDS-326
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Priority: Critical
> Fix For: 0.3.0
>
>
> In HDDS-245 we introduced a ReplicationActivityStatus which stores the 
> actual state of replication: it can be enabled or disabled. Replication 
> should be enabled after leaving chill mode.
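The on/off switch described above can be sketched as a small thread-safe toggle flipped by chill-mode events. The real ReplicationActivityStatus is an event handler inside SCM; the method names below are hypothetical:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of a replication activity switch; only the
// enable/disable toggling driven by chill-mode transitions is shown.
public class ReplicationActivityStatus {
    private final AtomicBoolean enabled = new AtomicBoolean(false); // start disabled

    public boolean isReplicationEnabled() {
        return enabled.get();
    }

    /** Called when SCM leaves chill mode. */
    public void onChillModeExit() {
        enabled.set(true);
    }

    /** Called when SCM (re)enters chill mode. */
    public void onChillModeEnter() {
        enabled.set(false);
    }
}
```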






[jira] [Updated] (HDDS-340) ContainerStateMachine#readStateMachinedata should read from temporary chunk file if the data is not present as committed chunk

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-340:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> ContainerStateMachine#readStateMachinedata should read from temporary chunk 
> file if the data is not present as committed chunk
> --
>
> Key: HDDS-340
> URL: https://issues.apache.org/jira/browse/HDDS-340
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.3.0
>
>
> ContainerStateMachine#readStateMachineData currently reads data only from a 
> committed chunk. However, for the leader it might be necessary to read 
> the chunk data from the temporary chunk file.






[jira] [Updated] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-354:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}
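The stack trace above points at a read of an uninitialized usage value. A null-safe accessor can be sketched as follows; the field layout of the real VolumeInfo differs, and this only illustrates the defensive pattern the trace suggests is missing:

```java
// Hypothetical sketch of a null-safe getScmUsed; the boxed field may be
// null before the first usage scan has completed.
public class VolumeInfoSketch {
    private volatile Long scmUsed; // null until the first disk-usage scan

    public long getScmUsed() {
        Long used = scmUsed; // single read avoids a check-then-act race
        return used == null ? 0L : used;
    }

    public void setScmUsed(long used) {
        scmUsed = used;
    }
}
```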






[jira] [Commented] (HDDS-354) VolumeInfo.getScmUsed throws NPE

2018-09-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606493#comment-16606493
 ] 

Anu Engineer commented on HDDS-354:
---

Moving this to the Arches release. Please bring this back if needed.

> VolumeInfo.getScmUsed throws NPE
> 
>
> Key: HDDS-354
> URL: https://issues.apache.org/jira/browse/HDDS-354
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
>
> {code}java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at 
> org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:366)
>   at 
> org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:264)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at 
> org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at 
> org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){code}






[jira] [Updated] (HDDS-360) Use RocksDBStore and TableStore for SCM Metadata

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-360:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Use RocksDBStore and TableStore for SCM Metadata
> 
>
> Key: HDDS-360
> URL: https://issues.apache.org/jira/browse/HDDS-360
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.3.0
>
>







[jira] [Updated] (HDDS-385) Optimize pipeline creation by sending reinitialization to all the node in parallel

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-385:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Optimize pipeline creation by sending reinitialization to all the node in 
> parallel
> --
>
> Key: HDDS-385
> URL: https://issues.apache.org/jira/browse/HDDS-385
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.3.0
>
>
> Currently during pipeline creation, reinitialization is sent serially to 
> multiple nodes. This can be optimized by sending multiple reinitializations 
> in parallel.
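The serial-to-parallel change can be sketched as a CompletableFuture fan-out over the pipeline members. The datanode RPC is stubbed as an interface here; all names are hypothetical, only the fan-out/join pattern is the point:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: issue the reinitialize call to every node of the
// pipeline concurrently, then wait for all of them to complete.
public class PipelineCreator {
    interface DatanodeClient {
        void reinitialize(String nodeId);
    }

    static void reinitializeAll(List<String> nodes, DatanodeClient client) {
        ExecutorService pool = Executors.newFixedThreadPool(nodes.size());
        try {
            CompletableFuture<?>[] calls = nodes.stream()
                .map(n -> CompletableFuture.runAsync(() -> client.reinitialize(n), pool))
                .toArray(CompletableFuture[]::new);
            CompletableFuture.allOf(calls).join(); // block until every node is done
        } finally {
            pool.shutdown();
        }
    }
}
```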






[jira] [Updated] (HDDS-361) Use DBStore and TableStore for DN metadata

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-361:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Use DBStore and TableStore for DN metadata
> --
>
> Key: HDDS-361
> URL: https://issues.apache.org/jira/browse/HDDS-361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
>







[jira] [Updated] (HDDS-397) Handle deletion for keys with no blocks

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-397:
--
Fix Version/s: 0.3.0

> Handle deletion for keys with no blocks
> ---
>
> Key: HDDS-397
> URL: https://issues.apache.org/jira/browse/HDDS-397
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.2.1, 0.3.0
>
> Attachments: HDDS-397.001.patch
>
>
> Keys which do not contain blocks can be deleted directly from OzoneManager.






[jira] [Updated] (HDDS-386) Create a datanode debug cli

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-386:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Create a datanode debug cli
> ---
>
> Key: HDDS-386
> URL: https://issues.apache.org/jira/browse/HDDS-386
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.3.0
>
>
> For block deletion we need a debug cli on the datanode to know the state of 
> the containers and the number of chunks present in each container.






[jira] [Updated] (HDDS-398) Support multiple tests in freon

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-398:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> Support multiple tests in freon
> ---
>
> Key: HDDS-398
> URL: https://issues.apache.org/jira/browse/HDDS-398
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 0.3.0
>
> Attachments: HDDS-398.001.patch, HDDS-398.002.patch, 
> HDDS-398.003.patch
>
>
> Currently freon supports only one kind of test (it creates volumes/buckets and 
> generates random keys).
> To ensure the correctness of ozone we need multiple and different kinds of 
> tests (for example: testing only the ozone manager or just a datanode).
> In this patch I propose to use the picocli-based simplified command line 
> introduced by HDDS-379 to make it easier to add more freon tests.
> This patch is just about the cli cleanup; more freon tests could be added in 
> following Jiras where the progress calculation and metrics handling could also 
> be unified.






[jira] [Updated] (HDDS-399) Handle pipeline discovery on SCM restart.

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-399:
--
Priority: Blocker  (was: Major)

> Handle pipeline discovery on SCM restart.
> -
>
> Key: HDDS-399
> URL: https://issues.apache.org/jira/browse/HDDS-399
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-399.001.patch
>
>
> On SCM restart, as part of node registration, SCM should find out the list of 
> open pipelines on the node. Once all the nodes of the pipeline have reported 
> back, they should be added as active pipelines for further allocations.






[jira] [Updated] (HDDS-403) infoKey shows wrong "createdOn", "modifiedOn" metadata for key

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-403:
--
Priority: Blocker  (was: Major)

> infoKey shows wrong "createdOn", "modifiedOn" metadata for key
> --
>
> Key: HDDS-403
> URL: https://issues.apache.org/jira/browse/HDDS-403
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nilotpal Nandi
>Assignee: Dinesh Chitlangia
>Priority: Blocker
>  Labels: newbie
> Fix For: 0.2.1
>
>
> 1. ran putKey command for a file
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -putKey 
> /test-vol1/test-bucket1/file1 -file /etc/passwd -v
> 2018-09-05 10:25:11,498 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Volume Name : test-vol1
> Bucket Name : test-bucket1
> Key Name : file1
> File Hash : 8164cc3d5b05c44b73a6277661aa4645
> 2018-09-05 10:25:12,377 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-09-05 10:25:12,390 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:12,402 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-09-05 10:25:12,407 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-09-05 10:25:12,407 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-09-05 10:25:12,518 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-09-05 10:25:12,518 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:12,866 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default)
> 2018-09-05 10:25:13,644 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-09-05 10:25:13,644 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:13,645 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default)
> [root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -getKey 
> /test-vol1/test-bucket1/file1 -file getkey3
> 2018-09-05 10:25:22,020 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-09-05 10:25:22,778 INFO conf.ConfUtils: raft.rpc.type = GRPC (default)
> 2018-09-05 10:25:22,790 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:22,800 INFO conf.ConfUtils: raft.client.rpc.retryInterval = 
> 300 ms (default)
> 2018-09-05 10:25:22,804 INFO conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-09-05 10:25:22,805 INFO conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-09-05 10:25:22,890 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-09-05 10:25:22,890 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:23,250 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default)
> 2018-09-05 10:25:24,066 INFO conf.ConfUtils: raft.grpc.flow.control.window = 
> 1MB (=1048576) (default)
> 2018-09-05 10:25:24,067 INFO conf.ConfUtils: raft.grpc.message.size.max = 
> 33554432 (custom)
> 2018-09-05 10:25:24,067 INFO conf.ConfUtils: raft.client.rpc.request.timeout 
> = 3000 ms (default){noformat}
> 2. Ran infoKey on that key
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-03 bin]# ./ozone oz -infoKey 
> /test-vol1/test-bucket1/file1 -v
> 2018-09-05 10:54:42,053 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Volume Name : test-vol1
> Bucket Name : test-bucket1
> Key Name : file1
> {
>  "version" : 0,
>  "md5hash" : null,
>  "createdOn" : "Sat, 14 Dec +114522267 00:51:17 GMT",
>  "modifiedOn" : "Fri, 09 Jun +50648 04:30:12 GMT",
>  "size" : 4659,
>  "keyName" : "file1",
>  "keyLocations" : [ {
>  "containerID" : 16,
>  "localID" : 1536143112267,
>  "length" : 4659,
>  "offset" : 0
>  } ]
> }{noformat}
> "createdOn" and "modifiedOn" metadata are incorrect.
> Here is the current date:
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-03 bin]# date
> Wed Sep 5 10:54:52 UTC 2018{noformat}
> Also, the "md5hash" for the key is showing as null.
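The far-future dates are the classic symptom of a timestamp scaled twice, e.g. an epoch-millisecond value multiplied by 1000 once more before formatting. A small illustration of the failure mode next to the correct call, reusing the localID value from the infoKey output above; the formatter choice is illustrative:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Illustration only: an epoch-millisecond timestamp scaled by 1000 once
// more before formatting lands tens of thousands of years in the future.
public class TimestampDemo {
    private static final DateTimeFormatter RFC =
            DateTimeFormatter.RFC_1123_DATE_TIME.withZone(ZoneOffset.UTC);

    static String formatMillis(long epochMillis) {
        return RFC.format(Instant.ofEpochMilli(epochMillis)); // correct scaling
    }

    public static void main(String[] args) {
        long localId = 1536143112267L;               // epoch millis, 2018-09-05
        System.out.println(formatMillis(localId));   // a plausible 2018 date
        // Buggy variant: scaling the millisecond value by 1000 again
        System.out.println(Instant.ofEpochMilli(localId * 1000)); // far future
    }
}
```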






[jira] [Updated] (HDDS-407) ozone logs are written to ozone.log. instead of ozone.log

2018-09-06 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-407:
--
Fix Version/s: (was: 0.2.1)
   0.3.0

> ozone logs are written to ozone.log. instead of ozone.log
> ---
>
> Key: HDDS-407
> URL: https://issues.apache.org/jira/browse/HDDS-407
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Nilotpal Nandi
>Priority: Major
> Fix For: 0.3.0
>
>
> Please refer to the details below.
> ozone-related logs are written to ozone.log.2018-09-05 instead of ozone.log. 
> Also, please check the timestamps of the logs. The cluster was created 
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# ls -lhart 
> /root/hadoop_trunk/ozone-0.2.1-SNAPSHOT/logs/
> total 968K
> drwxr-xr-x 9 root root 4.0K Sep 5 10:04 ..
> -rw-r--r-- 1 root root 0 Sep 5 10:04 fairscheduler-statedump.log
> -rw-r--r-- 1 root root 17K Sep 5 10:05 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out.1
> -rw-r--r-- 1 root root 16K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 11K Sep 5 10:10 
> hadoop-root-om-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> -rw-r--r-- 1 root root 17K Sep 6 05:42 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.out
> -rw-r--r-- 1 root root 2.1K Sep 6 13:20 ozone.log
> -rw-r--r-- 1 root root 67K Sep 6 13:22 
> hadoop-root-datanode-ctr-e138-1518143905142-459606-01-02.hwx.site.log
> drwxr-xr-x 2 root root 4.0K Sep 6 13:31 .
> -rw-r--r-- 1 root root 811K Sep 6 13:39 ozone.log.2018-09-05
> [root@ctr-e138-1518143905142-459606-01-02 logs]# date
> Thu Sep 6 13:39:47 UTC 2018{noformat}
>  
> tail of ozone.log
> {noformat}
> [root@ctr-e138-1518143905142-459606-01-02 logs]# tail -f ozone.log
> 2018-09-06 10:51:56,616 [IPC Server handler 13 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:18,570 [IPC Server handler 9 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file1 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:52:32,256 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:11,008 [IPC Server handler 14 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:28,316 [IPC Server handler 10 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file2 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 10:53:39,509 [IPC Server handler 17 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 0file3 allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:31:02,388 [IPC Server handler 19 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE allocated in volume test-vol2 bucket 
> test-bucket2
> 2018-09-06 11:32:44,269 [IPC Server handler 12 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key 2GBFILE_1 allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:17:33,408 [IPC Server handler 16 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS allocated in volume test-vol2 
> bucket test-bucket2
> 2018-09-06 13:20:13,897 [IPC Server handler 15 on 9889] DEBUG 
> (KeyManagerImpl.java:255) - Key FILEWITHZEROS1 allocated in volume test-vol2 
> bucket test-bucket2{noformat}
>  
> tail of ozone.log.2018-09-05:
> {noformat}
> root@ctr-e138-1518143905142-459606-01-02 logs]# tail -50 
> ozone.log.2018-09-05
> 2018-09-06 13:28:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:29:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3266
> 2018-09-06 13:29:13,687 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 13:29:37,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3267
> 2018-09-06 13:29:57,866 [BlockDeletingService#8] DEBUG 
> (TopNOrderedContainerDeletionChoosingPolicy.java:79) - Stop looking for next 
> container, there is no pending deletion block contained in remaining 
> containers.
> 2018-09-06 13:30:07,816 [Datanode State Machine Thread - 0] DEBUG 
> (DatanodeStateMachine.java:145) - Executing cycle Number : 3268
> 2018-09-06 13:30:19,186 [Datanode ReportManager Thread - 0] DEBUG 
> (ContainerSet.java:191) - Starting container report iteration.
> 2018-09-06 13:30:37,816 [Datanode State Machine

[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package

2018-09-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606482#comment-16606482
 ] 

Hudson commented on HDFS-13695:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14890 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14890/])
HDFS-13695. Move logging to slf4j in HDFS package. Contributed by Ian (gifuma: 
rev eca1a4bfe952fc184fe90dde50bac9b0e5293568)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReplaceDatanodeOnFailure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSPermission.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameCache.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/RestCsrfPreventionFilterHandler.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/MetaRecoveryContext.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteReadStripedFile.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestWriteStripedFileWithFailure.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermissionSymlinks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorageRetentionManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStartup.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeReport.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TestOfflineEditsViewer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/ReadStripedFileWithDecodingHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EditLogFileInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestEnhancedByteBufferAccess.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AdminStatesBaseTest.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportRateLimiting.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/WebHdfsDtFetcher.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Checkpointer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileJournalManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HostFileManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/SimpleHttpProxyHandler.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTimeouts.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeRespectsBindHostKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeMXBean.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/nameno

[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16606481#comment-16606481
 ] 

Hadoop QA commented on HDFS-13791:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 7s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
22s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
26s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 54s{color} | {color:orange} root: The patch generated 2 new + 146 unchanged 
- 0 fixed = 148 total (was 146) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
49s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}239m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestFSNamesystemLock |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 |
| JIRA Issue | HDFS-13791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938690/HDFS-13791-HDFS-12943.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a59fd94b32bf 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/pr
