[jira] [Updated] (HDDS-268) Add SCM close container watcher

2018-09-04 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-268:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution. I've committed the patch to trunk.

> Add SCM close container watcher
> ---
>
> Key: HDDS-268
> URL: https://issues.apache.org/jira/browse/HDDS-268
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-268.00.patch, HDDS-268.01.patch, HDDS-268.02.patch, 
> HDDS-268.03.patch, HDDS-268.04.patch, HDDS-268.05.patch
>
>
> Add an event watcher for CLOSE_CONTAINER_STATUS events.
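A minimal sketch of the watcher pattern being added, with hypothetical names
(this is not the actual HDDS EventWatcher API): remember each CLOSE_CONTAINER
command, clear the entry when the matching CLOSE_CONTAINER_STATUS completion
arrives, and retry anything still pending past its deadline.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not the real HDDS EventWatcher API.
public class CloseContainerWatcher {

  // containerId -> wall-clock deadline for the pending close command.
  private final Map<Long, Long> pending = new ConcurrentHashMap<>();
  private final long timeoutMillis;

  public CloseContainerWatcher(long timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  // SCM fired a CLOSE_CONTAINER command: start watching it.
  public void onCloseCommandSent(long containerId) {
    pending.put(containerId, System.currentTimeMillis() + timeoutMillis);
  }

  // A CLOSE_CONTAINER_STATUS completion arrived: stop watching.
  public void onStatusReceived(long containerId) {
    pending.remove(containerId);
  }

  // Invoked periodically: resend any command whose deadline has passed.
  public void checkTimeouts(CommandSender sender) {
    long now = System.currentTimeMillis();
    pending.replaceAll((containerId, deadline) -> {
      if (deadline <= now) {
        sender.resendCloseCommand(containerId);
        return now + timeoutMillis;   // arm a fresh deadline
      }
      return deadline;
    });
  }

  // Hypothetical hook back into SCM's command queue.
  public interface CommandSender {
    void resendCloseCommand(long containerId);
  }
}
{code}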






[jira] [Commented] (HDFS-13882) Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603934#comment-16603934
 ] 

Hadoop QA commented on HDFS-13882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}127m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}197m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13882 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937624/HDFS-13882.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  
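For context on the change itself: it only flips a default shipped in
hdfs-default.xml, so a deployment that prefers the old behavior can still pin
the pre-patch value in hdfs-site.xml:

{code:xml}
<property>
  <name>dfs.client.block.write.locateFollowingBlock.retries</name>
  <!-- pre-patch default; the patch raises the shipped default to 10 -->
  <value>5</value>
</property>
{code}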

[jira] [Commented] (HDFS-13812) Fix the inconsistent default refresh interval on Caching documentation

2018-09-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603927#comment-16603927
 ] 

Hudson commented on HDFS-13812:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14877 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14877/])
HDFS-13812. Fix the inconsistent default refresh interval on Caching (xiao: rev 
6ccb809c2d38a45e716153ba16e135cb76167b2b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md


> Fix the inconsistent default refresh interval on Caching documentation 
> ---
>
> Key: HDFS-13812
> URL: https://issues.apache.org/jira/browse/HDFS-13812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.9.1
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Trivial
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 2.8.5, 2.7.8, 3.1.2
>
> Attachments: HDFS-13812-001.patch
>
>
> {quote}
> dfs.namenode.path.based.cache.refresh.interval.ms
> The NameNode will use this as the amount of milliseconds between subsequent 
> path cache rescans. This calculates the blocks to cache and each DataNode 
> containing a replica of the block that should cache it.
> By default, this parameter is set to 300000, which is five minutes.
> {quote}
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html]
> However, this default value was changed in [HDFS-6106] to 30 seconds. Please 
> update docs.
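For reference, the code default after HDFS-6106 is 30 seconds (30000 ms); a
site that actually wants the documented five-minute cadence would set it
explicitly in hdfs-site.xml:

{code:xml}
<property>
  <name>dfs.namenode.path.based.cache.refresh.interval.ms</name>
  <!-- 300000 ms = 5 minutes; the shipped default is 30000 (30 s) -->
  <value>300000</value>
</property>
{code}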






[jira] [Commented] (HDFS-13890) Allow Delimited PB OIV tool to print out INodeReferences

2018-09-04 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603929#comment-16603929
 ] 

Xiao Chen commented on HDFS-13890:
--

Thanks for your thoughts on this [~adam.antal].
bq. ... should rather use hdfs ls command.
This goes into the question of use cases. I think ls can do most of the things 
oiv can do; the core difference is that oiv gives someone the ability to 
analyze the image in an _offline_ fashion. Of the several oiv processors, I 
think XML is the most powerful one. Delimited may be friendlier for 
post-processing (e.g. grep / awk / perl / whatever), but as you noticed it 
doesn't support snapshots well.

One improvement we could make is to support loading the snapshot / 
snapshotdiff sections of the image. I don't know whether that is necessary, 
because 1) I'm not sure how many people care about snapshots enough to use 
delimited oiv, and 2) of the people in #1, I'm not sure what portion could 
live with just the XML processor.
In any case, because the output won't be as layered as XML, we need to make 
sure the delimited output makes sense.

bq. ...a bit complicated...
I think the credit goes to hdfs snapshot itself - be it oiv or inside NN. :)
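(For readers following along, the processors compared above are selected with
the offline image viewer's -p flag; the image and output file names below are
placeholders.)

{noformat}
# XML processor: full fidelity, including the snapshot/snapshotdiff sections
hdfs oiv -p XML -i fsimage_0000000000000000042 -o fsimage.xml

# Delimited processor: one row per inode, friendly to grep/awk/perl,
# but it does not yet print INodeReferences from snapshots
hdfs oiv -p Delimited -i fsimage_0000000000000000042 -o fsimage.txt
{noformat}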


> Allow Delimited PB OIV tool to print out INodeReferences
> 
>
> Key: HDFS-13890
> URL: https://issues.apache.org/jira/browse/HDFS-13890
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
>
> HDFS-9721 added the possibility to process PB-based FSImages containing 
> snapshots by simply ignoring them. 
> Although the XML tool can provide information about the snapshots, the user 
> may find it helpful if this is also shown in the Delimited output (in the 
> Delimited format).






[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-04 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603909#comment-16603909
 ] 

Brahma Reddy Battula commented on HDFS-13862:
-

IMO, it's better to have it for the failure case also, i.e. when {{success==false}}.
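A sketch of what that could look like in the admin handlers (class and method
names here are hypothetical):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical helper: record every dfsrouteradmin outcome, so that
// failed safemode/nameservice commands also leave a trace in the log.
public final class RouterAdminAudit {
  private static final Logger LOG =
      LoggerFactory.getLogger(RouterAdminAudit.class);

  private RouterAdminAudit() { }

  public static void logOutcome(String command, boolean success) {
    if (success) {
      LOG.info("Successfully executed '{}'", command);
    } else {
      LOG.warn("Failed to execute '{}'", command);
    }
  }
}
{code}

Each handler would then call, e.g., {{RouterAdminAudit.logOutcome("safemode
enter", success)}} on both code paths.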

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch
>
>
> Test steps:
> The commands below are not getting captured in the Router logs.
>  # The destination entry name in the add command; the log only says "Added 
> new mount point /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable






[jira] [Updated] (HDFS-13812) Fix the inconsistent default refresh interval on Caching documentation

2018-09-04 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13812:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.2
   2.7.8
   2.8.5
   3.0.4
   2.9.2
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk all the way down to branch-2.7.

> Fix the inconsistent default refresh interval on Caching documentation 
> ---
>
> Key: HDFS-13812
> URL: https://issues.apache.org/jira/browse/HDFS-13812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.9.1
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Trivial
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 2.8.5, 2.7.8, 3.1.2
>
> Attachments: HDFS-13812-001.patch
>
>
> {quote}
> dfs.namenode.path.based.cache.refresh.interval.ms
> The NameNode will use this as the amount of milliseconds between subsequent 
> path cache rescans. This calculates the blocks to cache and each DataNode 
> containing a replica of the block that should cache it.
> By default, this parameter is set to 300000, which is five minutes.
> {quote}
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html]
> However, this default value was changed in [HDFS-6106] to 30 seconds. Please 
> update docs.






[jira] [Commented] (HDFS-13812) Fix the inconsistent default refresh interval on Caching documentation

2018-09-04 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603899#comment-16603899
 ] 

Xiao Chen commented on HDFS-13812:
--

+1. Thanks [~belugabehr] for reporting and [~hgadre] for fixing the issue.

> Fix the inconsistent default refresh interval on Caching documentation 
> ---
>
> Key: HDFS-13812
> URL: https://issues.apache.org/jira/browse/HDFS-13812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.9.1
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Trivial
> Attachments: HDFS-13812-001.patch
>
>
> {quote}
> dfs.namenode.path.based.cache.refresh.interval.ms
> The NameNode will use this as the amount of milliseconds between subsequent 
> path cache rescans. This calculates the blocks to cache and each DataNode 
> containing a replica of the block that should cache it.
> By default, this parameter is set to 300000, which is five minutes.
> {quote}
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html]
> However, this default value was changed in [HDFS-6106] to 30 seconds. Please 
> update docs.






[jira] [Updated] (HDFS-13895) EC: Fix Intermittent Failure in TestDFSStripedOutputStreamWithFailureWithRandomECPolicy

2018-09-04 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13895:

Status: Patch Available  (was: Open)

> EC: Fix Intermittent Failure in 
> TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
> ---
>
> Key: HDFS-13895
> URL: https://issues.apache.org/jira/browse/HDFS-13895
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13895.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/24893/testReport/org.apache.hadoop.hdfs/TestDFSStripedOutputStreamWithFailureWithRandomECPolicy/testCloseWithExceptionsInStreamer/
> {noformat}
> java.io.IOException: Failed: the number of failed blocks = 2 > the number of 
> parity blocks = 1
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:395)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:623)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:566)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1166)
> {noformat}






[jira] [Updated] (HDFS-13895) EC: Fix Intermittent Failure in TestDFSStripedOutputStreamWithFailureWithRandomECPolicy

2018-09-04 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13895:

Attachment: HDFS-13895.patch

> EC: Fix Intermittent Failure in 
> TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
> ---
>
> Key: HDFS-13895
> URL: https://issues.apache.org/jira/browse/HDFS-13895
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13895.patch
>
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/24893/testReport/org.apache.hadoop.hdfs/TestDFSStripedOutputStreamWithFailureWithRandomECPolicy/testCloseWithExceptionsInStreamer/
> {noformat}
> java.io.IOException: Failed: the number of failed blocks = 2 > the number of 
> parity blocks = 1
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:395)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:623)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:566)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
>   at 
> org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
>   at 
> org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1166)
> {noformat}






[jira] [Updated] (HDFS-13812) Fix the inconsistent default refresh interval on Caching documentation

2018-09-04 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13812:
-
Summary: Fix the inconsistent default refresh interval on Caching 
documentation   (was: Update Docs on Caching - Default Refresh Value)

> Fix the inconsistent default refresh interval on Caching documentation 
> ---
>
> Key: HDFS-13812
> URL: https://issues.apache.org/jira/browse/HDFS-13812
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.9.1
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Trivial
> Attachments: HDFS-13812-001.patch
>
>
> {quote}
> dfs.namenode.path.based.cache.refresh.interval.ms
> The NameNode will use this as the amount of milliseconds between subsequent 
> path cache rescans. This calculates the blocks to cache and each DataNode 
> containing a replica of the block that should cache it.
> By default, this parameter is set to 300000, which is five minutes.
> {quote}
> [https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html]
> However, this default value was changed in [HDFS-6106] to 30 seconds. Please 
> update docs.






[jira] [Commented] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-04 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603891#comment-16603891
 ] 

Xiao Chen commented on HDFS-13820:
--

Thanks [~hgadre], looks good!
Do you mind updating the logs (e.g. "Ignoring cache report from ...") to use 
parameterized logging, since we use the slf4j logger? +1 pending that and the 
pre-commit fix.
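i.e., moving from string concatenation to slf4j's deferred formatting
(variable names illustrative):

{code:java}
// Before: the message string is built even when INFO is disabled.
LOG.info("Ignoring cache report from " + datanode + ": " + reason);

// After: slf4j only formats the message if the level is enabled.
LOG.info("Ignoring cache report from {}: {}", datanode, reason);
{code}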

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching runs every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.






[jira] [Commented] (HDDS-383) Ozone Client should discard preallocated blocks from closed containers

2018-09-04 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603889#comment-16603889
 ] 

Shashikant Banerjee commented on HDDS-383:
--

Thanks [~szetszwo] for the review and commit.

bq. Just a question: is it correct to always set offset to 0 in 
getLocationInfoList()?

It's not a bug, as we currently set the offset field to zero in all cases. We 
might need to change it if we support append/update semantics in Ozone.

> Ozone Client should discard preallocated blocks from closed containers
> --
>
> Key: HDDS-383
> URL: https://issues.apache.org/jira/browse/HDDS-383
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-383.00.patch, HDDS-383.01.patch, HDDS-383.02.patch, 
> HDDS-383.03.patch
>
>
> When a key write happens in the Ozone client, blocks are preallocated based 
> on the initial size given. While the write is in progress, containers can 
> get closed; if the remaining preallocated blocks belong to closed 
> containers, they can be discarded right away instead of being written to and 
> failing with an exception. This Jira aims to address that.
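A minimal sketch of the idea, under stated assumptions (the block type and the
closed-container check are hypothetical stand-ins for the Ozone client's real
types):

{code:java}
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the HDDS-383 idea: before streaming data into
// preallocated blocks, drop every block whose container is already
// closed, instead of writing to it and failing with an exception.
public final class PreallocatedBlockFilter {

  private PreallocatedBlockFilter() { }

  // Removes, in place, the blocks that live in closed containers.
  public static <B> void discardClosedContainerBlocks(
      List<B> preallocatedBlocks, Predicate<B> inClosedContainer) {
    preallocatedBlocks.removeIf(inClosedContainer);
  }
}
{code}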






[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-04 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603870#comment-16603870
 ] 

Íñigo Goiri commented on HDFS-13862:


bq. Could you please tell what negative cases you are talking about?

When {{success==false}}. Are those worth mentioning?

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch
>
>
> Test steps:
> The commands below are not getting captured in the Router logs.
>  # The destination entry name in the add command; the log only says "Added 
> new mount point /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable






[jira] [Created] (HDFS-13895) EC: Fix Intermittent Failure in TestDFSStripedOutputStreamWithFailureWithRandomECPolicy

2018-09-04 Thread Ayush Saxena (JIRA)
Ayush Saxena created HDFS-13895:
---

 Summary: EC: Fix Intermittent Failure in 
TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
 Key: HDFS-13895
 URL: https://issues.apache.org/jira/browse/HDFS-13895
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Reporter: Ayush Saxena
Assignee: Ayush Saxena


https://builds.apache.org/job/PreCommit-HDFS-Build/24893/testReport/org.apache.hadoop.hdfs/TestDFSStripedOutputStreamWithFailureWithRandomECPolicy/testCloseWithExceptionsInStreamer/

{noformat}
java.io.IOException: Failed: the number of failed blocks = 2 > the number of 
parity blocks = 1
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:395)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:623)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:566)
at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
at 
org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
at 
org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:1166)
{noformat}






[jira] [Commented] (HDFS-13862) RBF: Router logs are not capturing few of the dfsrouteradmin commands

2018-09-04 Thread Soumyapn (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603858#comment-16603858
 ] 

Soumyapn commented on HDFS-13862:
-

Thanks for the comment [~elgoiri]. Here we are logging the successful 
scenarios, like:
 # Nameservice enable/disable
 # Router safemode enter/leave/get

Could you please tell us what negative cases you are talking about? :)

> RBF: Router logs are not capturing few of the dfsrouteradmin commands
> -
>
> Key: HDFS-13862
> URL: https://issues.apache.org/jira/browse/HDFS-13862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Soumyapn
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13862-01.patch
>
>
> Test steps:
> The commands below are not getting captured in the Router logs.
>  # The destination entry name in the add command; the log only says "Added 
> new mount point /apps9 to resolver".
>  # Safemode enter|leave|get commands
>  # nameservice enable






[jira] [Commented] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603845#comment-16603845
 ] 

Hadoop QA commented on HDFS-13820:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 502 unchanged - 0 fixed = 508 total (was 502) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13820 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938377/HDFS-13820-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7c0c3a926803 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6e4c731 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24959/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24959/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| 

[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603839#comment-16603839
 ] 

Hadoop QA commented on HDDS-351:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  2s{color} | {color:orange} root: The patch generated 2 new + 15 unchanged - 
0 fixed = 17 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 10s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rest.TestOzoneRestClient |
|   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
|   | 

[jira] [Commented] (HDFS-13695) Move logging to slf4j in HDFS package

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603824#comment-16603824
 ] 

Hadoop QA commented on HDFS-13695:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 213 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
0s{color} | {color:green} hadoop-hdfs-project generated 0 new + 469 unchanged - 
110 fixed = 469 total (was 579) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 21s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
6736 unchanged - 81 fixed = 6740 total (was 6817) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
27s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}179m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13695 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938370/HDFS-13695.v11.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 42c0b880c435 

[jira] [Commented] (HDFS-13882) Change dfs.client.block.write.locateFollowingBlock.retries default from 5 to 10

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603816#comment-16603816
 ] 

Hadoop QA commented on HDFS-13882:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
|
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13882 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937624/HDFS-13882.001.patch |
| Optional Tests |  

[jira] [Reopened] (HDFS-7033) dfs.web.authentication.filter should be documented

2018-09-04 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reopened HDFS-7033:
--
  Assignee: Ajay Kumar  (was: Srikanth Upputuri)

> dfs.web.authentication.filter should be documented
> --
>
> Key: HDFS-7033
> URL: https://issues.apache.org/jira/browse/HDFS-7033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, security
>Affects Versions: 2.4.0
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-7033.00.patch
>
>
> HDFS-5716 added dfs.web.authentication.filter but this doesn't appear to be 
> documented anywhere.
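For reference while the docs are pending: the property names the servlet
filter class used to authenticate WebHDFS requests. A hypothetical
hdfs-site.xml entry (the value shown is believed to be the shipped default
introduced by HDFS-5716):

{code:xml}
<property>
  <name>dfs.web.authentication.filter</name>
  <!-- believed shipped default; any servlet Filter class can be plugged in -->
  <value>org.apache.hadoop.hdfs.web.AuthFilter</value>
</property>
{code}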






[jira] [Updated] (HDFS-7033) dfs.web.authentication.filter should be documented

2018-09-04 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-7033:
-
Status: Patch Available  (was: Reopened)

> dfs.web.authentication.filter should be documented
> --
>
> Key: HDFS-7033
> URL: https://issues.apache.org/jira/browse/HDFS-7033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, security
>Affects Versions: 2.4.0
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-7033.00.patch
>
>
> HDFS-5716 added dfs.web.authentication.filter but this doesn't appear to be 
> documented anywhere.






[jira] [Updated] (HDFS-7033) dfs.web.authentication.filter should be documented

2018-09-04 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-7033:
-
Attachment: HDFS-7033.00.patch

> dfs.web.authentication.filter should be documented
> --
>
> Key: HDFS-7033
> URL: https://issues.apache.org/jira/browse/HDFS-7033
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, security
>Affects Versions: 2.4.0
>Reporter: Allen Wittenauer
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDFS-7033.00.patch
>
>
> HDFS-5716 added dfs.web.authentication.filter but this doesn't appear to be 
> documented anywhere.






[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603804#comment-16603804
 ] 

Hadoop QA commented on HDDS-358:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938384/HDDS-358.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 227a015f58cc 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6883fe8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/965/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/common 

[jira] [Commented] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603795#comment-16603795
 ] 

Hadoop QA commented on HDFS-13744:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13744 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938366/HDFS-13744.03.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 402aec829876 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9964e33 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24956/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24956/testReport/ |
| Max. process+thread count | 2881 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24956/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDDS-383) Ozone Client should discard preallocated blocks from closed containers

2018-09-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603779#comment-16603779
 ] 

Hudson commented on HDDS-383:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14876 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14876/])
HDDS-383. Ozone Client should discard preallocated blocks from closed 
(szetszwo: rev 6883fe860f484da2b835f9f57307b84165ed7f6f)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyInfo.java


> Ozone Client should discard preallocated blocks from closed containers
> --
>
> Key: HDDS-383
> URL: https://issues.apache.org/jira/browse/HDDS-383
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-383.00.patch, HDDS-383.01.patch, HDDS-383.02.patch, 
> HDDS-383.03.patch
>
>
> When a key write happens in the Ozone client, blocks are preallocated based 
> on the initial size given. While the write happens, containers can get 
> closed; if the remaining preallocated blocks belong to closed containers, 
> they can be discarded right away instead of trying to write them and failing 
> with an exception. This Jira aims to address this.
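
A minimal sketch of the discard logic described above. {{BlockEntry}}, the 
closed-container set, and the pruning helper are hypothetical stand-ins for 
the real ChunkGroupOutputStream internals, not the committed patch:

{code:java}
import java.util.Iterator;
import java.util.List;
import java.util.Set;

/** Hypothetical stand-in for a preallocated block reference. */
class BlockEntry {
  final long containerId;
  BlockEntry(long containerId) { this.containerId = containerId; }
}

class PreallocatedBlockPruner {
  /**
   * Drop preallocated blocks whose containers are already closed, instead
   * of attempting the write and handling the resulting failure.
   */
  static void discardClosedContainerBlocks(List<BlockEntry> preallocated,
      Set<Long> closedContainers) {
    Iterator<BlockEntry> it = preallocated.iterator();
    while (it.hasNext()) {
      if (closedContainers.contains(it.next().containerId)) {
        it.remove(); // discard right away; a fresh block gets allocated later
      }
    }
  }
}
{code}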



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-396) Remove openContainers.db from SCM

2018-09-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603778#comment-16603778
 ] 

Hudson commented on HDDS-396:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14876 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14876/])
HDDS-396. Remove openContainers.db from SCM. Contributed by Dinesh (aengineer: 
rev 6e4c73147185ae2e5529028c552c47d1edcead36)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/scm/cli/SQLCLI.java


> Remove openContainers.db from SCM
> -
>
> Key: HDDS-396
> URL: https://issues.apache.org/jira/browse/HDDS-396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-396.001.patch
>
>
> openContainers.db(OPEN_CONTAINERS_DB) is not being used anywhere in the code 
> right now. It can be removed from the code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603776#comment-16603776
 ] 

Anu Engineer commented on HDDS-351:
---

{quote}Added HDDS-402 to handle it in a separate jira.
{quote}
Shouldn't the tests be part of this patch?

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-383) Ozone Client should discard preallocated blocks from closed containers

2018-09-04 Thread Tsz Wo Nicholas Sze (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-383:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, [~shashikant]!

> Ozone Client should discard preallocated blocks from closed containers
> --
>
> Key: HDDS-383
> URL: https://issues.apache.org/jira/browse/HDDS-383
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-383.00.patch, HDDS-383.01.patch, HDDS-383.02.patch, 
> HDDS-383.03.patch
>
>
> When a key write happens in the Ozone client, blocks are preallocated based 
> on the initial size given. While the write happens, containers can get 
> closed; if the remaining preallocated blocks belong to closed containers, 
> they can be discarded right away instead of trying to write them and failing 
> with an exception. This Jira aims to address this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-268) Add SCM close container watcher

2018-09-04 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603754#comment-16603754
 ] 

Xiaoyu Yao commented on HDDS-268:
-

Thanks [~ajayydv] for the update. Patch v5 LGTM. +1, I will commit it shortly.

> Add SCM close container watcher
> ---
>
> Key: HDDS-268
> URL: https://issues.apache.org/jira/browse/HDDS-268
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-268.00.patch, HDDS-268.01.patch, HDDS-268.02.patch, 
> HDDS-268.03.patch, HDDS-268.04.patch, HDDS-268.05.patch
>
>
> Add a event watcher for CLOSE_CONTAINER_STATUS events.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-383) Ozone Client should discard preallocated blocks from closed containers

2018-09-04 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603752#comment-16603752
 ] 

Tsz Wo Nicholas Sze commented on HDDS-383:
--

Even if it is a bug, we may fix it in a separate JIRA.

Will commit the patch shortly.

> Ozone Client should discard preallocated blocks from closed containers
> --
>
> Key: HDDS-383
> URL: https://issues.apache.org/jira/browse/HDDS-383
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-383.00.patch, HDDS-383.01.patch, HDDS-383.02.patch, 
> HDDS-383.03.patch
>
>
> When a key write happens in the Ozone client, blocks are preallocated based 
> on the initial size given. While the write happens, containers can get 
> closed; if the remaining preallocated blocks belong to closed containers, 
> they can be discarded right away instead of trying to write them and failing 
> with an exception. This Jira aims to address this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-351) Add chill mode state to SCM

2018-09-04 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603744#comment-16603744
 ] 

Ajay Kumar commented on HDDS-351:
-

[~xyao] thanks for the review. Addressed most of the comments in patch v5.

{quote}SCMChillModeManager.java
Line 71-72, 146-150: this can be optimized to reduce unnecessary map-list-map 
conversion. ContainerStateManager#getContainerMap can return a map 
directly.{quote}
Currently {{getAllContainers}} is used in a few other places. If we add 
another function to return an immutable map, we don't gain much, as it will be 
converting Map -> UnmodifiableMap -> Map. Let me know if you still think we 
should do it.
{quote}
Also, can you clarify if this rule expects close container only from 
containerStateManager?{quote}
We are checking for 1 replica of every container, so it covers both closed and 
open containers.
{quote}
Line 103: seems only CONT_EXIT_RULE get to process the registration report? But 
all the other rules are invoked to validate. Can you clarify with the expected 
usage for the ChillModeExitRule interface?{quote}
The idea is to allow more such rules in the future with minimal changes in the 
ChillModeManager (e.g., waiting for a certain % of pipelines to be reported).
{quote}
TestStorageContainerManager.java
Line 535: can we add some validation on the chill mode exit state after DN 
starts?{quote}
We are already validating the SCM chill mode state before and after datanode 
registration.
{quote}
Can we add more unit tests on the exit rules, e.g., the default percentage 
rule?{quote}
Added HDDS-402 to handle it in a separate jira. 
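
For illustration, roughly the shape such a pluggable exit-rule interface could 
take. Apart from the {{ChillModeExitRule}} name mentioned in the review above, 
the types and signatures here are assumptions, not the committed API:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative shape of a pluggable chill mode exit rule. */
interface ChillModeExitRule<T> {
  void process(T report);   // feed a report (e.g. a DN registration) in
  boolean validate();       // true once the exit condition is met
}

/** Example rule: exit once a fraction of known containers has a reported replica. */
class ContainerPercentRule implements ChillModeExitRule<Long> {
  private final Map<Long, Boolean> reported = new ConcurrentHashMap<>();
  private final double threshold; // e.g. 0.99

  ContainerPercentRule(Iterable<Long> knownContainers, double threshold) {
    for (long id : knownContainers) {
      reported.put(id, false);
    }
    this.threshold = threshold;
  }

  @Override
  public void process(Long containerId) {
    reported.computeIfPresent(containerId, (k, v) -> true);
  }

  @Override
  public boolean validate() {
    long seen = reported.values().stream().filter(v -> v).count();
    return reported.isEmpty() || (double) seen / reported.size() >= threshold;
  }
}
{code}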


> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-402) add separate unit tests for SCM chill mode exit rules

2018-09-04 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-402:
---

 Summary:  add separate unit tests for SCM chill mode exit rules
 Key: HDDS-402
 URL: https://issues.apache.org/jira/browse/HDDS-402
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar


 add separate unit tests for SCM chill mode exit rules



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13894) Access HDFS through a proxy and natively

2018-09-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603735#comment-16603735
 ] 

Íñigo Goiri commented on HDFS-13894:


I can try to abstract some of this out of HDFS and HttpFS and make it more 
generic and move it to Hadoop commons.

In addition, I should add some documentation.
I can add some pointer to {{hadoop-hdfs-httpfs/src/site/markdown/index.md}}.

> Access HDFS through a proxy and natively
> 
>
> Key: HDFS-13894
> URL: https://issues.apache.org/jira/browse/HDFS-13894
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13894.000.patch
>
>
> HDFS deployments are usually behind a firewall where one can access the 
> Namenode but not the Datanodes. To mitigate this situation there are proxies 
> that catch the DN requests (e.g., HttpFS). However, if a user submits a job 
> using the HttpFS endpoint, all the workers will use such endpoint which will 
> usually be a bottleneck.
> We should create a new filesystem that supports accessing both:
> * HttpFS for submission from outside the firewall
> * HDFS from within the cluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-351) Add chill mode state to SCM

2018-09-04 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-351:

Attachment: HDDS-351.05.patch

> Add chill mode state to SCM
> ---
>
> Key: HDDS-351
> URL: https://issues.apache.org/jira/browse/HDDS-351
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-351.00.patch, HDDS-351.01.patch, HDDS-351.02.patch, 
> HDDS-351.03.patch, HDDS-351.04.patch, HDDS-351.05.patch
>
>
> Add chill mode state to SCM



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13894) Access HDFS through a proxy and natively

2018-09-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603732#comment-16603732
 ] 

Íñigo Goiri commented on HDFS-13894:


The setup we internally have is an HDFS cluster in Azure VMs where the Routers 
are exposed through a load balancer.
To access metadata we just point to the Load Balancer.
However, to access the data itself, we need to use HttpFs which uses WebHDFS to 
proxy the requests to the DNs.

In core-default.xml, we set:
{code}
  <property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.HdfsWithProxyFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.hdfs.impl</name>
    <value>org.apache.hadoop.fs.AbstractHdfsWithProxyFileSystem</value>
  </property>
  <property>
    <name>fs.hdfs.proxy.azure-cluster-fed</name>
    <value>webhdfs://loadbalancer.azure.com:/</value>
  </property>
{code}

In hdfs-site.xml, we set:
{code}
  <property>
    <name>dfs.nameservices</name>
    <value>azure-cluster-fed</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.azure-cluster-fed</name>
    <value>routerinternaladdress:</value>
  </property>
{code}

Then, the user sets the environment variable {{HDFS_USE_PROXY}} to {{true}} in 
the client machine.
The {{HdfsWithProxyFileSystem}} will use the proxy address in the client 
machine and the native HDFS address when running inside of the firewall.
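
A rough sketch of the switching idea under the configuration shown above. The 
{{HDFS_USE_PROXY}} variable and the {{fs.hdfs.proxy.<nameservice>}} key come 
from this comment; the helper class itself is hypothetical and not the 
attached patch:

{code:java}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

/** Sketch: pick the proxy (webhdfs) or native (hdfs) endpoint per client. */
class ProxySwitchingClient {
  static FileSystem open(URI defaultFs, Configuration conf) throws IOException {
    boolean useProxy = Boolean.parseBoolean(
        System.getenv().getOrDefault("HDFS_USE_PROXY", "false"));
    if (useProxy) {
      // fs.hdfs.proxy.<nameservice> maps the nameservice to its proxy endpoint
      String proxy = conf.get("fs.hdfs.proxy." + defaultFs.getHost());
      if (proxy != null) {
        return FileSystem.get(URI.create(proxy), conf);
      }
    }
    // inside the firewall: talk to the Routers/NameNode directly
    return FileSystem.get(defaultFs, conf);
  }
}
{code}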

> Access HDFS through a proxy and natively
> 
>
> Key: HDFS-13894
> URL: https://issues.apache.org/jira/browse/HDFS-13894
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13894.000.patch
>
>
> HDFS deployments are usually behind a firewall where one can access the 
> Namenode but not the Datanodes. To mitigate this situation there are proxies 
> that catch the DN requests (e.g., HttpFS). However, if a user submits a job 
> using the HttpFS endpoint, all the workers will use such endpoint which will 
> usually be a bottleneck.
> We should create a new filesystem that supports accessing both:
> * HttpFS for submission from outside the firewall
> * HDFS from within the cluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-358:
--
Attachment: HDDS-358.003.patch

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch, HDDS-358.002.patch, 
> HDDS-358.003.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603731#comment-16603731
 ] 

Anu Engineer commented on HDDS-358:
---

Rebased again. Patch v3.

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch, HDDS-358.002.patch, 
> HDDS-358.003.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603726#comment-16603726
 ] 

Hadoop QA commented on HDFS-13791:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
53s{color} | {color:green} HDFS-12943 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  9m 
18s{color} | {color:red} root in HDFS-12943 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
51s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 34s{color} 
| {color:red} root generated 178 new + 1276 unchanged - 0 fixed = 1454 total 
(was 1276) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 57s{color} | {color:orange} root: The patch generated 4 new + 146 unchanged 
- 0 fixed = 150 total (was 146) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
53s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:9b55946 |
| JIRA Issue | HDFS-13791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938299/HDFS-13791-HDFS-12943.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9ad85d2090f7 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Updated] (HDDS-396) Remove openContainers.db from SCM

2018-09-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-396:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove openContainers.db from SCM
> -
>
> Key: HDDS-396
> URL: https://issues.apache.org/jira/browse/HDDS-396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-396.001.patch
>
>
> openContainers.db(OPEN_CONTAINERS_DB) is not being used anywhere in the code 
> right now. It can be removed from the code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-396) Remove openContainers.db from SCM

2018-09-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603725#comment-16603725
 ] 

Anu Engineer commented on HDDS-396:
---

[~dineshchitlangia] Thanks for the contribution. [~msingh] Thanks for filing 
this issue.

> Remove openContainers.db from SCM
> -
>
> Key: HDDS-396
> URL: https://issues.apache.org/jira/browse/HDDS-396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-396.001.patch
>
>
> openContainers.db(OPEN_CONTAINERS_DB) is not being used anywhere in the code 
> right now. It can be removed from the code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603721#comment-16603721
 ] 

Hadoop QA commented on HDDS-358:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDDS-358 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938375/HDDS-358.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/964/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch, HDDS-358.002.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12943) Consistent Reads from Standby Node

2018-09-04 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603722#comment-16603722
 ] 

Konstantin Shvachko commented on HDFS-12943:


Attached Test Plan document.

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, 
> TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12943) Consistent Reads from Standby Node

2018-09-04 Thread Konstantin Shvachko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-12943:
---
Attachment: TestPlan-ConsistentReadsFromStandbyNode.pdf

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, 
> TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13894) Access HDFS through a proxy and natively

2018-09-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13894:
---
Attachment: HDFS-13894.000.patch

> Access HDFS through a proxy and natively
> 
>
> Key: HDFS-13894
> URL: https://issues.apache.org/jira/browse/HDFS-13894
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13894.000.patch
>
>
> HDFS deployments are usually behind a firewall where one can access the 
> Namenode but not the Datanodes. To mitigate this situation there are proxies 
> that catch the DN requests (e.g., HttpFS). However, if a user submits a job 
> using the HttpFS endpoint, all the workers will use such endpoint which will 
> usually be a bottleneck.
> We should create a new filesystem that supports accessing both:
> * HttpFS for submission from outside the firewall
> * HDFS from within the cluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13894) Access HDFS through a proxy and natively

2018-09-04 Thread JIRA
Íñigo Goiri created HDFS-13894:
--

 Summary: Access HDFS through a proxy and natively
 Key: HDFS-13894
 URL: https://issues.apache.org/jira/browse/HDFS-13894
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


HDFS deployments are usually behind a firewall where one can access the 
Namenode but not the Datanodes. To mitigate this situation there are proxies 
that catch the DN requests (e.g., HttpFS). However, if a user submits a job 
using the HttpFS endpoint, all the workers will use such endpoint which will 
usually be a bottleneck.

We should create a new filesystem that supports accessing both:
* HttpFS for submission from outside the firewall
* HDFS from within the cluster



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603708#comment-16603708
 ] 

Hudson commented on HDDS-369:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14875 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14875/])
HDDS-369. Remove the containers of a dead node from the container state 
(hanishakoneru: rev 9964e33e8df1a6574d106c22fcaf339db8d48750)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/states/Node2ContainerMap.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java


> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case a node is dead, we need to update the container replica information 
> of the containerStateMap for all the containers from that specific node.
> By removing the replica information we can detect the under-replicated 
> state and start the replication.
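
A hedged sketch of the replica cleanup this describes. The two maps below are 
stand-ins for Node2ContainerMap and the container state map, and the method 
names are illustrative only:

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

/** Sketch of the dead-node cleanup; both maps are stand-ins for SCM state. */
class DeadNodeReplicaCleaner {
  private final Map<UUID, Set<Long>> node2Containers;   // node -> containers
  private final Map<Long, Set<UUID>> containerReplicas; // container -> nodes

  DeadNodeReplicaCleaner(Map<UUID, Set<Long>> n2c, Map<Long, Set<UUID>> c2r) {
    this.node2Containers = n2c;
    this.containerReplicas = c2r;
  }

  /** Remove the dead node's replicas so under-replication becomes visible. */
  void onDeadNode(UUID deadNode) {
    for (long containerId
        : node2Containers.getOrDefault(deadNode, Collections.emptySet())) {
      Set<UUID> replicas = containerReplicas.get(containerId);
      if (replicas != null) {
        // once replicas.size() drops below the replication factor,
        // the replication manager can schedule a new copy
        replicas.remove(deadNode);
      }
    }
    node2Containers.remove(deadNode);
  }
}
{code}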



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-5376) Incremental rescanning of cached blocks and cache entries

2018-09-04 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre reassigned HDFS-5376:
--

Assignee: Hrishikesh Gadre  (was: Andrew Wang)

> Incremental rescanning of cached blocks and cache entries
> -
>
> Key: HDFS-5376
> URL: https://issues.apache.org/jira/browse/HDFS-5376
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: namenode
>Affects Versions: HDFS-4949
>Reporter: Andrew Wang
>Assignee: Hrishikesh Gadre
>Priority: Major
>
> {{CacheReplicationMonitor#rescan}} is invoked whenever a new cache entry is 
> added or removed. This involves a complete rescan of all cache entries and 
> cached blocks, which is potentially expensive. It'd be better to do an 
> incremental scan instead. This would also let us incrementally re-scan on 
> namespace changes like rename and create for better caching latency.
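
One way the incremental rescan could be organized, as a sketch only; the 
dirty-set approach and all names here are assumptions, not the design of 
record for this wish:

{code:java}
import java.util.HashSet;
import java.util.Set;

/** Sketch: track dirty cache directives and rescan only those. */
class IncrementalRescanner {
  private final Set<Long> dirtyDirectives = new HashSet<>();

  /** Called when a directive is added or removed, or a covered path changes. */
  synchronized void markDirty(long directiveId) {
    dirtyDirectives.add(directiveId);
  }

  /** Rescan only what changed instead of every entry and every cached block. */
  synchronized void rescan() {
    for (long id : dirtyDirectives) {
      rescanDirective(id);
    }
    dirtyDirectives.clear();
  }

  private void rescanDirective(long id) {
    // in real code: recompute cached blocks and locations for this directive
  }
}
{code}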



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-09-04 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603705#comment-16603705
 ] 

Siyao Meng commented on HDFS-13838:
---

rev 003 jenkins: All failed tests passed locally.

Pending code review [~jojochuang], thanks!

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch, 
> HDFS-13838.003.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
> Update:
> A further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. This will be 
> addressed in a separate jira HDFS-13886.
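
For reference, a minimal sketch of the kind of check 
{{JsonUtilClient.toFileStatus()}} is missing. It assumes the JSON field is 
named {{snapshotEnabled}} (as added by HDFS-12455) and uses a plain map rather 
than the real parser:

{code:java}
import java.util.Map;

/** Sketch of the check JsonUtilClient.toFileStatus() needs to add. */
class SnapshotFlagParser {
  static boolean isSnapshotEnabled(Map<String, Object> fileStatusJson) {
    // WebHDFS omits the field for ordinary directories, so default to false
    Object flag = fileStatusJson.get("snapshotEnabled");
    return flag instanceof Boolean && (Boolean) flag;
  }
}
{code}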



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-04 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603706#comment-16603706
 ] 

Hrishikesh Gadre commented on HDFS-13820:
-

Note - I have reimplemented the functionality removed as part of HDFS-5651. It 
is not a clean revert of HDFS-5651, but only the changes required to disable 
the centralized caching feature altogether. I am also working on HDFS-5376, 
which will implement incremental scanning of cache blocks and locations.

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.
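
A small sketch of the requested start/stop behaviour; the counter-based toggle 
and the method names are illustrative, not the attached patch itself:

{code:java}
/** Sketch of the requested behaviour: run the monitor only while paths exist. */
class MonitorToggle {
  private int cachedPaths = 0;
  private boolean monitorRunning = false;

  synchronized void pathAdded() {
    if (++cachedPaths == 1 && !monitorRunning) {
      monitorRunning = true;  // first cached path: start CacheReplicationMonitor
      startMonitor();
    }
  }

  synchronized void pathRemoved() {
    if (--cachedPaths == 0 && monitorRunning) {
      monitorRunning = false; // last cached path gone: stop the 30s rescan loop
      stopMonitor();
    }
  }

  private void startMonitor() { /* start the monitor thread in real code */ }
  private void stopMonitor()  { /* interrupt and join it in real code */ }
}
{code}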



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-04 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13820 stopped by Hrishikesh Gadre.
---
> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-04 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HDFS-13820:

Attachment: HDFS-13820-001.patch

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13820) Disable CacheReplicationMonitor If No Cached Paths Exist

2018-09-04 Thread Hrishikesh Gadre (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated HDFS-13820:

Status: Patch Available  (was: Open)

> Disable CacheReplicationMonitor If No Cached Paths Exist
> 
>
> Key: HDFS-13820
> URL: https://issues.apache.org/jira/browse/HDFS-13820
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching
>Affects Versions: 2.10.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: Hrishikesh Gadre
>Priority: Minor
> Attachments: HDFS-13820-001.patch
>
>
> Starting with [HDFS-6106], the loop for checking caching is set to run every 30 
> seconds.
> Please implement a way to disable the {{CacheReplicationMonitor}} class if 
> there are no paths specified.  Adding the first cached path to the NameNode 
> should kick off the {{CacheReplicationMonitor}} and when the last one is 
> deleted, the {{CacheReplicationMonitor}} should be disabled again.
> Alternatively, provide a configuration flag to turn this feature off 
> altogether.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603697#comment-16603697
 ] 

Anu Engineer commented on HDDS-358:
---

[~nandakumar131], [~ljain] Thanks for the comments. Patch v2 is rebased, and I 
have added tests and fixed all the issues.

Please review when you get a chance.

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch, HDDS-358.002.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-358:
--
Status: Patch Available  (was: Open)

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch, HDDS-358.002.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-358) Use DBStore and TableStore for DeleteKeyService

2018-09-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-358:
--
Attachment: HDDS-358.002.patch

> Use DBStore and TableStore for DeleteKeyService
> ---
>
> Key: HDDS-358
> URL: https://issues.apache.org/jira/browse/HDDS-358
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-358.001.patch, HDDS-358.002.patch
>
>
> DeleteKeysService and OpenKeyDeleteService.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603679#comment-16603679
 ] 

Hadoop QA commented on HDFS-13838:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.TestMaintenanceState |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13838 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938295/HDFS-13838.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3a00c6829a06 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-13860) Space character in the path is shown as "+" while creating dirs in WebHDFS

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603671#comment-16603671
 ] 

Hadoop QA commented on HDFS-13860:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 144 unchanged - 0 fixed = 145 total (was 144) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}130m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13860 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12937879/HDFS-13860.01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1fbd6afc2c63 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6bbd249 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24951/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24951/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
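The encoding distinction behind this issue's title can be illustrated with plain JDK
calls (a minimal sketch; the host name is a placeholder): "+" means a space only
under application/x-www-form-urlencoded rules, while in a URI *path* a space must be
"%20" and "+" is a literal plus sign. Decoding path segments with form rules is what
turns a "+" in a directory name into a space.

{code:java}
import java.net.URI;
import java.net.URLDecoder;

public class PlusVsPercent20 {
  public static void main(String[] args) throws Exception {
    // Form decoding: "+" becomes a space.
    System.out.println(URLDecoder.decode("a+b", "UTF-8"));                 // a b
    // URI path decoding: "%20" is a space, "+" stays a literal plus.
    System.out.println(new URI("http://host/webhdfs/v1/a%20b").getPath()); // /a b
    System.out.println(new URI("http://host/webhdfs/v1/a+b").getPath());   // /a+b
  }
}
{code}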

[jira] [Updated] (HDFS-13868) WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but "oldsnapshotname" is not.

2018-09-04 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh updated HDFS-13868:

Attachment: HDFS-13868.002.patch

> WebHDFS: GETSNAPSHOTDIFF API NPE when param "snapshotname" is given but 
> "oldsnapshotname" is not.
> -
>
> Key: HDFS-13868
> URL: https://issues.apache.org/jira/browse/HDFS-13868
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Pranay Singh
>Priority: Major
> Attachments: HDFS-13868.001.patch, HDFS-13868.002.patch
>
>
> HDFS-13052 implements GETSNAPSHOTDIFF for WebHDFS.
>  
> Proof:
> {code:java}
> # Bash
> # Prerequisite: You will need to create the directory "/snapshot", 
> allowSnapshot() on it, and create a snapshot named "snap3" for it to reach 
> NPE.
> $ curl 
> "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotnam=snap2&snapshotname=snap3"
> # Note that I intentionally typed the wrong parameter name for 
> "oldsnapshotname" above to cause NPE.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&oldsnapshotname=&snapshotname=snap3"
> # Empty string for oldsnapshotname
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}}
> # OR
> $ curl 
> "http://<host>:<port>/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF&user.name=hdfs&snapshotname=snap3"
> # Missing param oldsnapshotname, essentially the same as the first case.
> {"RemoteException":{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException","message":null}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-396) Remove openContainers.db from SCM

2018-09-04 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603657#comment-16603657
 ] 

Hanisha Koneru edited comment on HDDS-396 at 9/4/18 10:02 PM:
--

Sorry resolved it by mistake.
Re-opened and submitted the patch.


was (Author: hanishakoneru):
Sorry resolved it by mistake.

> Remove openContainers.db from SCM
> -
>
> Key: HDDS-396
> URL: https://issues.apache.org/jira/browse/HDDS-396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-396.001.patch
>
>
> openContainers.db(OPEN_CONTAINERS_DB) is not being used anywhere in the code 
> right now. It can be removed from the code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13695) Move logging to slf4j in HDFS package

2018-09-04 Thread Ian Pickering (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Pickering updated HDFS-13695:
-
Attachment: HDFS-13695.v11.patch

> Move logging to slf4j in HDFS package
> -
>
> Key: HDFS-13695
> URL: https://issues.apache.org/jira/browse/HDFS-13695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Ian Pickering
>Priority: Major
> Attachments: HDFS-13695.v1.patch, HDFS-13695.v10.patch, 
> HDFS-13695.v11.patch, HDFS-13695.v2.patch, HDFS-13695.v3.patch, 
> HDFS-13695.v4.patch, HDFS-13695.v5.patch, HDFS-13695.v6.patch, 
> HDFS-13695.v7.patch, HDFS-13695.v8.patch, HDFS-13695.v9.patch
>
>
> Move logging to slf4j in HDFS package
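The mechanical shape of such a migration, for reference (a generic sketch, not a
line from the patch): swap the commons-logging factory for the slf4j one and prefer
parameterized messages over string concatenation.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Example {
  // Before (commons-logging):
  //   private static final Log LOG = LogFactory.getLog(Example.class);
  // After (slf4j):
  private static final Logger LOG = LoggerFactory.getLogger(Example.class);

  void report(String path, long bytes) {
    // Placeholders avoid building the message when the level is disabled.
    LOG.info("wrote {} bytes to {}", bytes, path);
  }
}
{code}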



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-396) Remove openContainers.db from SCM

2018-09-04 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-396:

Status: Patch Available  (was: Reopened)

> Remove openContainers.db from SCM
> -
>
> Key: HDDS-396
> URL: https://issues.apache.org/jira/browse/HDDS-396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-396.001.patch
>
>
> openContainers.db(OPEN_CONTAINERS_DB) is not being used anywhere in the code 
> right now. It can be removed from the code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-396) Remove openContainers.db from SCM

2018-09-04 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reopened HDDS-396:
-

Sorry resolved it by mistake.

> Remove openContainers.db from SCM
> -
>
> Key: HDDS-396
> URL: https://issues.apache.org/jira/browse/HDDS-396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-396.001.patch
>
>
> openContainers.db(OPEN_CONTAINERS_DB) is not being used anywhere in the code 
> right now. It can be removed from the code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-04 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-369:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case a node is dead, we need to update the container replica 
> information in the containerStateMap for all the containers from that 
> specific node.
> By removing the replica information we can detect the under-replicated 
> state and start replication.
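The shape of the change, as a sketch (class and method names are assumptions drawn
from the description above, not the committed patch):

{code:java}
// On a dead-node event, drop that node's replica entries so the affected
// containers show up as under-replicated and replication can start.
void onDeadNode(DatanodeDetails deadNode) {
  for (ContainerID id : node2ContainerMap.getContainers(deadNode.getUuid())) {
    containerStateMap.removeContainerReplica(id, deadNode);
  }
}
{code}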



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603655#comment-16603655
 ] 

Íñigo Goiri commented on HDFS-13713:


bq. Maybe, but would pointing to the S3 implementation preempt the 
possibilities of having a wasb and/or adl implementation?

Ideally we would extend the documentation once we get the new implementations 
but I see the risk there.
It might be safer to just point to the FileSystem one and add the others later.

bq. There is no example yet of a use since this is a primitive that will be 
used by forthcoming work (HDFS-12090). But it has wide applicability (e.g. 
DistCP) so it was submitted to trunk and not on the HDFS-12090 branch.

I think it would be valuable to have some example and I think DistCP would be 
the perfect target for this.
Can we open a JIRA for that?

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch, 
> multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file
> * add FS model with the notion of a function mapping (uploadID -> Upload), 
> the operations (list, commit, abort). The [TLA+ 
> mode|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]l
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests of all FSs which support the new API.
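A sketch of the model being specified above (all names here are illustrative, not
the final Hadoop API): an uploadID identifies an in-progress Upload to which parts
are added; nothing is visible at the destination until commit, and abort discards
the whole upload.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;

// Marker types standing in for the opaque handles the spec talks about.
interface PartHandle {}
interface PathHandle {}

interface Upload {
  /** Upload one part; the returned handle is needed at commit time. */
  PartHandle putPart(int partNumber, InputStream data, long length) throws IOException;

  /** Commit all parts; the destination only becomes visible at this point. */
  PathHandle complete(Map<Integer, PartHandle> parts) throws IOException;

  /** Discard the upload and any parts already written. */
  void abort() throws IOException;
}
{code}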



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-04 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603653#comment-16603653
 ] 

Hanisha Koneru commented on HDDS-369:
-

Committed to trunk.
Thanks [~elek] for working on this and [~ajayydv] and [~nandakumar131] for 
reviewing.

> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case a node is dead, we need to update the container replica 
> information in the containerStateMap for all the containers from that 
> specific node.
> By removing the replica information we can detect the under-replicated 
> state and start replication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-396) Remove openContainers.db from SCM

2018-09-04 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-396:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove openContainers.db from SCM
> -
>
> Key: HDDS-396
> URL: https://issues.apache.org/jira/browse/HDDS-396
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
> Attachments: HDDS-396.001.patch
>
>
> openContainers.db(OPEN_CONTAINERS_DB) is not being used anywhere in the code 
> right now. It can be removed from the code as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases

2018-09-04 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh reassigned HDFS-6255:
--

Assignee: Pranay Singh

> fuse_dfs will not adhere to ACL permissions in some cases
> -
>
> Key: HDFS-6255
> URL: https://issues.apache.org/jira/browse/HDFS-6255
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 2.4.0, 3.0.0-alpha1
>Reporter: Stephen Chu
>Assignee: Pranay Singh
>Priority: Major
>
> As hdfs user, I created a directory /tmp/acl_dir/ and set permissions to 700. 
> Then I set a new acl group:jenkins:rwx on /tmp/acl_dir.
> {code}
> jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -getfacl /tmp/acl_dir
> # file: /tmp/acl_dir
> # owner: hdfs
> # group: supergroup
> user::rwx
> group::---
> group:jenkins:rwx
> mask::rwx
> other::---
> {code}
> Through the FsShell, the jenkins user can list /tmp/acl_dir as well as create 
> a file and directory inside.
> {code}
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -touchz /tmp/acl_dir/testfile1
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -mkdir /tmp/acl_dir/testdir1
> [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -ls /tmp/acl_dir/
> Found 2 items
> drwxr-xr-x   - jenkins supergroup  0 2014-04-17 19:11 
> /tmp/acl_dir/testdir1
> -rw-r--r--   1 jenkins supergroup  0 2014-04-17 19:11 
> /tmp/acl_dir/testfile1
> [jenkins@hdfs-vanilla-1 ~]$ 
> {code}
> However, as the same jenkins user, when I try to cd into /tmp/acl_dir using a 
> fuse_dfs mount, I get permission denied. Same permission denied when I try to 
> create or list files.
> {code}
> [jenkins@hdfs-vanilla-1 tmp]$ ls -l
> total 16
> drwxrwx--- 4 hdfs    nobody 4096 Apr 17 19:11 acl_dir
> drwx------ 2 hdfs    nobody 4096 Apr 17 18:30 acl_dir_2
> drwxr-xr-x 3 mapred  nobody 4096 Mar 11 03:53 mapred
> drwxr-xr-x 4 jenkins nobody 4096 Apr 17 07:25 testcli
> -rwx------ 1 hdfs    nobody    0 Apr  7 17:18 tf1
> [jenkins@hdfs-vanilla-1 tmp]$ cd acl_dir
> bash: cd: acl_dir: Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ touch acl_dir/testfile2
> touch: cannot touch `acl_dir/testfile2': Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ mkdir acl_dir/testdir2
> mkdir: cannot create directory `acl_dir/testdir2': Permission denied
> [jenkins@hdfs-vanilla-1 tmp]$ 
> {code}
> The fuse_dfs debug output doesn't show any error for the above operations:
> {code}
> unique: 18, opcode: OPENDIR (27), nodeid: 2, insize: 48
>unique: 18, success, outsize: 32
> unique: 19, opcode: READDIR (28), nodeid: 2, insize: 80
> readdir[0] from 0
>unique: 19, success, outsize: 312
> unique: 20, opcode: GETATTR (3), nodeid: 2, insize: 56
> getattr /tmp
>unique: 20, success, outsize: 120
> unique: 21, opcode: READDIR (28), nodeid: 2, insize: 80
>unique: 21, success, outsize: 16
> unique: 22, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
>unique: 22, success, outsize: 16
> unique: 23, opcode: GETATTR (3), nodeid: 2, insize: 56
> getattr /tmp
>unique: 23, success, outsize: 120
> unique: 24, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 24, success, outsize: 120
> unique: 25, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 25, success, outsize: 120
> unique: 26, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 26, success, outsize: 120
> unique: 27, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 27, success, outsize: 120
> unique: 28, opcode: GETATTR (3), nodeid: 3, insize: 56
> getattr /tmp/acl_dir
>unique: 28, success, outsize: 120
> {code}
> In other scenarios, ACL permissions are enforced successfully. For example, 
> as hdfs user I create /tmp/acl_dir_2 and set permissions to 777. I then set 
> the acl user:jenkins:--- on the directory. On the fuse mount, I am not able 
> to ls, mkdir, or touch to that directory as jenkins user.
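For comparison, the ACL that the FsShell honours is visible to any Java client
through the public FileSystem API (a small sketch; the path matches the example
above, the rest is boilerplate). A FUSE layer that consults only the POSIX mode
bits (drwxrwx--- hdfs:nobody) never sees the group:jenkins:rwx entry printed below.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclStatus;

public class ShowAcl {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    AclStatus status = fs.getAclStatus(new Path("/tmp/acl_dir"));
    for (AclEntry entry : status.getEntries()) {
      System.out.println(entry); // e.g. "group:jenkins:rwx"
    }
  }
}
{code}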



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-1915) fuse-dfs does not support append

2018-09-04 Thread Pranay Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranay Singh reassigned HDFS-1915:
--

Assignee: Pranay Singh

> fuse-dfs does not support append
> 
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fuse-dfs
>Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
>Reporter: Sampath K
>Assignee: Pranay Singh
>Priority: Major
>
> Environment:  CloudEra CDH3, EC2 cluster with 2 data nodes and 1 name 
> node(Using ubuntu 10.04 LTS large instances), mounted hdfs in OS using 
> fuse-dfs. 
> Able to do HDFS fs -put but when I try to use a FTP client(ftp PUT) to do the 
> same, I get the following error. I am using vsFTPd on the server.
> Changed the mounted folder permissions to a+w to rule out any WRITE 
> permission issues. I was able to do a FTP GET on the same mounted 
> volume.
> Please advise
> FTPd Log
> ==
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", 
> "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in Namenode Log (I did a ftp GET on counter.txt and PUT with 
> counter1.txt) 
> ===
> 2011-05-11 01:03:02,822 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=null perm=null
> 2011-05-11 01:03:02,825 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=null perm=null
> 2011-05-11 01:03:20,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root  
> ip=/10.32.77.36 cmd=listStatus  src=/upload dst=null perm=null
> 2011-05-11 01:03:20,290 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser   
> ip=/10.32.77.36 cmd=open src=/upload/counter.txt dst=null 
> perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
> NameSystem.startFile: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 
> 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to 
> non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file 
> /upload/counter1.txt on client 10.32.77.36
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.
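The failure in the log reflects a basic property of the FileSystem API (a sketch
under the assumption that the FTP PUT is translated by fuse-dfs into an append on a
not-yet-existing file): append() requires an existing file, while create() does not.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendVsCreate {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path target = new Path("/upload/counter1.txt");
    // fs.append() on a missing file throws FileNotFoundException -- exactly
    // the NameNode error above; a fresh upload needs create() instead.
    try (FSDataOutputStream out =
             fs.exists(target) ? fs.append(target) : fs.create(target)) {
      out.writeBytes("42\n");
    }
  }
}
{code}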



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-09-04 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603615#comment-16603615
 ] 

Sean Mackrory commented on HDFS-13744:
--

Looks good to me, except CR/LF was being escaped as LF, so I attached .003 
with a trivial change that escapes both characters. If it's cool with you, I'll 
commit that version.

> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744
> URL: https://issues.apache.org/jira/browse/HDFS-13744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, tools
>Affects Versions: 2.6.5, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Critical
> Attachments: HDFS-13744.01.patch, HDFS-13744.02.patch, 
> HDFS-13744.03.patch
>
>
> In certain cases when control characters or white space is present in file or 
> directory names OIV tool processors can export data in a misleading format.
> In the below examples we have EXAMPLE_NAME as a file and a directory name 
> where the directory has a line feed character at the end (the actual 
> production case has multiple line feeds and multiple spaces)
>  * Delimited processor case:
>  ** misleading example:
> {code:java}
> /user/data/EXAMPLE_NAME
> ,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> /user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * 
>  ** expected example as suggested by 
> [https://tools.ietf.org/html/rfc4180#section-2]:
> {code:java}
> "/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
> 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> "/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * XML processor case:
>  ** misleading example:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME
> 1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * 
>  ** expected example as specified in 
> [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * JSON:
>  The OIV Web Processor behaves correctly and produces the following:
> {code:java}
> {
>   "FileStatuses": {
> "FileStatus": [
>   {
> "fileId": 113632535,
> "accessTime": 1494954320141,
> "replication": 3,
> "owner": "user",
> "length": 520,
> "permission": "674",
> "blockSize": 134217728,
> "modificationTime": 1472205657504,
> "type": "FILE",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME"
>   },
>   {
> "fileId": 479867791,
> "accessTime": 0,
> "replication": 0,
> "owner": "user",
> "length": 0,
> "permission": "775",
> "blockSize": 0,
> "modificationTime": 1493033668294,
> "type": "DIRECTORY",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME\n"
>   }
> ]
>   }
> }
> {code}
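The RFC 4180-style behaviour suggested for the Delimited processor fits in a few
lines (the helper name and the exact escaping policy are illustrative, not the
patch): quote any field containing control characters, the delimiter, or a quote,
and replace CR/LF with a visible escape, along the lines of the "expected" example.

{code:java}
public class CsvQuoting {
  /** Quote a field per RFC 4180 when it contains problematic characters. */
  static String csvField(String value, char delimiter) {
    String escaped = value.replace("\"", "\"\"")
                          .replace("\n", "%x0A")   // escape LF visibly
                          .replace("\r", "%x0D");  // ...and CR, per the review
    boolean needsQuoting =
        !escaped.equals(value) || value.indexOf(delimiter) >= 0;
    return needsQuoting ? '"' + escaped + '"' : value;
  }

  public static void main(String[] args) {
    // Prints: "/user/data/EXAMPLE_NAME%x0A"
    System.out.println(csvField("/user/data/EXAMPLE_NAME\n", ','));
  }
}
{code}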



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13744) OIV tool should better handle control characters present in file or directory names

2018-09-04 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-13744:
-
Attachment: HDFS-13744.03.patch

> OIV tool should better handle control characters present in file or directory 
> names
> ---
>
> Key: HDFS-13744
> URL: https://issues.apache.org/jira/browse/HDFS-13744
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, tools
>Affects Versions: 2.6.5, 2.9.1, 2.8.4, 2.7.6, 3.0.3
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Critical
> Attachments: HDFS-13744.01.patch, HDFS-13744.02.patch, 
> HDFS-13744.03.patch
>
>
> In certain cases when control characters or white space is present in file or 
> directory names OIV tool processors can export data in a misleading format.
> In the below examples we have EXAMPLE_NAME as a file and a directory name 
> where the directory has a line feed character at the end (the actual 
> production case has multiple line feeds and multiple spaces)
>  * Delimited processor case:
>  ** misleading example:
> {code:java}
> /user/data/EXAMPLE_NAME
> ,0,2017-04-24 04:34,1969-12-31 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> /user/data/EXAMPLE_NAME,2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * 
>  ** expected example as suggested by 
> [https://tools.ietf.org/html/rfc4180#section-2]:
> {code:java}
> "/user/data/EXAMPLE_NAME%x0A",0,2017-04-24 04:34,1969-12-31 
> 16:00,0,0,0,-1,-1,drwxrwxr-x+,user,group
> "/user/data/EXAMPLE_NAME",2016-08-26 03:00,2017-05-16 
> 10:05,134217728,1,520,0,0,-rw-rwxr--+,user,group
> {code}
>  * XML processor case:
>  ** misleading example:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME
> 1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * 
>  ** expected example as specified in 
> [https://www.w3.org/TR/REC-xml/#sec-line-ends]:
> {code:java}
> 479867791DIRECTORYEXAMPLE_NAME#xA1493033668294user:group:0775
> 113632535FILEEXAMPLE_NAME314722056575041494954320141134217728user:group:0674
> {code}
>  * JSON:
>  The OIV Web Processor behaves correctly and produces the following:
> {code:java}
> {
>   "FileStatuses": {
> "FileStatus": [
>   {
> "fileId": 113632535,
> "accessTime": 1494954320141,
> "replication": 3,
> "owner": "user",
> "length": 520,
> "permission": "674",
> "blockSize": 134217728,
> "modificationTime": 1472205657504,
> "type": "FILE",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME"
>   },
>   {
> "fileId": 479867791,
> "accessTime": 0,
> "replication": 0,
> "owner": "user",
> "length": 0,
> "permission": "775",
> "blockSize": 0,
> "modificationTime": 1493033668294,
> "type": "DIRECTORY",
> "group": "group",
> "childrenNum": 0,
> "pathSuffix": "EXAMPLE_NAME\n"
>   }
> ]
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-04 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603599#comment-16603599
 ] 

Steve Loughran commented on HDFS-13713:
---

OK. 
h3. init:
* parent.isDirectory()
* Caller gets to do any mkdirs, so they can avoid calling it on every single 
upload of many files; MPU impl just makes sure that it is there. 


h3. commit: 
* dest path is not a dir.
* Skip checking the parent, at least on S3.

I think that balances  out efficiency with making the best of the state of a 
store. 



> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch, 
> multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file
> * add FS model with the notion of a function mapping (uploadID -> Upload), 
> the operations (list, commit, abort). The [TLA+ 
> mode|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]l
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests of all FSs which support the new API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13358) RBF: Support for Delegation Token

2018-09-04 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota reassigned HDFS-13358:
--

Assignee: CR Hota  (was: Sherwood Zheng)

> RBF: Support for Delegation Token
> -
>
> Key: HDFS-13358
> URL: https://issues.apache.org/jira/browse/HDFS-13358
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Sherwood Zheng
>Assignee: CR Hota
>Priority: Major
>
> HDFS Router should support issuing / managing HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13532) RBF: Adding security

2018-09-04 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota reassigned HDFS-13532:
--

Assignee: CR Hota  (was: Sherwood Zheng)

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: RBF _ Security delegation token thoughts.pdf, 
> RBF-DelegationToken-Approach1b.pdf, Security_for_Router-based 
> Federation_design_doc.pdf
>
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-383) Ozone Client should discard preallocated blocks from closed containers

2018-09-04 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603583#comment-16603583
 ] 

Tsz Wo Nicholas Sze commented on HDDS-383:
--

Just a question: is it correct to always set offset to 0 in 
getLocationInfoList()?

If the question above is not a bug, the 03 patch looks good.  +1


> Ozone Client should discard preallocated blocks from closed containers
> --
>
> Key: HDDS-383
> URL: https://issues.apache.org/jira/browse/HDDS-383
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-383.00.patch, HDDS-383.01.patch, HDDS-383.02.patch, 
> HDDS-383.03.patch
>
>
> When a key write happens in the Ozone client, preallocation of blocks 
> happens based on the initial size given. While the write is in progress, 
> containers can get closed; if the remaining preallocated blocks belong to 
> closed containers, they can be discarded right away instead of trying to 
> write these blocks and failing with an exception. This Jira aims to address 
> this.
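A sketch of the intended client-side behaviour (type and method names are
assumptions, not the committed patch): before using the next preallocated block,
drop entries whose containers have been reported closed.

{code:java}
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// All type and method names below are assumptions for illustration.
class PreallocatedBlockFilter {
  static List<AllocatedBlock> usable(List<AllocatedBlock> preallocated,
                                     Set<Long> closedContainerIds) {
    // Discard blocks whose container is closed instead of attempting the
    // write and handling the failure afterwards.
    return preallocated.stream()
        .filter(b -> !closedContainerIds.contains(b.getContainerId()))
        .collect(Collectors.toList());
  }
}
{code}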



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-04 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603535#comment-16603535
 ] 

Hanisha Koneru edited comment on HDDS-369 at 9/4/18 8:13 PM:
-

Thanks for updating the patch [~elek].
Patch v06 LGTM. +1.
I have created a follow-up Jira - HDDS-401 to address the update of storage 
information in the case of a dead node.
I will commit this shortly.



was (Author: hanishakoneru):
Thanks for updating the patch [~elek].
Patch v06 LGTM.
I have created a follow-up Jira - HDDS-401 to address the update of storage 
information in the case of a dead node.
I will commit this shortly.


> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case a node is dead, we need to update the container replica 
> information in the containerStateMap for all the containers from that 
> specific node.
> By removing the replica information we can detect the under-replicated 
> state and start replication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-369) Remove the containers of a dead node from the container state map

2018-09-04 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603535#comment-16603535
 ] 

Hanisha Koneru commented on HDDS-369:
-

Thanks for updating the patch [~elek].
Patch v06 LGTM.
I have created a follow-up Jira - HDDS-401 to address the update of storage 
information in the case of a dead node.
I will commit this shortly.


> Remove the containers of a dead node from the container state map
> -
>
> Key: HDDS-369
> URL: https://issues.apache.org/jira/browse/HDDS-369
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-369.001.patch, HDDS-369.002.patch, 
> HDDS-369.003.patch, HDDS-369.004.patch, HDDS-369.005.patch, HDDS-369.006.patch
>
>
> In case a node is dead, we need to update the container replica 
> information in the containerStateMap for all the containers from that 
> specific node.
> By removing the replica information we can detect the under-replicated 
> state and start replication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-401) Update storage statistics on dead node

2018-09-04 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-401:
---

 Summary: Update storage statistics on dead node 
 Key: HDDS-401
 URL: https://issues.apache.org/jira/browse/HDDS-401
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
 Fix For: 0.3.0


This is a follow-up Jira for HDDS-369.
As per [~ajayydv]'s 
[comment|https://issues.apache.org/jira/browse/HDDS-369?focusedCommentId=16594120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16594120],
 on detecting a dead node in the cluster, we should update the storage stats 
such as usage, space left.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11434) Remove PathUtils#getTestPath() as it's not usable for hdfs path in windows

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603519#comment-16603519
 ] 

Hadoop QA commented on HDFS-11434:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-11434 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11434 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12855347/HDFS-11434-004.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24955/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Remove PathUtils#getTestPath() as it's not usable for hdfs path in windows
> --
>
> Key: HDFS-11434
> URL: https://issues.apache.org/jira/browse/HDFS-11434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test, windows
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HDFS-11434-002.patch, HDFS-11434-003.patch, 
> HDFS-11434-004.patch, HDFS-11434.patch
>
>
> {noformat}
> java.lang.IllegalArgumentException: Pathname 
> /D:/hadoop-trunk/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/qTr6mgA9Yv/TestQuota/testQuotaCommands
>  from 
> D:/hadoop-trunk/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/qTr6mgA9Yv/TestQuota/testQuotaCommands
>  is not a valid DFS filename.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1119)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1116)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1133)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1108)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2218)
> {noformat}
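The root cause is easy to reproduce (a sketch; isValidName lives in DFSUtilClient
on recent branches, but treat the exact location as an assumption): on Windows the
local test directory carries a drive letter, and ':' is not legal in a DFS path
name, which is what getPathName() rejects in the stack trace above.

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSUtilClient;

public class WindowsPathCheck {
  public static void main(String[] args) {
    Path p = new Path(
        "/D:/hadoop-trunk/hadoop-hdfs/target/test/data/TestQuota/testQuotaCommands");
    // false: the drive-letter colon makes this an invalid DFS filename.
    System.out.println(DFSUtilClient.isValidName(p.toUri().getPath()));
  }
}
{code}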



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-396) Remove openContainers.db from SCM

2018-09-04 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603512#comment-16603512
 ] 

Hadoop QA commented on HDDS-396:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-396 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938286/HDDS-396.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7143ea683441 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b993216 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-04 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603504#comment-16603504
 ] 

Ewan Higgs commented on HDFS-13713:
---

{quote}duplicate entries: should the s3a one do a check & fail consistently? Or 
call out that it's a MUST fail with IllegalArgumentException or IOE. (I'd 
prefer a consistent IllegalArgumentException as this check is straightforward 
to do client side){quote}
Agreed. I think {{IllegalArgumentException}} fits best here.

{quote}add a marker to stop a file going in there later. {quote} Goodness, I 
hope we don't have to do that. A feature here is that no file exists until the 
complete method is called! 

If you're working on a system where people will trample your destination files 
with directories, I would prefer the onus to be on the client to create names 
that won't be interfered with (e.g. containing a UUID).

{quote}Or, and its an interesting thought: don't do the checks at init time, 
but postpone them until commit. {quote}
Yes, It's a feature here that the destination file doesn't exist until the 
complete method is called. So it makes sense that this is when all the checks 
happen. The parent directory for a file needs to exist at init time though 
because that's where we put the temp directory with the parts.

{quote}All this is straightforward to test, obviously, which is why having 
consistent exceptions is nice. (and its why I like these FS specs, they 
identify those corner cases we can trivially derive tests from, and, use as the 
reference when trying to decide whether a test failure is a bug in the FS vs 
the test itself{quote} Absolutely!

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch, 
> multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file
> * add FS model with the notion of a function mapping (uploadID -> Upload), 
> the operations (list, commit, abort). The [TLA+ 
> mode|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]l
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests of all FSs which support the new API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13857) RBF: Choose to enable the default nameservice to read/write files

2018-09-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603502#comment-16603502
 ] 

Hudson commented on HDFS-13857:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14874 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14874/])
HDFS-13857. RBF: Choose to enable the default nameservice to read/write 
(inigoiri: rev 54f2044595206455484284b43e5976c8a1982aaf)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestMountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/resources/hdfs-rbf-default.xml


> RBF: Choose to enable the default nameservice to read/write files
> -
>
> Key: HDFS-13857
> URL: https://issues.apache.org/jira/browse/HDFS-13857
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13857.001.patch, HDFS-13857.002.patch, 
> HDFS-13857.003.patch, HDFS-13857.004.patch, HDFS-13857.005.patch
>
>
> The default nameservice can provide some default properties for the namenode 
> protocol, and if we cannot find a path, we will get a location in the default 
> nameservice. From my side as a cluster administrator, we need all files to be 
> written to a location from the MountTableEntry; if there is no corresponding 
> location, an error should be returned. It is undesirable for files to be 
> written to some unknown location. We should provide a specific parameter that 
> controls whether the default nameservice may be used to store files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11434) Remove PathUtils#getTestPath() as it's not usable for hdfs path in windows

2018-09-04 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-11434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603491#comment-16603491
 ] 

Brahma Reddy Battula commented on HDFS-11434:
-

[~elgoiri] and [~huanbang1993], thanks for taking a look.

bq.I'd like to get this for branch-2.9 too; can we provide a patch for that one 
too?

Sure, I can provide a branch-2.9 patch also.

 

[~huanbang1993], can you handle a separate Jira for the tests which you reported?

> Remove PathUtils#getTestPath() as it's not usable for hdfs path in windows
> --
>
> Key: HDFS-11434
> URL: https://issues.apache.org/jira/browse/HDFS-11434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test, windows
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HDFS-11434-002.patch, HDFS-11434-003.patch, 
> HDFS-11434-004.patch, HDFS-11434.patch
>
>
> {noformat}
> java.lang.IllegalArgumentException: Pathname 
> /D:/hadoop-trunk/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/qTr6mgA9Yv/TestQuota/testQuotaCommands
>  from 
> D:/hadoop-trunk/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/qTr6mgA9Yv/TestQuota/testQuotaCommands
>  is not a valid DFS filename.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:212)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1119)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1116)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1133)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1108)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2218)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13713) Add specification of Multipart Upload API to FS specification, with contract tests

2018-09-04 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603489#comment-16603489
 ] 

Ewan Higgs commented on HDFS-13713:
---

[~goiri], yes there is a HDFS implementation. See 
{{org.apache.hadoop.fs.FileSystemMultipartUploader}}. There is no example yet 
of a use since this is a primitive that will be used by forthcoming work 
(HDFS-12090). But it has wide applicability (e.g. DistCP) so it was submitted 
to trunk and not on the HDFS-12090 branch.

{quote}Does it make sense to add a pointer to the S3 implementation as an 
example?{quote}
Maybe, but would pointing to the S3 implementation preempt the possibilities of 
having a wasb and/or adl implementation?

> Add specification of Multipart Upload API to FS specification, with contract 
> tests
> --
>
> Key: HDFS-13713
> URL: https://issues.apache.org/jira/browse/HDFS-13713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ewan Higgs
>Priority: Blocker
> Attachments: HDFS-13713.001.patch, HDFS-13713.002.patch, 
> multipartuploader.md
>
>
> There's nothing in the FS spec covering the new API. Add it in a new .md file
> * add FS model with the notion of a function mapping (uploadID -> Upload), 
> the operations (list, commit, abort). The [TLA+ 
> mode|https://issues.apache.org/jira/secure/attachment/12865161/objectstore.pdf]l
>  of HADOOP-13786 shows how to do this.
> * Contract tests of not just the successful path, but all the invalid ones.
> * implementations of the contract tests of all FSs which support the new API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-04 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603485#comment-16603485
 ] 

Erik Krogen edited comment on HDFS-13791 at 9/4/18 7:32 PM:


Attached v001 patch with the above discussed changes. I decided that to link 
together two limited loggers, you pass one logger into the constructor of the 
other, rather than having to pass around the {{LogAction}}s... Let me know your 
thoughts [~csun].

Also, since this has refactoring, we probably need to put the relevant portion 
of it into trunk (i.e. the new class and the {{FSNamesystemLock}} changes).


was (Author: xkrogen):
Attached v001 patch with the changes discussed above. I decided that, to link 
together two limited loggers, you pass one logger into the constructor of the 
other, rather than having to pass around the {{LogAction}}s... Let me know your 
thoughts.

Also, since this involves refactoring, we probably need to put the relevant 
portion of it into trunk (i.e. the new class and the {{FSNamesystemLock}} 
changes).

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailed only on the order of 
> every few tens of seconds, this is fine. With the work in HDFS-13150, 
> however, edits may be tailed every few milliseconds, which can flood the 
> logs with tailing-related statements. We should throttle this logging to 
> print at most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly trickier since the log statements are distributed across a few 
> classes.
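
To picture the throttling being discussed, here is a minimal stand-alone 
sketch; it is illustrative only and is not the class added by the attached 
patches (all names here are invented).

{code:java}
// Illustrative sketch only: suppress a log statement unless a minimum
// interval has elapsed, and report how many invocations were swallowed.
public class LogThrottleSketch {
  private final long minIntervalMs;          // e.g. 5000 for "once per 5s"
  private long lastLogTimeMs = Long.MIN_VALUE / 2;
  private long suppressed = 0;

  public LogThrottleSketch(long minIntervalMs) {
    this.minIntervalMs = minIntervalMs;
  }

  /** Returns true if the caller should emit its log statement now. */
  public synchronized boolean shouldLog(long nowMs) {
    if (nowMs - lastLogTimeMs >= minIntervalMs) {
      lastLogTimeMs = nowMs;
      return true;
    }
    suppressed++;
    return false;
  }

  /** Invocations suppressed since the last emitted statement. */
  public synchronized long getAndResetSuppressed() {
    long n = suppressed;
    suppressed = 0;
    return n;
  }
}
{code}

A caller would wrap each tail-time statement, e.g. 
{{if (throttle.shouldLog(now)) LOG.info("Tailed edits ({} messages 
suppressed)", throttle.getAndResetSuppressed());}}.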






[jira] [Commented] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-04 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603485#comment-16603485
 ] 

Erik Krogen commented on HDFS-13791:


Attached v001 patch with the changes discussed above. I decided that, to link 
together two limited loggers, you pass one logger into the constructor of the 
other, rather than having to pass around the {{LogAction}}s... Let me know your 
thoughts.

Also, since this involves refactoring, we probably need to put the relevant 
portion of it into trunk (i.e. the new class and the {{FSNamesystemLock}} 
changes).

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailed only on the order of 
> every few tens of seconds, this is fine. With the work in HDFS-13150, 
> however, edits may be tailed every few milliseconds, which can flood the 
> logs with tailing-related statements. We should throttle this logging to 
> print at most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly trickier since the log statements are distributed across a few 
> classes.






[jira] [Comment Edited] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-04 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603485#comment-16603485
 ] 

Erik Krogen edited comment on HDFS-13791 at 9/4/18 7:32 PM:


Attached v001 patch with the changes discussed above. I decided that, to link 
together two limited loggers, you pass one logger into the constructor of the 
other, rather than having to pass around the {{LogAction}}... Let me know your 
thoughts, [~csun].

Also, since this involves refactoring, we probably need to put the relevant 
portion of it into trunk (i.e. the new class and the {{FSNamesystemLock}} 
changes).


was (Author: xkrogen):
Attached v001 patch with the changes discussed above. I decided that, to link 
together two limited loggers, you pass one logger into the constructor of the 
other, rather than having to pass around the {{LogAction}}s... Let me know your 
thoughts, [~csun].

Also, since this involves refactoring, we probably need to put the relevant 
portion of it into trunk (i.e. the new class and the {{FSNamesystemLock}} 
changes).

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailed only on the order of 
> every few tens of seconds, this is fine. With the work in HDFS-13150, 
> however, edits may be tailed every few milliseconds, which can flood the 
> logs with tailing-related statements. We should throttle this logging to 
> print at most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly trickier since the log statements are distributed across a few 
> classes.






[jira] [Updated] (HDFS-13791) Limit logging frequency of edit tail related statements

2018-09-04 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-13791:
---
Attachment: HDFS-13791-HDFS-12943.001.patch

> Limit logging frequency of edit tail related statements
> ---
>
> Key: HDFS-13791
> URL: https://issues.apache.org/jira/browse/HDFS-13791
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, qjm
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13791-HDFS-12943.000.patch, 
> HDFS-13791-HDFS-12943.001.patch
>
>
> There are a number of log statements that occur every time new edits are 
> tailed by a Standby NameNode. When edits are tailed only on the order of 
> every few tens of seconds, this is fine. With the work in HDFS-13150, 
> however, edits may be tailed every few milliseconds, which can flood the 
> logs with tailing-related statements. We should throttle this logging to 
> print at most, say, once per 5 seconds.
> We can implement logic similar to that used in HDFS-10713. This may be 
> slightly trickier since the log statements are distributed across a few 
> classes.






[jira] [Commented] (HDFS-13857) RBF: Choose to enable the default nameservice to read/write files

2018-09-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603477#comment-16603477
 ] 

Íñigo Goiri commented on HDFS-13857:


Thanks [~hfyang20071] for the work.
Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9.

> RBF: Choose to enable the default nameservice to read/write files
> -
>
> Key: HDFS-13857
> URL: https://issues.apache.org/jira/browse/HDFS-13857
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13857.001.patch, HDFS-13857.002.patch, 
> HDFS-13857.003.patch, HDFS-13857.004.patch, HDFS-13857.005.patch
>
>
> The default nameservice can provide some default properties for the namenode 
> protocol, and if a path cannot be resolved, a location in the default 
> nameservice is returned. From my side as a cluster administrator, we need 
> all files to be written to the locations given by the MountTableEntry; if 
> there is no corresponding location, an error should be returned. It is 
> undesirable for files to end up written to some unknown location. We should 
> provide a specific parameter that controls whether the default nameservice 
> may be used to store files.
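
As a sketch of what such a switch could look like on the resolution path (the 
behaviour and any configuration key controlling it are assumptions based on 
this issue's summary, not necessarily what the committed patch uses):

{code:java}
import java.io.IOException;
import java.util.Map;

public class DefaultNsFallbackSketch {
  // Illustrative only: resolve a path against mount table entries, falling
  // back to the default nameservice only when the administrator allows it.
  static String resolve(Map<String, String> mountTable, String path,
      String defaultNs, boolean defaultNsEnabled) throws IOException {
    for (Map.Entry<String, String> e : mountTable.entrySet()) {
      if (path.startsWith(e.getKey())) {
        return e.getValue();   // location defined by the MountTableEntry
      }
    }
    if (defaultNsEnabled) {
      return defaultNs;        // legacy behaviour: silent fallback
    }
    throw new IOException("Cannot find locations for " + path
        + ", and the default nameservice is disabled");
  }

  public static void main(String[] args) throws IOException {
    Map<String, String> mounts = Map.of("/data", "ns1:/data");
    System.out.println(resolve(mounts, "/data/x", "ns0", false)); // ns1:/data
    System.out.println(resolve(mounts, "/other", "ns0", true));   // ns0
    // resolve(mounts, "/other", "ns0", false) would throw instead.
  }
}
{code}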






[jira] [Updated] (HDFS-13857) RBF: Choose to enable the default nameservice to read/write files

2018-09-04 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13857:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.2
               3.0.4
               2.9.2
               3.2.0
               2.10.0
       Status: Resolved  (was: Patch Available)

> RBF: Choose to enable the default nameservice to read/write files
> -
>
> Key: HDFS-13857
> URL: https://issues.apache.org/jira/browse/HDFS-13857
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2
>
> Attachments: HDFS-13857.001.patch, HDFS-13857.002.patch, 
> HDFS-13857.003.patch, HDFS-13857.004.patch, HDFS-13857.005.patch
>
>
> The default nameservice can provide some default properties for the namenode 
> protocol, and if a path cannot be resolved, a location in the default 
> nameservice is returned. From my side as a cluster administrator, we need 
> all files to be written to the locations given by the MountTableEntry; if 
> there is no corresponding location, an error should be returned. It is 
> undesirable for files to end up written to some unknown location. We should 
> provide a specific parameter that controls whether the default nameservice 
> may be used to store files.






[jira] [Commented] (HDDS-400) Check global replication state of the reported containers on SCM

2018-09-04 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603471#comment-16603471
 ] 

Ajay Kumar commented on HDDS-400:
-

[~elek] thanks for submitting the patch. It seems that in this patch we check 
the replication status of all reported container ids instead of only the 
missing and new containers, as was done previously. I think I am missing 
something here; could you please explain why we should not keep the 
ContainerHandler code the same and emit any required replication events when a 
DataNode is detected as dead?

> Check global replication state of the reported containers on SCM
> 
>
> Key: HDDS-400
> URL: https://issues.apache.org/jira/browse/HDDS-400
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-400.001.patch, HDDS-400.002.patch
>
>
> The current container replication handler compares the reported containers 
> with the previous report, and handles the over- or under-replicated states.
> But there is no logic to check the cluster-wide replica count: if a node 
> goes down, it won't be detected.
> For the sake of simplicity I would add this check to the 
> ContainerReportHandler (as of now), so that all the reported containers are 
> verified to have enough replicas.
> We can check the performance implications with genesis, but as a first 
> implementation I think it should be good enough.
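
To illustrate the kind of cluster-wide check being proposed (all names below 
are invented for the sketch, not the actual patch):

{code:java}
import java.util.Map;
import java.util.Set;

public class ReplicaCountCheckSketch {
  // Illustrative only: for every reported container, compare the number of
  // known replicas against the expected replication factor and decide which
  // replication event to emit.
  static void checkReportedContainers(
      Map<Long, Set<String>> replicasByContainer, int expectedReplicas) {
    for (Map.Entry<Long, Set<String>> e : replicasByContainer.entrySet()) {
      int actual = e.getValue().size();
      if (actual < expectedReplicas) {
        // Would emit a "replicate container" event for the SCM to schedule.
        System.out.println("Container " + e.getKey() + " under-replicated: "
            + actual + "/" + expectedReplicas);
      } else if (actual > expectedReplicas) {
        // Would emit a "delete extra replica" event.
        System.out.println("Container " + e.getKey() + " over-replicated: "
            + actual + "/" + expectedReplicas);
      }
    }
  }

  public static void main(String[] args) {
    checkReportedContainers(
        Map.of(1L, Set.of("dn1", "dn2"), 2L, Set.of("dn1", "dn2", "dn3")), 3);
  }
}
{code}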






[jira] [Commented] (HDFS-13860) Space character in the path is shown as "+" while creating dirs in WebHDFS

2018-09-04 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603469#comment-16603469
 ] 

Hanisha Koneru commented on HDFS-13860:
---

Thanks for working on this [~shashikant].
Patch v02 LGTM. Triggered a Jenkins run.

> Space character in the path is shown as "+" while creating dirs in WebHDFS 
> ---
>
> Key: HDFS-13860
> URL: https://issues.apache.org/jira/browse/HDFS-13860
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.0, 3.2.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13860.00.patch, HDFS-13860.01.patch
>
>
> $ ./hdfs dfs -mkdir webhdfs://127.0.0.1/tmp1/"file 1"
> 2018-08-23 15:16:08,258 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> $ ./hdfs dfs -ls webhdfs://127.0.0.1/tmp1
> 2018-08-23 15:16:21,244 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Found 1 items
> drwxr-xr-x   - sbanerjee hadoop          0 2018-08-23 15:16 
> webhdfs://127.0.0.1/tmp1/file+1
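
A likely mechanism behind symptoms like this (offered as background, not as a 
statement of where the bug actually was): {{java.net.URLEncoder}} performs 
application/x-www-form-urlencoded encoding, which maps a space to "+", whereas 
a URI path segment needs percent-encoding so that the space round-trips as 
%20. A small demonstration:

{code:java}
import java.net.URI;
import java.net.URLEncoder;

public class SpaceEncodingDemo {
  public static void main(String[] args) throws Exception {
    // Form encoding: a space becomes '+', wrong for a filesystem path.
    System.out.println(URLEncoder.encode("file 1", "UTF-8"));
    // prints: file+1

    // Percent-encoding via java.net.URI: a space becomes %20, which
    // decodes back to a space on the server side.
    URI u = new URI("webhdfs", "127.0.0.1", "/tmp1/file 1", null);
    System.out.println(u.toASCIIString());
    // prints: webhdfs://127.0.0.1/tmp1/file%201
  }
}
{code}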






[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-09-04 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Attachment: HDFS-13838.003.patch
Status: Patch Available  (was: In Progress)

The rev 002 test case failure is related.
It turns out that the "snapshot enabled" JSON key in JsonUtil is inconsistent 
as well. Fixed in rev 003.

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.0.3, 3.1.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch, 
> HDFS-13838.003.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail for the reason above.
> Update:
> Further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. This will be 
> addressed in a separate jira HDFS-13886.
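
The shape of the fix described above, reduced to a stand-alone sketch (the 
JSON key and method names are assumptions, not the exact Hadoop code; see the 
attached patches for the real change):

{code:java}
import java.util.Map;

public class SnapshotFlagSketch {
  // Illustrative only: when deserializing a file status from WebHDFS JSON,
  // read the "snapshot enabled" flag instead of silently dropping it.
  static boolean isSnapshotEnabled(Map<String, Object> json) {
    Object flag = json.get("snapshotEnabled");   // assumed JSON key
    return flag instanceof Boolean && (Boolean) flag;
  }

  public static void main(String[] args) {
    System.out.println(isSnapshotEnabled(Map.of("snapshotEnabled", true)));
    // prints: true
    System.out.println(isSnapshotEnabled(Map.of()));
    // prints: false
  }
}
{code}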






[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603463#comment-16603463
 ] 

Hudson commented on HDDS-98:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14873 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14873/])
HDDS-98. Adding Ozone Manager Audit Log. Contributed by Dinesh (nanda: rev 
6bbd2490111e0c90a4392a09f3af4a11a80d579c)
* (add) hadoop-ozone/common/src/main/conf/om-audit-log4j2.properties
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) hadoop-dist/src/main/compose/ozone/docker-config
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketArgs.java
* (edit) hadoop-ozone/common/src/main/bin/ozone
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmBucketInfo.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmKeyArgs.java


> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add the Ozone Manager's audit log.






[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-04 Thread Dinesh Chitlangia (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603461#comment-16603461
 ] 

Dinesh Chitlangia commented on HDDS-98:
---

[~nandakumar131] - Thank you for committing this. 

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add the Ozone Manager's audit log.






[jira] [Updated] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-09-04 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13838:
--
Status: In Progress  (was: Patch Available)

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.0.3, 3.1.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13838.001.patch, HDFS-13838.002.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it was found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());
> {code} 
> The first assertion will pass, as expected, while the second assertion will 
> fail for the reason above.
> Update:
> Further investigation shows that FSOperations.toJsonInner() also doesn't 
> check the "snapshot enabled" bit. Therefore, 
> "fs.getFileStatus(path).isSnapshotEnabled()" will always return false for fs 
> type HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. This will be 
> addressed in a separate jira HDFS-13886.






[jira] [Updated] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-04 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-98:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add the Ozone Manager's audit log.






[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-04 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603437#comment-16603437
 ] 

Nanda kumar commented on HDDS-98:
-

Thanks [~dineshchitlangia] for the contribution and [~xyao], [~jnp], [~anu] & 
[~elek] for the review. I have committed this to trunk.

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add the Ozone Manager's audit log.






[jira] [Comment Edited] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-04 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603432#comment-16603432
 ] 

Nanda kumar edited comment on HDDS-98 at 9/4/18 6:40 PM:
-

+1, the patch looks good to me. Ran the tests locally, no failures.

I will commit this shortly.
{code:java}
HDDS: Tests run: 97, Failures: 0, Errors: 0, Skipped: 3
Ozone: Tests run: 283, Failures: 0, Errors: 0, Skipped: 19
OzoneFS: Tests run: 10, Failures: 0, Errors: 0, Skipped: 0
{code}


was (Author: nandakumar131):
Ran the tests locally, no failures. I will commit this shortly.
{code:java}
HDDS: Tests run: 97, Failures: 0, Errors: 0, Skipped: 3
Ozone: Tests run: 283, Failures: 0, Errors: 0, Skipped: 19
OzoneFS: Tests run: 10, Failures: 0, Errors: 0, Skipped: 0
{code}

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add the Ozone Manager's audit log.






[jira] [Commented] (HDDS-98) Adding Ozone Manager Audit Log

2018-09-04 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603432#comment-16603432
 ] 

Nanda kumar commented on HDDS-98:
-

Ran the tests locally, no failures. I will commit this shortly.
{code:java}
HDDS: Tests run: 97, Failures: 0, Errors: 0, Skipped: 3
Ozone: Tests run: 283, Failures: 0, Errors: 0, Skipped: 19
OzoneFS: Tests run: 10, Failures: 0, Errors: 0, Skipped: 0
{code}

> Adding Ozone Manager Audit Log
> --
>
> Key: HDDS-98
> URL: https://issues.apache.org/jira/browse/HDDS-98
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: Logging, audit
> Fix For: 0.2.1
>
> Attachments: HDDS-98.001.patch, HDDS-98.002.patch, HDDS-98.003.patch, 
> HDDS-98.004.patch, HDDS-98.005.patch, HDDS-98.006.patch, HDDS-98.007.patch, 
> HDDS-98.008.patch, audit.log, log4j2.properties
>
>
> This ticket is opened to add the Ozone Manager's audit log.






[jira] [Commented] (HDDS-75) Support for CopyContainer

2018-09-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-75?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603430#comment-16603430
 ] 

Hudson commented on HDDS-75:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14872 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14872/])
HDDS-75. Support for CopyContainer. Contributed by Elek, Marton. (nanda: rev 
b9932162e9eb4acc9c790fc3c4938a5057fc1658)
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Handler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
* (delete) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestReplicateContainerHandler.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestReplicateContainerCommandHandler.java
* (add) 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/package-info.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationClient.java
* (add) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/TestContainerReplication.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/XceiverServerGrpc.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/OzoneContainer.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ContainerDownloader.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ContainerReplicationSource.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/SimpleContainerDownloader.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/package-info.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/GrpcReplicationService.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/OnDemandContainerReplicationSource.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (add) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/replication/ContainerStreamingOutput.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java


> Support for CopyContainer
> -
>
> Key: HDDS-75
> URL: https://issues.apache.org/jira/browse/HDDS-75
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Anu Engineer
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-75.005.patch, HDDS-75.006.patch, HDDS-75.007.patch, 
> HDDS-75.009.patch, HDDS-75.010.patch, HDDS-75.011.patch, HDDS-75.012.patch, 
> HDDS-75.013.patch, HDDS-75.014.patch, HDDS-75.015.patch, 
> HDFS-11686-HDFS-7240.001.patch, HDFS-11686-HDFS-7240.002.patch, 
> HDFS-11686-HDFS-7240.003.patch, HDFS-11686-HDFS-7240.004.patch
>
>
> Once a container is closed we need to copy the container to the correct pool 
> or re-encode the container to use erasure coding. CopyContainer allows 
> users to get the container as a tarball from the remote machine.
> CopyContainer is a basic step to move the raw container data from one 
> datanode to another node. It could be used by higher-level components such 
> as the SCM, which ensures that the replication rules are satisfied.
> CopyContainer works in a pull model by default: the destination datanode 
> can read the raw data from one or more source datanodes where the container 
> exists.
> The source provides a binary representation of the container over a common 
> interface which has two methods:
>  # prepare(containerName)
>  # copyData(String containerName, OutputStream destination)
> The prepare phase is called right after the closing event and the 
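
The two-method interface described above translates directly into Java; the 
sketch below follows the description only (the class actually added by the 
patch is {{ContainerReplicationSource}}, whose exact signatures may differ):

{code:java}
import java.io.IOException;
import java.io.OutputStream;

/**
 * Source side of CopyContainer as described above: exports a closed
 * container's raw data (e.g. as a tarball) over two calls.
 */
public interface ContainerReplicationSourceSketch {
  /** Called right after the container close event to prepare the data. */
  void prepare(String containerName) throws IOException;

  /** Streams the raw container bytes to the destination. */
  void copyData(String containerName, OutputStream destination)
      throws IOException;
}
{code}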
