[jira] [Updated] (HDFS-15079) RBF: Client may get an unexpected result with network anomaly

2019-12-24 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15079:
---
Parent: HDFS-14603
Issue Type: Sub-task  (was: Bug)

> RBF: Client may get an unexpected result with network anomaly 
> 
>
> Key: HDFS-15079
> URL: https://issues.apache.org/jira/browse/HDFS-15079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Priority: Critical
>
>  I found a critical problem on RBF. HDFS-15078 can resolve it in some 
> scenarios, but I have no idea about an overall resolution.
> The problem is:
> A client with RBF (routers r0, r1) creates an HDFS file via r0, gets an 
> exception, and fails over to r1
> r0 has already sent the create rpc to the namenode (1st create)
> The client creates the HDFS file again via r1 (2nd create)
> The client writes the HDFS file and finally closes it (3rd close)
> The namenode may receive the rpcs in the following order:
> 2nd create
> 3rd close
> 1st create
> And since overwrite is true by default, this turns a file that had already 
> been written into an empty file. This is a critical problem.
> We have encountered this problem: many Hive and Spark jobs run on our 
> cluster, and it occurs occasionally.
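>
> A minimal sketch (hypothetical client code, not from this issue) of why the 
> late first create truncates the file: FileSystem.create(path) defaults to 
> overwrite=true, so a delayed create rpc re-creates the already-written file 
> as empty.
> {code:java}
> FileSystem fs = FileSystem.get(new Configuration());
> Path p = new Path("/user/demo/out");          // hypothetical path
>
> // 1st create: sent via router r0; the client sees an exception, fails over.
> // 2nd create: the retry via router r1 reaches the namenode and succeeds.
> FSDataOutputStream out = fs.create(p);        // overwrite defaults to true
> out.writeBytes("payload");
> out.close();                                  // 3rd close: file is complete
>
> // If the delayed 1st create rpc arrives at the namenode only now,
> // overwrite=true re-creates p as an empty file and the data is lost.
> {code}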



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14740) Recover data blocks from persistent memory read cache during datanode restarts

2019-12-24 Thread Feilong He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003132#comment-17003132
 ] 

Feilong He commented on HDFS-14740:
---

[^HDFS-14740.009.patch], [^HDFS-14740-branch-3.1-001.patch], and 
[^HDFS-14740-branch-3.2-001.patch] were uploaded with some code refactoring. We 
will consider checking them in in the following days. If you have any 
suggestions, please feel free to post them.

> Recover data blocks from persistent memory read cache during datanode restarts
> --
>
> Key: HDFS-14740
> URL: https://issues.apache.org/jira/browse/HDFS-14740
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14740-branch-3.1-000.patch, 
> HDFS-14740-branch-3.1-001.patch, HDFS-14740-branch-3.2-000.patch, 
> HDFS-14740-branch-3.2-001.patch, HDFS-14740.000.patch, HDFS-14740.001.patch, 
> HDFS-14740.002.patch, HDFS-14740.003.patch, HDFS-14740.004.patch, 
> HDFS-14740.005.patch, HDFS-14740.006.patch, HDFS-14740.007.patch, 
> HDFS-14740.008.patch, HDFS-14740.009.patch, 
> HDFS_Persistent_Read-Cache_Design-v1.pdf, 
> HDFS_Persistent_Read-Cache_Test-v1.1.pdf, 
> HDFS_Persistent_Read-Cache_Test-v1.pdf, HDFS_Persistent_Read-Cache_Test-v2.pdf
>
>
> In HDFS-13762, persistent memory (PM) is enabled in HDFS centralized cache 
> management. Even though PM can persist cache data, to simplify the initial 
> implementation, the previous cache data is cleaned up during DataNode 
> restarts. Here, we are proposing to improve the HDFS PM cache by taking 
> advantage of PM's data persistence characteristic, i.e., recovering the 
> status of cached data, if any, when the DataNode restarts, so that cache 
> warm-up time can be saved for users.
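>
> A rough sketch of the proposed recovery flow (all names below are 
> hypothetical, not from the patches):
> {code:java}
> // Instead of wiping the pmem cache directory at startup, scan it and
> // re-register the blocks that are still cached.
> File[] cached = pmemCacheDir.listFiles();   // hypothetical cache mount dir
> if (cached != null) {
>   for (File f : cached) {
>     ExtendedBlockId key = parseKeyFromFileName(f.getName()); // hypothetical
>     cacheManager.recoverCachedBlock(key, f);                 // hypothetical
>   }
> }
> {code}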



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15054) Delete Snapshot not updating new modification time

2019-12-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003130#comment-17003130
 ] 

Hudson commented on HDFS-15054:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17793 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17793/])
HDFS-15054. Delete Snapshot not updating new modification time. (ayushsaxena: 
rev 300505c56277982ea4369dce1a2b323b4822fe47)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored


> Delete Snapshot not updating new modification time
> --
>
> Key: HDFS-15054
> URL: https://issues.apache.org/jira/browse/HDFS-15054
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15054.001.patch, HDFS-15054.002.patch
>
>
> On creating a snapshot, we set the modification time for the snapshot and, 
> along with that, update the modification time of the directory the snapshot 
> was created on:
> {code:java}
>   snapshotRoot.updateModificationTime(now, Snapshot.CURRENT_STATE_ID);
>   s.getRoot().setModificationTime(now, Snapshot.CURRENT_STATE_ID); {code}
> So on deleting a snapshot, we should likewise update the modification time 
> of the directory the snapshot was created on.
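>
> A minimal sketch of the suggested change (assuming the delete path can simply 
> mirror the create path above):
> {code:java}
>   // On snapshot deletion, also bump the modification time of the
>   // snapshottable directory, mirroring the createSnapshot path.
>   snapshotRoot.updateModificationTime(now, Snapshot.CURRENT_STATE_ID);
> {code}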



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-12-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003125#comment-17003125
 ] 

Ayush Saxena commented on HDFS-14993:
-

Build results aren't available now.
I have retriggered the build. If everything still seems fine, we can push this.

> checkDiskError doesn't work during datanode startup
> ---
>
> Key: HDFS-14993
> URL: https://issues.apache.org/jira/browse/HDFS-14993
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-14993.patch, HDFS-14993.patch, HDFS-14993.patch
>
>
> The function checkDiskError() is called before addBlockPool, but the list 
> bpSlices is still empty at that point, so check() in FsVolumeImpl.java does 
> nothing:
> {code:java}
> @Override
> public VolumeCheckResult check(VolumeCheckContext ignored)
>     throws DiskErrorException {
>   // TODO:FEDERATION valid synchronization
>   for (BlockPoolSlice s : bpSlices.values()) {
>     s.checkDirs();
>   }
>   return VolumeCheckResult.HEALTHY;
> }
> {code}
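>
> A hedged sketch of the startup ordering that triggers this (call sites 
> simplified, not the actual DataNode startup code):
> {code:java}
> // bpSlices is populated only by addBlockPool(), so a disk check that runs
> // first iterates an empty map and reports HEALTHY without touching disk.
> volume.check(context);            // bpSlices.values() is empty here
> volume.addBlockPool(bpid, conf);  // block pool slices are added only now
> {code}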



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15054) Delete Snapshot not updating new modification time

2019-12-24 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15054:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.
Thanx [~hemanthboyina] for the contribution and [~elgoiri] for the review!!!

> Delete Snapshot not updating new modification time
> --
>
> Key: HDFS-15054
> URL: https://issues.apache.org/jira/browse/HDFS-15054
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-15054.001.patch, HDFS-15054.002.patch
>
>
> On creating a snapshot, we set the modification time for the snapshot and, 
> along with that, update the modification time of the directory the snapshot 
> was created on:
> {code:java}
>   snapshotRoot.updateModificationTime(now, Snapshot.CURRENT_STATE_ID);
>   s.getRoot().setModificationTime(now, Snapshot.CURRENT_STATE_ID); {code}
> So on deleting a snapshot, we should likewise update the modification time 
> of the directory the snapshot was created on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15054) Delete Snapshot not updating new modification time

2019-12-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003119#comment-17003119
 ] 

Ayush Saxena commented on HDFS-15054:
-

v002 LGTM +1

> Delete Snapshot not updating new modification time
> --
>
> Key: HDFS-15054
> URL: https://issues.apache.org/jira/browse/HDFS-15054
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15054.001.patch, HDFS-15054.002.patch
>
>
> On creating a snapshot, we set the modification time for the snapshot and, 
> along with that, update the modification time of the directory the snapshot 
> was created on:
> {code:java}
>   snapshotRoot.updateModificationTime(now, Snapshot.CURRENT_STATE_ID);
>   s.getRoot().setModificationTime(now, Snapshot.CURRENT_STATE_ID); {code}
> So on deleting a snapshot, we should likewise update the modification time 
> of the directory the snapshot was created on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15003) RBF: Make Router support storage type quota.

2019-12-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003112#comment-17003112
 ] 

Ayush Saxena commented on HDFS-15003:
-

Thanx [~LiJinglun] for the update.
I suppose it should have been added here too:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#dfsrouteradmin

+1, once added

> RBF: Make Router support storage type quota.
> 
>
> Key: HDFS-15003
> URL: https://issues.apache.org/jira/browse/HDFS-15003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-15003.001.patch, HDFS-15003.002.patch, 
> HDFS-15003.003.patch, HDFS-15003.004.patch, HDFS-15003.005.patch, 
> HDFS-15003.006.patch
>
>
> Make Router support storage type quota.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15080) Fix the issue in reading persistent memory cache with an offset

2019-12-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003109#comment-17003109
 ] 

Hadoop QA commented on HDFS-15080:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestFileChecksumCompositeCrc |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15080 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989443/HDFS-15080-000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e5b4b3902ee7 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d8cd709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28567/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results 

[jira] [Commented] (HDFS-12999) When reaching the end of the block group, it may not be necessary to flush all the data packets (flushAllInternals) twice.

2019-12-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003094#comment-17003094
 ] 

Hudson commented on HDFS-12999:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17792 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17792/])
HDFS-12999. When reach the end of the block group, it may not need to 
(ayushsaxena: rev df622cf4a32ee172ded6c4b3b97a1e49befc4f10)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java


> When reaching the end of the block group, it may not be necessary to flush 
> all the data packets (flushAllInternals) twice. 
> ---
>
> Key: HDFS-12999
> URL: https://issues.apache.org/jira/browse/HDFS-12999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12999.001.patch, HDFS-12999.002.patch, 
> HDFS-12999.003.patch
>
>
> To simplify the process, there is no need to flush all the data packets 
> (flushAllInternals) twice when reaching the end of the block group.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12999) When reaching the end of the block group, it may not be necessary to flush all the data packets (flushAllInternals) twice.

2019-12-24 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-12999:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.
Thanx [~figo] and [~ferhui] for the work here!!!

> When reaching the end of the block group, it may not be necessary to flush 
> all the data packets (flushAllInternals) twice. 
> ---
>
> Key: HDFS-12999
> URL: https://issues.apache.org/jira/browse/HDFS-12999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-12999.001.patch, HDFS-12999.002.patch, 
> HDFS-12999.003.patch
>
>
> To simplify the process, there is no need to flush all the data packets 
> (flushAllInternals) twice when reaching the end of the block group.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12999) When reaching the end of the block group, it may not be necessary to flush all the data packets (flushAllInternals) twice.

2019-12-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003081#comment-17003081
 ] 

Ayush Saxena commented on HDFS-12999:
-

Thanx [~ferhui] for the help.
v003 LGTM +1

> When reaching the end of the block group, it may not be necessary to flush 
> all the data packets (flushAllInternals) twice. 
> ---
>
> Key: HDFS-12999
> URL: https://issues.apache.org/jira/browse/HDFS-12999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Attachments: HDFS-12999.001.patch, HDFS-12999.002.patch, 
> HDFS-12999.003.patch
>
>
> To simplify the process, there is no need to flush all the data packets 
> (flushAllInternals) twice when reaching the end of the block group.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15080) Fix the issue in reading persistent memory cache with an offset

2019-12-24 Thread Feilong He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-15080:
--
Fix Version/s: 3.2.2
   3.1.4
   3.3.0

> Fix the issue in reading persistent memory cache with an offset
> ---
>
> Key: HDFS-15080
> URL: https://issues.apache.org/jira/browse/HDFS-15080
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-15080-000.patch
>
>
> Some applications can read a segment of pmem cache with an offset specified. 
> The previous implementation for pmem cache read with DirectByteBuffer didn't 
> cover this situation.
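>
> A hedged sketch of how an offset read over a shared DirectByteBuffer can be 
> handled (variable names hypothetical, not the actual patch):
> {code:java}
> // Duplicate the shared cache buffer so each reader gets an independent
> // position, then bound the view to [offset, offset + length) and slice it.
> ByteBuffer view = cacheBuffer.duplicate();
> view.position((int) offset);
> view.limit((int) (offset + length));
> return view.slice();
> {code}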



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15080) Fix the issue in reading persistent memory cache with an offset

2019-12-24 Thread Feilong He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-15080:
--
Description: Some applications can read a segment of pmem cache with an 
offset specified. The previous implementation for pmem cache read with 
DirectByteBuffer didn't cover this situation.  (was: Some applications can read 
a segment of pmem cache with an offset specified. The previous implementation 
didn't cover this situation.)

> Fix the issue in reading persistent memory cache with an offset
> ---
>
> Key: HDFS-15080
> URL: https://issues.apache.org/jira/browse/HDFS-15080
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-15080-000.patch
>
>
> Some applications can read a segment of pmem cache with an offset specified. 
> The previous implementation for pmem cache read with DirectByteBuffer didn't 
> cover this situation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15080) Fix the issue in reading persistent memory cache with an offset

2019-12-24 Thread Feilong He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-15080:
--
Description: Some applications can read a segment of pmem cache with an 
offset specified. The previous implementation didn't cover this situation.

> Fix the issue in reading persistent memory cache with an offset
> ---
>
> Key: HDFS-15080
> URL: https://issues.apache.org/jira/browse/HDFS-15080
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-15080-000.patch
>
>
> Some applications can read a segment of pmem cache with an offset specified. 
> The previous implementation didn't cover this situation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15080) Fix the issue in reading persistent memory cache with an offset

2019-12-24 Thread Feilong He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-15080:
--
Attachment: HDFS-15080-000.patch
Status: Patch Available  (was: Open)

> Fix the issue in reading persistent memory cache with an offset
> ---
>
> Key: HDFS-15080
> URL: https://issues.apache.org/jira/browse/HDFS-15080
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-15080-000.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15080) Fix the issue in reading persistent memory cache with an offset

2019-12-24 Thread Feilong He (Jira)
Feilong He created HDFS-15080:
-

 Summary: Fix the issue in reading persistent memory cache with an 
offset
 Key: HDFS-15080
 URL: https://issues.apache.org/jira/browse/HDFS-15080
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching, datanode
Reporter: Feilong He
Assignee: Feilong He






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15073) Replace curator-shaded guava import with the standard one

2019-12-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HDFS-15073.
--
Resolution: Fixed

> Replace curator-shaded guava import with the standard one
> -
>
> Key: HDFS-15073
> URL: https://issues.apache.org/jira/browse/HDFS-15073
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Akira Ajisaka
>Assignee: Chandra Sanivarapu
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> In SnapshotDiffReportListing.java, 
> {code}
> import org.apache.curator.shaded.com.google.common.base.Preconditions;
> {code}
> should be
> {code}
> import com.google.common.base.Preconditions;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15073) Replace curator-shaded guava import with the standard one

2019-12-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15073:
-
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
 Hadoop Flags: Reviewed

> Replace curator-shaded guava import with the standard one
> -
>
> Key: HDFS-15073
> URL: https://issues.apache.org/jira/browse/HDFS-15073
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Akira Ajisaka
>Assignee: Chandra Sanivarapu
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> In SnapshotDiffReportListing.java, 
> {code}
> import org.apache.curator.shaded.com.google.common.base.Preconditions;
> {code}
> should be
> {code}
> import com.google.common.base.Preconditions;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15073) Replace curator-shaded guava import with the standard one

2019-12-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003048#comment-17003048
 ] 

Akira Ajisaka commented on HDFS-15073:
--

Merged the PR into trunk, branch-3.2, and branch-3.1. Thanks [~csanivar] for 
the contribution!

> Replace curator-shaded guava import with the standard one
> -
>
> Key: HDFS-15073
> URL: https://issues.apache.org/jira/browse/HDFS-15073
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Akira Ajisaka
>Assignee: Chandra Sanivarapu
>Priority: Minor
>  Labels: newbie
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
>
> In SnapshotDiffReportListing.java, 
> {code}
> import org.apache.curator.shaded.com.google.common.base.Preconditions;
> {code}
> should be
> {code}
> import com.google.common.base.Preconditions;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15073) Replace curator-shaded guava import with the standard one

2019-12-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17003043#comment-17003043
 ] 

Hudson commented on HDFS-15073:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17790 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17790/])
HDFS-15073. Replace curator-shaded guava import with the standard one 
(aajisaka: rev d8cd7098b4bcfbfd76915b9ecefb2c7ea320e149)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReportListing.java


> Replace curator-shaded guava import with the standard one
> -
>
> Key: HDFS-15073
> URL: https://issues.apache.org/jira/browse/HDFS-15073
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Akira Ajisaka
>Assignee: Chandra Sanivarapu
>Priority: Minor
>  Labels: newbie
>
> In SnapshotDiffReportListing.java, 
> {code}
> import org.apache.curator.shaded.com.google.common.base.Preconditions;
> {code}
> should be
> {code}
> import com.google.common.base.Preconditions;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15073) Replace curator-shaded guava import with the standard one

2019-12-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15073:
-
Summary: Replace curator-shaded guava import with the standard one  (was: 
Remove the usage of curator-shaded guava in SnapshotDiffReportListing.java)

> Replace curator-shaded guava import with the standard one
> -
>
> Key: HDFS-15073
> URL: https://issues.apache.org/jira/browse/HDFS-15073
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Akira Ajisaka
>Assignee: Chandra Sanivarapu
>Priority: Minor
>  Labels: newbie
>
> In SnapshotDiffReportListing.java, 
> {code}
> import org.apache.curator.shaded.com.google.common.base.Preconditions;
> {code}
> should be
> {code}
> import com.google.common.base.Preconditions;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-15073) Remove the usage of curator-shaded guava in SnapshotDiffReportListing.java

2019-12-24 Thread Chandra Sanivarapu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15073 started by Chandra Sanivarapu.
-
> Remove the usage of curator-shaded guava in SnapshotDiffReportListing.java
> --
>
> Key: HDFS-15073
> URL: https://issues.apache.org/jira/browse/HDFS-15073
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Akira Ajisaka
>Assignee: Chandra Sanivarapu
>Priority: Minor
>  Labels: newbie
>
> In SnapshotDiffReportListing.java, 
> {code}
> import org.apache.curator.shaded.com.google.common.base.Preconditions;
> {code}
> should be
> {code}
> import com.google.common.base.Preconditions;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15076) Fix tests that hold FSDirectory lock, without holding FSNamesystem lock.

2019-12-24 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002994#comment-17002994
 ] 

Hudson commented on HDFS-15076:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17789 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17789/])
HDFS-15076. Fix tests that hold FSDirectory lock, without holding (shv: rev 
b98ac2a3af50ccf2af07790ab0760d4c51820836)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGetBlockLocations.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java


> Fix tests that hold FSDirectory lock, without holding FSNamesystem lock.
> 
>
> Key: HDFS-15076
> URL: https://issues.apache.org/jira/browse/HDFS-15076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-15076.001.patch
>
>
> Three tests {{TestGetBlockLocations}}, {{TestFSNamesystem}}, 
> {{TestDiskspaceQuotaUpdate}} use {{FSDirectory}} methods, which hold 
> FSDirectory lock. They should also hold the global Namesystem lock.
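>
> A minimal sketch of the locking order the fix enforces in those tests (lock 
> method names as in FSNamesystem/FSDirectory; the surrounding test code is 
> elided):
> {code:java}
> fsn.writeLock();        // take the global namesystem lock first
> try {
>   fsd.writeLock();      // then the FSDirectory lock
>   try {
>     // ... mutate the directory tree under both locks ...
>   } finally {
>     fsd.writeUnlock();
>   }
> } finally {
>   fsn.writeUnlock();
> }
> {code}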



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15076) Fix tests that hold FSDirectory lock, without holding FSNamesystem lock.

2019-12-24 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-15076:
---
Fix Version/s: 2.10.1
   3.2.2
   3.1.4
   3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~ayushtkn] for the review.
I just committed this to branches up to 2.10.

> Fix tests that hold FSDirectory lock, without holding FSNamesystem lock.
> 
>
> Key: HDFS-15076
> URL: https://issues.apache.org/jira/browse/HDFS-15076
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-15076.001.patch
>
>
> Three tests {{TestGetBlockLocations}}, {{TestFSNamesystem}}, 
> {{TestDiskspaceQuotaUpdate}} use {{FSDirectory}} methods, which hold 
> FSDirectory lock. They should also hold the global Namesystem lock.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15063) HttpFS : getFileStatus doesn't return ecPolicy

2019-12-24 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002918#comment-17002918
 ] 

hemanthboyina commented on HDFS-15063:
--

Thanks for the review, [~tasanuma].
{quote}Why does it change to use JsonUtilClient in HttpFSFileSystem?
{quote}
We get a JSON map of the HdfsFileStatus object from the NN. When it was 
converted to a FileStatus object in HttpFSFileSystem, some fields were getting 
missed.
WebHdfs uses JsonUtilClient.toFileStatus(json, true) to convert the Map to an 
HdfsFileStatus and then to a FileStatus, so I did the same for 
HttpFSFileSystem.
{quote} * {{testECPolicy}} may be better for the unit test name.
 * We can reuse getStatus() in the unit test like other unit tests in the 
class.{quote}
I will update the patch.
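
A minimal sketch of that conversion path (the makeQualified step is an 
assumption about how the FileStatus is finalized, not a quote of the patch):
{code:java}
// Convert the JSON map from the NameNode into an HdfsFileStatus first,
// which preserves fields such as the ecPolicy, and only then into a
// FileStatus (the makeQualified call below is an assumption).
HdfsFileStatus status = JsonUtilClient.toFileStatus(json, true);
FileStatus fileStatus = status.makeQualified(getUri(), path); // path = request path
{code}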

> HttpFS : getFileStatus doesn't return ecPolicy
> --
>
> Key: HDFS-15063
> URL: https://issues.apache.org/jira/browse/HDFS-15063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15063.001.patch
>
>
> Currently a LISTSTATUS call to HttpFS returns JSON, and the JSON array 
> elements carry the ecPolicy name.
> But when HttpFSFileSystem converts them back into FileStatus objects, the 
> ecPolicy is not added.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14740) Recover data blocks from persistent memory read cache during datanode restarts

2019-12-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002793#comment-17002793
 ] 

Hadoop QA commented on HDFS-14740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
18s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
0s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
52s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}247m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
|   | hadoop.hdfs.server.namenode.ha.TestAddBlockTailing |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:0f25cbbb251 |
| JIRA Issue | HDFS-14740 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989413/HDFS-14740-branch-3.2-001.patch
 |
| 

[jira] [Commented] (HDFS-14740) Recover data blocks from persistent memory read cache during datanode restarts

2019-12-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002792#comment-17002792
 ] 

Hadoop QA commented on HDFS-14740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 2s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
7s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}211m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
|   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:70a0ef5d4a6 |
| JIRA Issue | HDFS-14740 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989416/HDFS-14740-branch-3.1-001.patch
 |
| Optional Tests 

[jira] [Commented] (HDFS-14740) Recover data blocks from persistent memory read cache during datanode restarts

2019-12-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002777#comment-17002777
 ] 

Hadoop QA commented on HDFS-14740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 17m 23s{color} | 
{color:red} root generated 1 new + 25 unchanged - 1 fixed = 26 total (was 26) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}247m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestFileChecksumCompositeCrc |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-14740 |
| JIRA Patch URL | 

[jira] [Comment Edited] (HDFS-15078) RBF: Should check connection channel before sending rpc to namenode

2019-12-24 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002739#comment-17002739
 ] 

Fei Hui edited comment on HDFS-15078 at 12/24/19 10:23 AM:
---

{quote}
The issue is the first router which sent the request that late, That client did 
failover to another router, triggered a new call and the second router 
completed the call, and the first call came after this. 
{quote}
Getting an EOFException makes the client fail over to another router.
Later, the second router completed the call while the first router sent its 
request late. If only the first router had sent the request late, the client 
would not have gotten an exception and would not have failed over.

{quote}
If the client crashed post the check, this scenario will again come, This 
doesn't seems to be a problem with the client crashing and the Router sending 
the request still to Namenode,

If such a case where one Router is delaying, I think without client connection 
crashing still issues like these can come up.
{quote}
Yes. This issue can only resolve the problem in some scenarios, and it's just 
an improvement. HDFS-15079 tracks the higher-level problem.

In our scenarios, this fix works.



was (Author: ferhui):
{quote}
The issue is the first router which sent the request that late, That client did 
failover to another router, triggered a new call and the second router 
completed the call, and the first call came after this. 
{quote}
Getting EOFException makes client failover to another router. 
And later the second router completed the call,  the first router sent the 
request late. If just the first router sent the request late, client doesn't 
get exception, it will not failover

{quote}
If such a case where one Router is delaying, I think without client connection 
crashing still issues like these can come up.
{quote}
Yes. This issue only can resolve the problem on some scenarios. HDFS-15079 
tracks the high level problem.

In our  scenarios. This fix works.


> RBF: Should check connection channel before sending rpc to namenode
> ---
>
> Key: HDFS-15078
> URL: https://issues.apache.org/jira/browse/HDFS-15078
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-15078.001.patch, HDFS-15078.002.patch
>
>
> dfsrouter logs show that
> {quote}
> 2019-12-20 04:11:26,724 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 6400 on , call org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 10.83.164.11:56908 Call#2 Retry#0: output error
> 2019-12-20 04:11:26,724 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 125 on  caught an exception
> java.nio.channels.ClosedChannelException
> at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
> at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2731)
> at org.apache.hadoop.ipc.Server.access$2100(Server.java:134)
> at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1089)
> at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1161)
> at 
> org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2109)
> at 
> org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1229)
> at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:631)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2245)
> {quote}
> Maybe checking the connection between client and router before sending the 
> rpc to the namenode is better.
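>
> A rough sketch of the proposed check (names hypothetical; the actual router 
> code path differs):
> {code:java}
> // Before forwarding the rpc to the namenode, verify that the channel back
> // to the client is still open; if not, fail fast instead of forwarding.
> if (!call.getConnection().getChannel().isOpen()) {
>   throw new ClosedChannelException();
> }
> return method.invoke(proxy, args); // forward only for live client callers
> {code}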



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15078) RBF: Should check connection channel before sending rpc to namenode

2019-12-24 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002739#comment-17002739
 ] 

Fei Hui edited comment on HDFS-15078 at 12/24/19 10:02 AM:
---

{quote}
The issue is the first router, which sent the request that late: the client 
did fail over to another router, triggering a new call; the second router 
completed that call, and the first call arrived after it.
{quote}
Getting an EOFException makes the client fail over to another router. 
The second router then completed the call, and the first router sent its 
request late. If only the first router had sent the request late, the client 
would not have received an exception and would not have failed over.

{quote}
In a case where one Router is delayed, I think issues like these can still 
come up even without the client connection crashing.
{quote}
Yes. This issue can only resolve the problem in some scenarios. HDFS-15079 
tracks the higher-level problem.

In our scenarios, this fix works.



was (Author: ferhui):
{quote}
The issue is the first router, which sent the request that late: the client 
did fail over to another router, triggering a new call; the second router 
completed that call, and the first call arrived after it.
{quote}
Getting an EOFException makes the client fail over to another router. 
The second router then completed the call, and the first router sent its 
request late.

{quote}
If such a case where one Router is delaying, I think without client connection 
crashing still issues like these can come up.
{quote}
Yes. This issue can only resolve the problem in some scenarios. HDFS-15079 
tracks the higher-level problem.

In our scenarios, this fix works.


> RBF: Should check connection channel before sending rpc to namenode
> ---
>
> Key: HDFS-15078
> URL: https://issues.apache.org/jira/browse/HDFS-15078
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-15078.001.patch, HDFS-15078.002.patch
>
>
> dfsrouter logs show that
> {quote}
> 2019-12-20 04:11:26,724 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 6400 on , call org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 10.83.164.11:56908 Call#2 Retry#0: output error
> 2019-12-20 04:11:26,724 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 125 on  caught an exception
> java.nio.channels.ClosedChannelException
> at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
> at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2731)
> at org.apache.hadoop.ipc.Server.access$2100(Server.java:134)
> at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1089)
> at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1161)
> at 
> org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2109)
> at 
> org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1229)
> at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:631)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2245)
> {quote}
> Maybe checking the connection between client and router before sending the 
> rpc to the namenode is better



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15079) RBF: Client maybe get an unexpected result with network anomaly

2019-12-24 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002762#comment-17002762
 ] 

Fei Hui commented on HDFS-15079:


General idea (a rough sketch follows the list):
* the client generates an id and sends it with each call to the namenode
* the namenode keeps the last id seen for the file of each lease
* the namenode drops a call if its id is less than the last id
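
A toy sketch of that idea, with hypothetical names; this is not namenode
code, just the dedup rule from the list above.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Toy sketch: the client tags each call on a file with an increasing id;
 * the receiver remembers the highest id seen per file under lease and
 * drops any call carrying a smaller id.
 */
public class StaleCallFilter {

  /** Highest call id accepted so far for each file under lease. */
  private final Map<String, Long> lastIdPerFile = new ConcurrentHashMap<>();

  /** Returns true if the call should be processed, false if it is stale. */
  public boolean accept(String path, long callId) {
    // merge() atomically keeps the maximum id; the call is stale when a
    // higher id was already recorded, e.g. the delayed 1st create arriving
    // after the retried 2nd create and the 3rd close.
    long latest = lastIdPerFile.merge(path, callId, Math::max);
    return latest == callId;
  }

  /** Forget the file once its lease is released. */
  public void release(String path) {
    lastIdPerFile.remove(path);
  }
}
{code}

Calls with an equal id still pass, so only strictly older calls are dropped, 
matching the "less than last id" rule.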

[~ayushtkn]  [~elgoiri] [~hexiaoqiao] Any thoughts?

> RBF: Client maybe get an unexpected result with network anomaly 
> 
>
> Key: HDFS-15079
> URL: https://issues.apache.org/jira/browse/HDFS-15079
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Priority: Critical
>
>  I find there is a critical problem on RBF; HDFS-15078 can resolve it in 
> some scenarios, but I have no idea about the overall resolution.
> The problem is that:
> A client with RBF (r0, r1) creates an HDFS file via r0, gets an exception, 
> and fails over to r1
> r0 has already sent the create rpc to the namenode (1st create)
> The client creates the HDFS file via r1 (2nd create)
> The client writes the HDFS file and finally closes it (3rd close)
> The namenode may receive the rpcs in the following order:
> 2nd create
> 3rd close
> 1st create
> And since overwrite is true by default, this would turn the file that had 
> been written into an empty file. This is a critical problem.
> We had encountered this problem. There are many hive and spark jobs running 
> on our cluster, and sometimes it occurs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12999) When reach the end of the block group, it may not need to flush all the data packets(flushAllInternals) twice.

2019-12-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002760#comment-17002760
 ] 

Hadoop QA commented on HDFS-12999:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-12999 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12989419/HDFS-12999.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ceafc945e910 4.15.0-70-generic #79-Ubuntu SMP Tue Nov 12 
10:36:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 34ff7db |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28566/testReport/ |
| Max. process+thread count | 310 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28566/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> When reach the end of the block 

[jira] [Commented] (HDFS-15063) HttpFS : getFileStatus doesn't return ecPolicy

2019-12-24 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002752#comment-17002752
 ] 

Takanobu Asanuma commented on HDFS-15063:
-

Thanks for working on this, [~hemanthboyina].
 * {{testECPolicy}} may be better for the unit test name.
 * We can reuse getStatus() in the unit test like other unit tests in the class.
 * Why was it changed to use JsonUtilClient in HttpFSFileSystem?

> HttpFS : getFileStatus doesn't return ecPolicy
> --
>
> Key: HDFS-15063
> URL: https://issues.apache.org/jira/browse/HDFS-15063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15063.001.patch
>
>
> Currently the LISTSTATUS call to HttpFS returns JSON, and these jsonArray 
> elements have the ecPolicy name.
> But when HttpFSFileSystem converts it back into a FileStatus object, the 
> ecPolicy is not added



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15079) RBF: Client maybe get an unexpected result with network anomaly

2019-12-24 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15079:
---
Summary: RBF: Client maybe get an unexpected result with network anomaly   
(was: RBF: Client may get an unexpected result with network anomaly )

> RBF: Client maybe get an unexpected result with network anomaly 
> 
>
> Key: HDFS-15079
> URL: https://issues.apache.org/jira/browse/HDFS-15079
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Priority: Critical
>
>  I find there is a critical problem on RBF; HDFS-15078 can resolve it in 
> some scenarios, but I have no idea about the overall resolution.
> The problem is that:
> A client with RBF (r0, r1) creates an HDFS file via r0, gets an exception, 
> and fails over to r1
> r0 has already sent the create rpc to the namenode (1st create)
> The client creates the HDFS file via r1 (2nd create)
> The client writes the HDFS file and finally closes it (3rd close)
> The namenode may receive the rpcs in the following order:
> 2nd create
> 3rd close
> 1st create
> And since overwrite is true by default, this would turn the file that had 
> been written into an empty file. This is a critical problem.
> We had encountered this problem. There are many hive and spark jobs running 
> on our cluster, and sometimes it occurs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15078) RBF: Should check connection channel before sending rpc to namenode

2019-12-24 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002739#comment-17002739
 ] 

Fei Hui commented on HDFS-15078:


{quote}
The issue is the first router, which sent the request that late: the client 
did fail over to another router, triggering a new call; the second router 
completed that call, and the first call arrived after it.
{quote}
Getting an EOFException makes the client fail over to another router. 
The second router then completed the call, and the first router sent its 
request late.

{quote}
If such a case where one Router is delaying, I think without client connection 
crashing still issues like these can come up.
{quote}
Yes. This issue can only resolve the problem in some scenarios. HDFS-15079 
tracks the higher-level problem.

In our scenarios, this fix works.


> RBF: Should check connection channel before sending rpc to namenode
> ---
>
> Key: HDFS-15078
> URL: https://issues.apache.org/jira/browse/HDFS-15078
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-15078.001.patch, HDFS-15078.002.patch
>
>
> dfsrouter logs show that
> {quote}
> 2019-12-20 04:11:26,724 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 6400 on , call org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 10.83.164.11:56908 Call#2 Retry#0: output error
> 2019-12-20 04:11:26,724 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 125 on  caught an exception
> java.nio.channels.ClosedChannelException
> at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
> at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2731)
> at org.apache.hadoop.ipc.Server.access$2100(Server.java:134)
> at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1089)
> at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1161)
> at 
> org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2109)
> at 
> org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1229)
> at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:631)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2245)
> {quote}
> Maybe checking the connection between client and router before sending the 
> rpc to the namenode is better



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15078) RBF: Should check connection channel before sending rpc to namenode

2019-12-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002729#comment-17002729
 ] 

Ayush Saxena commented on HDFS-15078:
-

{quote}And since overwrite is true by default, this would turn the file that 
had been written into an empty file. This is a critical problem and we had 
encountered it
{quote}
This wouldn't be solved by your fix either. If the client crashes after the 
check, this scenario will come up again. This does not seem to be a problem 
of the client crashing while the Router still sends the request to the 
Namenode. The issue is the first router, which sent the request that late: 
the client did fail over to another router, triggering a new call; the second 
router completed that call, and the first call arrived after it.

The problem is that RBF can't ensure perfectly sequential behavior: since 
there are multiple routers accepting calls, if any one router is slow and the 
others are fast, this type of problem can occur. In a case where one Router 
is delayed, I think issues like these can still come up even without the 
client connection crashing. A toy simulation of the reordering follows.
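
A toy simulation, not Hadoop code: two single-threaded executors stand in
for the routers, a queue stands in for the namenode, and a delay on the
first router reproduces the inverted ordering described in HDFS-15079.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

/** Toy demo: calls sent through two independent routers can be reordered. */
public class ReorderDemo {

  public static void main(String[] args) throws InterruptedException {
    LinkedBlockingQueue<String> namenode = new LinkedBlockingQueue<>();
    ExecutorService r0 = Executors.newSingleThreadExecutor();
    ExecutorService r1 = Executors.newSingleThreadExecutor();

    // r0 accepted the create but is slow to forward it.
    r0.submit(() -> { sleep(200); namenode.add("1st create (via r0, delayed)"); });
    // The client saw an exception, failed over to r1, and retried.
    r1.submit(() -> {
      namenode.add("2nd create (via r1)");
      sleep(50);
      namenode.add("3rd close (via r1)");
    });

    r0.shutdown();
    r1.shutdown();
    r0.awaitTermination(1, TimeUnit.SECONDS);
    r1.awaitTermination(1, TimeUnit.SECONDS);
    // Prints: 2nd create, 3rd close, 1st create.
    namenode.forEach(System.out::println);
  }

  private static void sleep(long ms) {
    try {
      Thread.sleep(ms);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}

With overwrite=true on create, that ordering is enough to leave the 
already-written file truncated to empty.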

> RBF: Should check connection channel before sending rpc to namenode
> ---
>
> Key: HDFS-15078
> URL: https://issues.apache.org/jira/browse/HDFS-15078
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-15078.001.patch, HDFS-15078.002.patch
>
>
> dfsrouter logs show that
> {quote}
> 2019-12-20 04:11:26,724 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 6400 on , call org.apache.hadoop.hdfs.protocol.ClientProtocol.create from 
> 10.83.164.11:56908 Call#2 Retry#0: output error
> 2019-12-20 04:11:26,724 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 125 on  caught an exception
> java.nio.channels.ClosedChannelException
> at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
> at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2731)
> at org.apache.hadoop.ipc.Server.access$2100(Server.java:134)
> at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1089)
> at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1161)
> at 
> org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2109)
> at 
> org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1229)
> at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:631)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2245)
> {quote}
> Maybe checking the connection between client and router before sending the 
> rpc to the namenode is better



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12999) When reach the end of the block group, it may not need to flush all the data packets(flushAllInternals) twice.

2019-12-24 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17002720#comment-17002720
 ] 

Fei Hui commented on HDFS-12999:


Yes, [~figo] doesn't seem active nowadays. Uploaded the v003 patch on his 
behalf. [~ayushtkn], please review.

> When reach the end of the block group, it may not need to flush all the data 
> packets(flushAllInternals) twice. 
> ---
>
> Key: HDFS-12999
> URL: https://issues.apache.org/jira/browse/HDFS-12999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Attachments: HDFS-12999.001.patch, HDFS-12999.002.patch, 
> HDFS-12999.003.patch
>
>
> In order to simplify the process, there is no need to flush all the data 
> packets (flushAllInternals) twice when reaching the end of the block group.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12999) When reach the end of the block group, it may not need to flush all the data packets(flushAllInternals) twice.

2019-12-24 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-12999:
---
Attachment: HDFS-12999.003.patch

> When reach the end of the block group, it may not need to flush all the data 
> packets(flushAllInternals) twice. 
> ---
>
> Key: HDFS-12999
> URL: https://issues.apache.org/jira/browse/HDFS-12999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Attachments: HDFS-12999.001.patch, HDFS-12999.002.patch, 
> HDFS-12999.003.patch
>
>
> In order to simplify the process, there is no need to flush all the data 
> packets (flushAllInternals) twice when reaching the end of the block group.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15079) RBF: Client may get an unexpected result with network anomaly

2019-12-24 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-15079:
---
Issue Type: Bug  (was: Improvement)

> RBF: Client may get an unexpected result with network anomaly 
> --
>
> Key: HDFS-15079
> URL: https://issues.apache.org/jira/browse/HDFS-15079
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Priority: Critical
>
>  I find there is a critical problem on RBF; HDFS-15078 can resolve it in 
> some scenarios, but I have no idea about the overall resolution.
> The problem is that:
> A client with RBF (r0, r1) creates an HDFS file via r0, gets an exception, 
> and fails over to r1
> r0 has already sent the create rpc to the namenode (1st create)
> The client creates the HDFS file via r1 (2nd create)
> The client writes the HDFS file and finally closes it (3rd close)
> The namenode may receive the rpcs in the following order:
> 2nd create
> 3rd close
> 1st create
> And since overwrite is true by default, this would turn the file that had 
> been written into an empty file. This is a critical problem.
> We had encountered this problem. There are many hive and spark jobs running 
> on our cluster, and sometimes it occurs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15079) RBF: Client may get an unexpected result with network anomaly

2019-12-24 Thread Fei Hui (Jira)
Fei Hui created HDFS-15079:
--

 Summary: RBF: Client may get an unexpected result with network 
anomaly 
 Key: HDFS-15079
 URL: https://issues.apache.org/jira/browse/HDFS-15079
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Affects Versions: 3.3.0
Reporter: Fei Hui


 I find there is a critical problem on RBF; HDFS-15078 can resolve it in some 
scenarios, but I have no idea about the overall resolution.
The problem is that:

A client with RBF (r0, r1) creates an HDFS file via r0, gets an exception, 
and fails over to r1
r0 has already sent the create rpc to the namenode (1st create)
The client creates the HDFS file via r1 (2nd create)
The client writes the HDFS file and finally closes it (3rd close)
The namenode may receive the rpcs in the following order:

2nd create
3rd close
1st create
And since overwrite is true by default, this would turn the file that had 
been written into an empty file. This is a critical problem.
We had encountered this problem. There are many hive and spark jobs running 
on our cluster, and sometimes it occurs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org