[jira] [Updated] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-11-10 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12459:
---
Attachment: HDFS-12459.006.patch

> Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-12459
> URL: https://issues.apache.org/jira/browse/HDFS-12459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12459.001.patch, HDFS-12459.002.patch, 
> HDFS-12459.003.patch, HDFS-12459.004.patch, HDFS-12459.005.patch, 
> HDFS-12459.006.patch, HDFS-12459.006.patch
>
>
> HDFS-11156 was reverted because the implementation was non-optimal. Based on 
> the suggestion from [~shahrs87], we should avoid creating a DFS client to get 
> block locations, because that creates an extra RPC call. Instead we should use 
> {{NamenodeProtocols#getBlockLocations}} and then convert {{LocatedBlocks}} to 
> {{BlockLocation[]}}.
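
A minimal sketch of that suggestion (hedged: the handler shape and variable 
names are illustrative, not the committed patch):
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;

// Hedged sketch: answer GETFILEBLOCKLOCATIONS from the NameNode-side
// protocol object instead of creating a DFSClient, saving the extra RPC.
public class BlockLocationSketch {
  static BlockLocation[] getFileBlockLocations(
      NamenodeProtocols np, String path, long offset, long length)
      throws IOException {
    LocatedBlocks blocks = np.getBlockLocations(path, offset, length);
    // Reuse the existing client-side conversion utility.
    return DFSUtilClient.locatedBlocks2Locations(blocks);
  }
}
{code}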



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248342#comment-16248342
 ] 

Hadoop QA commented on HDFS-12775:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 9s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
13s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 10s{color} | {color:orange} root: The patch generated 12 new + 871 unchanged 
- 2 fixed = 883 total (was 873) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
44s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestJMXGet |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl |
|   | hadoop.hdfs.server.namenode.TestFSNamesystemMBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12775 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897167/HDFS-12775-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile 

[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248316#comment-16248316
 ] 

Hadoop QA commented on HDFS-12801:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  2s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12801 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897165/HDFS-12801.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 89966987e901 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6d201f7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22049/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22049/testReport/ |
| Max. process+thread count | 3929 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22049/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RBF: Set MountTableResolver as default file resolver
> 
>
>   

[jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-11-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248306#comment-16248306
 ] 

Xiao Chen commented on HDFS-12618:
--

Yup, append is definitely a valid scenario. We can also do a truncate. 

> fsck -includeSnapshots reports wrong amount of total blocks
> ---
>
> Key: HDFS-12618
> URL: https://issues.apache.org/jira/browse/HDFS-12618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-121618.initial, HDFS-12618.001.patch, 
> HDFS-12618.002.patch, HDFS-12618.003.patch
>
>
> When snapshots are enabled, if a file is deleted but still contained in a 
> snapshot, *fsck* will not report blocks for that file, showing a different 
> number of *total blocks* than what is exposed in the Web UI. 
> This should be fine, as *fsck* provides the *-includeSnapshots* option. The 
> problem is that *-includeSnapshots* causes *fsck* to count blocks for 
> every occurrence of a file in snapshots, which is wrong because these blocks 
> should be counted only once (for instance, if a 100MB file is present in 3 
> snapshots, it still maps to only one block in HDFS). This causes fsck to 
> report many more blocks than actually exist in HDFS and are reported in 
> the Web UI.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noformat}
> $ hdfs dfs -ls -R /
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 /snap-test
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/file2
> drwxr-xr-x   - root supergroup  0 2017-05-13 13:03 /test
> {noformat} 
> 2) There are two snapshots, with the two files present on each of the 
> snapshots:
> {noformat}
> $ hdfs dfs -ls -R /snap-test/.snapshot
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap1/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap1/file2
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap2
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap2/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap2/file2
> {noformat}
> 3) *fsck -includeSnapshots* reports 12 blocks in total (4 blocks for the 
> normal file path, plus 4 blocks for each snapshot path):
> {noformat}
> $ hdfs fsck / -includeSnapshots
> FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path / at Mon Oct 09 
> 15:15:36 BST 2017
> Status: HEALTHY
>  Number of data-nodes:1
>  Number of racks: 1
>  Total dirs:  6
>  Total symlinks:  0
> Replicated Blocks:
>  Total size:  1258291200 B
>  Total files: 6
>  Total blocks (validated):12 (avg. block size 104857600 B)
>  Minimally replicated blocks: 12 (100.0 %)
>  Over-replicated blocks:  0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks:   0 (0.0 %)
>  Default replication factor:  1
>  Average block replication:   1.0
>  Missing blocks:  0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
> {noformat}
> 4) Web UI shows the correct number (4 blocks only):
> {noformat}
> Security is off.
> Safemode is off.
> 5 files and directories, 4 blocks = 9 total filesystem object(s).
> {noformat}
> I would like to work on this and will propose an initial solution 
> shortly.
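
The dedup idea can be illustrated with a tiny sketch (hypothetical helper, not 
the actual fsck code): a block contributes to the total only the first time its 
ID is seen, regardless of how many snapshot paths reference it.
{code:java}
import java.util.HashSet;
import java.util.Set;

// Hedged sketch: snapshot copies of a file reference the same physical
// blocks, so block accounting should count each block ID only once.
public class BlockCountSketch {
  static long countDistinctBlocks(Iterable<Long> blockIdsFromAllPaths) {
    Set<Long> seen = new HashSet<>();
    long total = 0;
    for (long id : blockIdsFromAllPaths) {
      if (seen.add(id)) { // add() returns false for snapshot re-occurrences
        total++;
      }
    }
    return total;
  }
}
{code}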



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8198) Erasure Coding: system test of TeraSort

2017-11-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248304#comment-16248304
 ] 

Xiao Chen commented on HDFS-8198:
-

Thanks Daniel for reporting the issue and the details.
bq. I can't seem to find the proper way to upload
Probably due to JIRA permissions. I just added you to the HDFS contributor 
role; do you see the 'Attach Files' option now?

We will try to reproduce this in our cluster too.

> Erasure Coding: system test of TeraSort
> ---
>
> Key: HDFS-8198
> URL: https://issues.apache.org/jira/browse/HDFS-8198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>
> Functional system test of TeraSort on EC files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12685) [READ] FsVolumeImpl exception when scanning Provided storage volume

2017-11-10 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248284#comment-16248284
 ] 

Virajith Jalaparti commented on HDFS-12685:
---

HDFS-12777 disables the DirectoryScanner for Provided volumes, so this issue 
should no longer arise. However, this patch proposes the right way to construct 
{{ScanInfo}} for provided volumes, so that if {{DirectoryScanner}} is ever 
re-enabled on Provided volumes, this error will not occur.
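
For context, the underlying exception is standard {{java.io.File}} behavior (a 
generic illustration, not the DataNode code; the URI is made up):
{code:java}
import java.io.File;
import java.net.URI;

public class FileUriSketch {
  public static void main(String[] args) {
    // java.io.File(URI) accepts only file:// URIs; a remote scheme such
    // as the ones PROVIDED volumes use throws
    // java.lang.IllegalArgumentException: URI scheme is not "file"
    URI remote = URI.create("s3a://bucket/blk_1234"); // illustrative URI
    File local = new File(remote); // throws at runtime
  }
}
{code}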

[~ehiggs], can you take a look at this fix?

> [READ] FsVolumeImpl exception when scanning Provided storage volume
> ---
>
> Key: HDFS-12685
> URL: https://issues.apache.org/jira/browse/HDFS-12685
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12685-HDFS-9806.001.patch, 
> HDFS-12685-HDFS-9806.002.patch
>
>
> I left a Datanode running overnight and found this in the logs in the morning:
> {code}
> 2017-10-18 23:51:54,391 ERROR datanode.DirectoryScanner: Error compiling report for the volume, StorageId: DS-e75ebc3c-6b12-424e-875a-a4ae1a4dcc29
> java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: URI scheme is not "file"
>     at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>     at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:544)
>     at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:393)
>     at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:375)
>     at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:320)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.IllegalArgumentException: URI scheme is not 

[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-10 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Attachment: HDFS-12775-HDFS-9806.001.patch

Posting a patch that reports the capacity of PROVIDED volumes as follows:
# The capacity (and DFS used) of a PROVIDED volume on a DN is reported to be 
equal to the total size of the data (in bytes) mounted from the remote storage. 
Each volume reports zero available capacity (thus 100% usage); see the sketch 
below. This involves changes to {{ProvidedVolumeImpl}}, adding a default 
{{ProvidedVolumeDF}} implementation, and removing the earlier configurable 
{{ProvidedVolumeDF}} interface.
# The capacity of the Provided volumes is not aggregated in the NN and does 
not count towards the total capacity reported by the NN. Thus, the 
"Configured Capacity" metric in the NN Web UI reports only the locally 
available capacity.
# In the stats reported by {{BlockStatsMXBean}}, the capacity of the PROVIDED 
storage type is not aggregated across Datanodes. The reported capacity is equal 
to the capacity of the remote storage.
# Adds a Provided capacity metric to JMX ({{getProvidedCapacity}}) and to the NN 
web UI, to distinguish this capacity from the local HDFS capacity.

These changes are motivated by the fact that the capacity of the PROVIDED 
volumes is virtual; aggregating it across Datanodes in the Namenode can 
give the illusion of a capacity far greater than what is physically 
available in the cluster.
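
The reporting rule in item 1 can be sketched as follows (illustrative class 
shape; only the {{ProvidedVolumeDF}} name comes from the description above):
{code:java}
// Hedged sketch of the rule: capacity == bytes mounted from the remote
// store, zero available, hence 100% usage for a PROVIDED volume.
public class ProvidedVolumeDFSketch {
  private final long remoteBytesMounted; // hypothetical input

  ProvidedVolumeDFSketch(long remoteBytesMounted) {
    this.remoteBytesMounted = remoteBytesMounted;
  }

  long getCapacity()  { return remoteBytesMounted; }
  long getSpaceUsed() { return remoteBytesMounted; } // all reported as used
  long getAvailable() { return 0L; }                 // nothing free
}
{code}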

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable and replacing these with what 
> users would expect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12775) [READ] Fix reporting of Provided volumes

2017-11-10 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12775:
--
Status: Patch Available  (was: Open)

> [READ] Fix reporting of Provided volumes
> 
>
> Key: HDFS-12775
> URL: https://issues.apache.org/jira/browse/HDFS-12775
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12775-HDFS-9806.001.patch
>
>
> Provided Volumes currently report infinite capacity and 0 space used. 
> Further, PROVIDED locations are reported as {{/default-rack/null:0}} in fsck. 
> This JIRA is for making this more readable and replacing these with what 
> users would expect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248265#comment-16248265
 ] 

Íñigo Goiri commented on HDFS-12801:


I should've done this change with HDFS-10880.

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} still uses {{MockResolver}}, the resolver intended for 
> unit testing, as the default. The default should be a real resolver like 
> {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12106) [SPS]: Improve storage policy satisfier configurations

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248258#comment-16248258
 ] 

Hadoop QA commented on HDFS-12106:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
39s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project: The patch generated 16 new 
+ 688 unchanged - 5 fixed = 704 total (was 693) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestStoragePolicySatisfier |
|   | hadoop.hdfs.TestPread |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12106 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248252#comment-16248252
 ] 

Hudson commented on HDFS-12498:
---

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #13221 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13221/])
HDFS-12498. Journal Syncer is not started in Federated + HA cluster. (arp: rev 
6d201f77c734d6c6a9e3e297fe3dbff251cbb8b3)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java


> Journal Syncer is not started in Federated + HA cluster
> ---
>
> Key: HDFS-12498
> URL: https://issues.apache.org/jira/browse/HDFS-12498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.1.0
>
> Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, 
> HDFS-12498.03.patch, HDFS-12498.04.patch, HDFS-12498.05.patch, hdfs-site.xml
>
>
> Journal Syncer is not getting started in an HDFS federated cluster when 
> dfs.shared.edits.dir.<> is provided instead of 
> dfs.namenode.shared.edits.dir.
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
> Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
> addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
> SyncJournal daemon for journal ns1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-10 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248248#comment-16248248
 ] 

Íñigo Goiri commented on HDFS-12801:


{{DFSConfigKeys}} and the documentation at {{HDFSRouterFederation.md}} are 
correct, but the default is still {{MockResolver}}.

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} still uses {{MockResolver}}, the resolver intended for 
> unit testing, as the default. The default should be a real resolver like 
> {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12801:
---
Attachment: HDFS-12801.000.patch

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} still uses {{MockResolver}}, the resolver intended for 
> unit testing, as the default. The default should be a real resolver like 
> {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-10 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12801:
---
Assignee: Íñigo Goiri
  Status: Patch Available  (was: Open)

> RBF: Set MountTableResolver as default file resolver
> 
>
> Key: HDFS-12801
> URL: https://issues.apache.org/jira/browse/HDFS-12801
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Attachments: HDFS-12801.000.patch
>
>
> {{hdfs-default.xml}} still uses {{MockResolver}}, the resolver intended for 
> unit testing, as the default. The default should be a real resolver like 
> {{MountTableResolver}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12801) RBF: Set MountTableResolver as default file resolver

2017-11-10 Thread JIRA
Íñigo Goiri created HDFS-12801:
--

 Summary: RBF: Set MountTableResolver as default file resolver
 Key: HDFS-12801
 URL: https://issues.apache.org/jira/browse/HDFS-12801
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri
Priority: Minor


{{hdfs-default.xml}} still uses {{MockResolver}}, the resolver intended for unit 
testing, as the default. The default should be a real resolver like 
{{MountTableResolver}}.
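
A hedged sketch of the intended {{hdfs-default.xml}} change (the key name is 
assumed from the RBF configuration, not quoted from the patch):
{code:xml}
<!-- Assumed key; the value switches the default away from MockResolver. -->
<property>
  <name>dfs.federation.router.file.resolver.client.class</name>
  <value>org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver</value>
</property>
{code}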



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12754) Lease renewal can hit a deadlock

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248245#comment-16248245
 ] 

Hadoop QA commented on HDFS-12754:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
225 unchanged - 0 fixed = 226 total (was 225) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 46s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 241 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:9 |
| Failed junit tests | hadoop.hdfs.TestParallelShortCircuitRead |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | 

[jira] [Comment Edited] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-11-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248228#comment-16248228
 ] 

Arpit Agarwal edited comment on HDFS-12498 at 11/11/17 12:31 AM:
-

I've committed this. Thanks [~bharatviswa]. Thanks for the code review 
[~hanishakoneru].


was (Author: arpitagarwal):
I've committed this. Thanks [~bharatviswa].

> Journal Syncer is not started in Federated + HA cluster
> ---
>
> Key: HDFS-12498
> URL: https://issues.apache.org/jira/browse/HDFS-12498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.1.0
>
> Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, 
> HDFS-12498.03.patch, HDFS-12498.04.patch, HDFS-12498.05.patch, hdfs-site.xml
>
>
> Journal Syncer is not getting started in an HDFS federated cluster when 
> dfs.shared.edits.dir.<> is provided instead of 
> dfs.namenode.shared.edits.dir.
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
> Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
> addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
> SyncJournal daemon for journal ns1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-11-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12498:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks [~bharatviswa].

> Journal Syncer is not started in Federated + HA cluster
> ---
>
> Key: HDFS-12498
> URL: https://issues.apache.org/jira/browse/HDFS-12498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.1.0
>
> Attachments: HDFS-12498.01.patch, HDFS-12498.02.patch, 
> HDFS-12498.03.patch, HDFS-12498.04.patch, HDFS-12498.05.patch, hdfs-site.xml
>
>
> Journal Syncer is not getting started in an HDFS federated cluster when 
> dfs.shared.edits.dir.<> is provided instead of 
> dfs.namenode.shared.edits.dir.
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
> Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
> addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
> SyncJournal daemon for journal ns1
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12800) Potential disk/block missing when DataNode upgrade with data layout changed

2017-11-10 Thread Wei Yan (JIRA)
Wei Yan created HDFS-12800:
--

 Summary: Potential disk/block missing when DataNode upgrade with 
data layout changed
 Key: HDFS-12800
 URL: https://issues.apache.org/jira/browse/HDFS-12800
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan


During an upgrade with a data layout change, we found some disks were not 
formatted with the new layout version, causing some blocks to go missing. The 
root cause is a race condition in the doUpgrade process.

In the current DataStorage.java loadBlockPoolSliceStorage implementation, the 
loop below restores trash and generates an upgrade task for each datadir, and 
executes these tasks at the end of the datadir for-loop.
{code}
for (StorageLocation dataDir : dataDirs) {
  dataDir.makeBlockPoolDir(bpid, null);
  try {
    // Collect upgrade callables for this datadir; they are executed
    // asynchronously and joined after the loop.
    final List<Callable<StorageDirectory>> callables = Lists.newArrayList();
    final List<StorageDirectory> dirs = bpStorage.recoverTransitionRead(
        nsInfo, dataDir, startOpt, callables, datanode.getConf());
    if (callables.isEmpty()) {
      ..
    } else {
      for (Callable<StorageDirectory> c : callables) {
        tasks.add(new UpgradeTask(dataDir, executor.submit(c)));
      }
    }
  } catch (IOException e) {
    ..
  }
}
{code}

Inside the doUpgrade task, the shared layoutVersion field is actually updated:
{code}
this.layoutVersion = HdfsServerConstants.DATANODE_LAYOUT_VERSION;
{code}
This breaks upgrade task generation for the remaining datadirs 
(BlockPoolSliceStorage.java): the second if condition below no longer holds, so 
some disks are not added to the upgrade task list. As a result, only part of 
the disks are upgraded to the new layout format while the rest are not. 
Restarting the DataNodes reduces the number of missing blocks.
{code}
if (this.layoutVersion > HdfsServerConstants.DATANODE_LAYOUT_VERSION) {
  int restored = restoreBlockFilesFromTrash(getTrashRootDir(sd));
  LOG.info("Restored " + restored + " block files from trash " +
"before the layout upgrade. These blocks will be moved to " +
"the previous directory during the upgrade");
}
if (this.layoutVersion > HdfsServerConstants.DATANODE_LAYOUT_VERSION
|| this.cTime < nsInfo.getCTime()) {
  doUpgrade(sd, nsInfo, callables, conf); // upgrade
  return true;
}
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12351) Explicitly describe the minimal number of DataNodes required to support an EC policy in EC document.

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248087#comment-16248087
 ] 

Hadoop QA commented on HDFS-12351:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
31m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12351 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897125/HDFS-12351.001.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux dc4dbb43c938 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8a1bd9a |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 328 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22046/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Explicitly describe the minimal number of DataNodes required to support an EC 
> policy in EC document.
> 
>
> Key: HDFS-12351
> URL: https://issues.apache.org/jira/browse/HDFS-12351
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12351.001.patch
>
>
> Should explicitly call out the minimal number of DataNodes (i.e., 5 for 
> RS(3,2)) in the EC document, to make it easy to understand for non-storage 
> people. 
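
For context (standard EC arithmetic, not from the patch): an RS(k,m) policy 
stripes each block group across k data cells plus m parity cells, so writing 
needs at least k+m distinct DataNodes:
{noformat}
RS(3,2)  ->  3 + 2  =  5 DataNodes minimum
RS(6,3)  ->  6 + 3  =  9 DataNodes minimum
RS(10,4) -> 10 + 4  = 14 DataNodes minimum
{noformat}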



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12798) Ozone: scm web: fix the node status table

2017-11-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12798:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

Thanks for the contribution. I have committed this to the feature branch.


> Ozone: scm web: fix the node status table
> -
>
> Key: HDFS-12798
> URL: https://issues.apache.org/jira/browse/HDFS-12798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Fix For: HDFS-7240
>
> Attachments: HDFS-12798-HDFS-7240.001.patch, after.png
>
>
> The JMX interface was fixed in HDFS-12684 by removing duplicated 
> information. We need to update the web UI to use the right JMX bean and 
> display the node statuses from the right page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12754) Lease renewal can hit a deadlock

2017-11-10 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-12754:
---
Attachment: HDFS-12754.007.patch

Apologies, the v6 patch had additional changes that are not relevant to this 
JIRA, as [~kihwal] pointed out offline. Uploading a patch with only the needed 
changes.

> Lease renewal can hit a deadlock 
> -
>
> Key: HDFS-12754
> URL: https://issues.apache.org/jira/browse/HDFS-12754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: HDFS-12754.001.patch, HDFS-12754.002.patch, 
> HDFS-12754.003.patch, HDFS-12754.004.patch, HDFS-12754.005.patch, 
> HDFS-12754.006.patch, HDFS-12754.007.patch
>
>
> The client and the renewer can hit a deadlock during a close operation, since 
> closeFile() reaches back into DFSClient#removeFileBeingWritten. This is 
> possible if the client calls close while the renewer is renewing a lease.
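
A generic illustration of the opposite-lock-order hazard described above 
(simplified stand-in objects, not the actual DFSClient/LeaseRenewer code):
{code:java}
public class LockOrderSketch {
  private static final Object clientLock = new Object();  // stands in for client state
  private static final Object renewerLock = new Object(); // stands in for renewer state

  public static void main(String[] args) {
    // Close path: takes the client lock first, then needs the renewer.
    new Thread(() -> {
      synchronized (clientLock) {
        synchronized (renewerLock) { /* remove stream from renewer */ }
      }
    }).start();
    // Renewal path: takes the renewer lock first, then calls back into the
    // client, as closeFile() -> DFSClient#removeFileBeingWritten does.
    new Thread(() -> {
      synchronized (renewerLock) {
        synchronized (clientLock) { /* update client bookkeeping */ }
      }
    }).start();
    // Opposite acquisition order: the two threads can deadlock.
  }
}
{code}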



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12106) [SPS]: Improve storage policy satisfier configurations

2017-11-10 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247990#comment-16247990
 ] 

Surendra Singh Lilhore commented on HDFS-12106:
---

Thanks [~rakeshr] for the review.
Attached the v3 patch; please review.

Fixed comments:
# Comment 1: Fixed
# Comment 2: Fixed
# Comment 3: Agree. Changed to 0.
# Comment 4: Changed to a switch case

> [SPS]: Improve storage policy satisfier configurations
> --
>
> Key: HDFS-12106
> URL: https://issues.apache.org/jira/browse/HDFS-12106
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12106-HDFS-10285-01.patch, 
> HDFS-12106-HDFS-10285-02.patch, HDFS-12106-HDFS-10285-03.patch
>
>
> Following are the changes doing as part of this task:-
> # Make satisfy policy retry configurable.
> Based on 
> [discussion|https://issues.apache.org/jira/browse/HDFS-11965?focusedCommentId=16074338=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16074338]
>  in HDFS-11965, we can make satisfy policy retry configurable.
> # Change the value of {{dfs.storage.policy.satisfier.low.max-streams.preference}} 
> to {{true}} and modify the default value to {{true}} as well. If the user wants 
> an equal share then it should be {{false}}, but presently it is {{true}}, which 
> is not correct. Thanks [~umamaheswararao] for pointing out this case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12106) [SPS]: Improve storage policy satisfier configurations

2017-11-10 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-12106:
--
Attachment: HDFS-12106-HDFS-10285-03.patch

> [SPS]: Improve storage policy satisfier configurations
> --
>
> Key: HDFS-12106
> URL: https://issues.apache.org/jira/browse/HDFS-12106
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-12106-HDFS-10285-01.patch, 
> HDFS-12106-HDFS-10285-02.patch, HDFS-12106-HDFS-10285-03.patch
>
>
> Following are the changes doing as part of this task:-
> # Make satisfy policy retry configurable.
> Based on 
> [discussion|https://issues.apache.org/jira/browse/HDFS-11965?focusedCommentId=16074338=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16074338]
>  in HDFS-11965, we can make satisfy policy retry configurable.
> # Change the value of {{dfs.storage.policy.satisfier.low.max-streams.preference}} 
> to {{true}} and modify the default value to {{true}} as well. If the user wants 
> an equal share then it should be {{false}}, but presently it is {{true}}, which 
> is not correct. Thanks [~umamaheswararao] for pointing out this case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12785) Ozone: Add timeunit for ozone.scm.heartbeat.interval.seconds in ozone-default.xml

2017-11-10 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDFS-12785.
---
Resolution: Duplicate

> Ozone: Add timeunit for ozone.scm.heartbeat.interval.seconds in 
> ozone-default.xml
> -
>
> Key: HDFS-12785
> URL: https://issues.apache.org/jira/browse/HDFS-12785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> Have seen a lot of messages like the one below; adding a time unit will get 
> rid of this info message.
> {code}
> 2017-10-20 17:02:55,168 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> {code}
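
A hedged example of the intended fix in {{ozone-default.xml}} (the value is 
illustrative): an explicit unit suffix stops the "No unit ... assuming SECONDS" 
log.
{code:xml}
<property>
  <name>ozone.scm.heartbeat.interval.seconds</name>
  <!-- "1s" carries an explicit time unit, so no unit is assumed. -->
  <value>1s</value>
</property>
{code}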



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12785) Ozone: Add timeunit for ozone.scm.heartbeat.interval.seconds in ozone-default.xml

2017-11-10 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247937#comment-16247937
 ] 

Bharat Viswanadham commented on HDFS-12785:
---

[~elek]
Yes, I think you have already addressed this one in your JIRA.
Closing this one as a duplicate of HDFS-12698.


> Ozone: Add timeunit for ozone.scm.heartbeat.interval.seconds in 
> ozone-default.xml
> -
>
> Key: HDFS-12785
> URL: https://issues.apache.org/jira/browse/HDFS-12785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> Have seen a lot of messages like the one below; adding a timeunit will help 
> get rid of this info message.
> {code}
> 2017-10-20 17:02:55,168 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12798) Ozone: scm web: fix the node status table

2017-11-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247906#comment-16247906
 ] 

Anu Engineer commented on HDFS-12798:
-

+1, I will commit this shortly.



> Ozone: scm web: fix the node status table
> -
>
> Key: HDFS-12798
> URL: https://issues.apache.org/jira/browse/HDFS-12798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12798-HDFS-7240.001.patch, after.png
>
>
> The JMX interface has been fixed with HDFS-12684 by removing duplicated 
> information. We need to update the web UI to use the right JMX bean and 
> display the node statuses from the right page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12351) Explicitly describe the minimal number of DataNodes required to support an EC policy in EC document.

2017-11-10 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12351:
--
Status: Patch Available  (was: Open)

> Explicitly describe the minimal number of DataNodes required to support an EC 
> policy in EC document.
> 
>
> Key: HDFS-12351
> URL: https://issues.apache.org/jira/browse/HDFS-12351
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12351.001.patch
>
>
> Should explicitly call out the minimal number of DataNodes (i.e., 5 for 
> RS(3,2)) in the EC document, to make it easy to understand for non-storage 
> people. 
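
As a worked example: an RS(k,m) policy stripes k data units plus m parity 
units across distinct DataNodes, so at least k+m DataNodes are required. For 
the built-in Reed-Solomon policies that works out to:

{noformat}
RS(3,2)  -> 3 + 2  = 5  DataNodes
RS(6,3)  -> 6 + 3  = 9  DataNodes
RS(10,4) -> 10 + 4 = 14 DataNodes
{noformat}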



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12351) Explicitly describe the minimal number of DataNodes required to support an EC policy in EC document.

2017-11-10 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12351:
--
Attachment: HDFS-12351.001.patch

> Explicitly describe the minimal number of DataNodes required to support an EC 
> policy in EC document.
> 
>
> Key: HDFS-12351
> URL: https://issues.apache.org/jira/browse/HDFS-12351
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12351.001.patch
>
>
> Should explicitly call out the minimal number of DataNodes (i.e., 5 for 
> RS(3,2)) in the EC document, to make it easy to understand for non-storage 
> people. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-10 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247889#comment-16247889
 ] 

Anu Engineer edited comment on HDFS-12756 at 11/10/17 6:34 PM:
---

[~nandakumar131] , [~msingh] and [~xyao] Thanks for reviews and rebasing the 
patch. I have committed this to the feature branch.


was (Author: anu):
[~nandakumar131] and [~xyao] Thanks for reviews and rebasing the patch. I have 
committed this to the feature branch.

> Ozone: Add datanodeID to heartbeat responses and container protocol
> ---
>
> Key: HDFS-12756
> URL: https://issues.apache.org/jira/browse/HDFS-12756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-12756-HDFS-7240.001.patch, 
> HDFS-12756-HDFS-7240.002.patch, HDFS-12756-HDFS-7240.003.patch, 
> HDFS-12756-HDFS-7240.004.patch, HDFS-12756-HDFS-7240.005.patch
>
>
> If we have the datanode ID in the HB responses and the commands sent to the 
> datanode, we will be able to do additional sanity checking on the datanode 
> before executing the command. This is also very helpful in creating a 
> MiniOzoneCluster with 1000s of simulated nodes, as needed for scale-based 
> unit tests of SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-10 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12756:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~nandakumar131] and [~xyao] Thanks for reviews and rebasing the patch. I have 
committed this to the feature branch.

> Ozone: Add datanodeID to heartbeat responses and container protocol
> ---
>
> Key: HDFS-12756
> URL: https://issues.apache.org/jira/browse/HDFS-12756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-12756-HDFS-7240.001.patch, 
> HDFS-12756-HDFS-7240.002.patch, HDFS-12756-HDFS-7240.003.patch, 
> HDFS-12756-HDFS-7240.004.patch, HDFS-12756-HDFS-7240.005.patch
>
>
> If we have the datanode ID in the HB responses and the commands sent to the 
> datanode, we will be able to do additional sanity checking on the datanode 
> before executing the command. This is also very helpful in creating a 
> MiniOzoneCluster with 1000s of simulated nodes, as needed for scale-based 
> unit tests of SCM.
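
A hypothetical sketch of the kind of sanity check this enables on the 
datanode side; the class and method names here are invented for illustration 
and are not the ones used in the patch:

{code}
// Hypothetical sketch only; not taken from HDFS-12756's patch.
public final class DatanodeCommandSanityCheck {
  private final String localDatanodeId;

  public DatanodeCommandSanityCheck(String localDatanodeId) {
    this.localDatanodeId = localDatanodeId;
  }

  /** Accept only commands addressed to this datanode. */
  public boolean accept(String targetDatanodeId) {
    // A real datanode would log and drop a mismatched command here.
    return localDatanodeId.equals(targetDatanodeId);
  }
}
{code}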



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-10 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247878#comment-16247878
 ] 

Virajith Jalaparti commented on HDFS-12777:
---

Thanks for taking a look [~elgoiri]. Committing patch v4 to feature branch.

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch, 
> HDFS-12777-HDFS-9806.002.patch, HDFS-12777-HDFS-9806.003.patch, 
> HDFS-12777-HDFS-9806.004.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks.
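
A minimal sketch of idea (b): the long base URI is stored once and shared 
across all PROVIDED replicas, and each replica keeps only a short relative 
suffix. Names (including the example base URI) are hypothetical, not the 
actual {{FinalizedProvidedReplicaInfo}} code:

{code}
import java.net.URI;

// Illustrative sketch of sharing a common URI prefix across replicas.
final class SharedPrefixReplica {
  private static final URI BASE = URI.create("s3a://bucket/warehouse/");

  private final String suffix; // per-replica state: the short suffix only

  SharedPrefixReplica(String suffix) {
    this.suffix = suffix;
  }

  URI blockUri() {
    return BASE.resolve(suffix); // full URI rebuilt on demand
  }
}
{code}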



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-10 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch, 
> HDFS-12777-HDFS-9806.002.patch, HDFS-12777-HDFS-9806.003.patch, 
> HDFS-12777-HDFS-9806.004.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED 
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data. 
> Storing the data for these blocks can lead to a large memory footprint. 
> Further, with so many blocks, {{DirectoryScanner}} running on a PROVIDED 
> volume can increase the memory and CPU utilization. 
> To reduce these overheads, this JIRA aims to (a) disable the 
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses on only 
> read-only data in PROVIDED volumes), (b) reduce the space occupied by 
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all 
> PROVIDED blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12787) Ozone: SCM: Aggregate the metrics from all the container reports

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247877#comment-16247877
 ] 

Hadoop QA commented on HDFS-12787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.server.namenode.TestCacheDirectives |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.ozone.web.client.TestKeys |
| Timed out junit tests | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12787 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897080/HDFS-12787-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bbd28a7d9959 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-11-10 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247872#comment-16247872
 ] 

Wellington Chevreuil commented on HDFS-12618:
-

bq. Let's please make sure the test also covers the case that different inode 
references contain different blocks, and verify there is no over/under counting.
My understanding is that this would be the case when a file already present in 
snapshot(s) has then been appended. Do you know of any other condition that 
could cause this? I'm working on tests and a solution to cover this scenario 
as well.



> fsck -includeSnapshots reports wrong amount of total blocks
> ---
>
> Key: HDFS-12618
> URL: https://issues.apache.org/jira/browse/HDFS-12618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-121618.initial, HDFS-12618.001.patch, 
> HDFS-12618.002.patch, HDFS-12618.003.patch
>
>
> When snapshots are enabled, if a file is deleted but is contained by a 
> snapshot, *fsck* will not report blocks for such a file, showing a different 
> number of *total blocks* than what is exposed in the Web UI. 
> This should be fine, as *fsck* provides *-includeSnapshots* option. The 
> problem is that *-includeSnapshots* option causes *fsck* to count blocks for 
> every occurrence of a file on snapshots, which is wrong because these blocks 
> should be counted only once (for instance, if a 100MB file is present on 3 
> snapshots, it would still map to one block only in hdfs). This causes fsck to 
> report much more blocks than what actually exist in hdfs and is reported in 
> the Web UI.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noformat}
> $ hdfs dfs -ls -R /
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 /snap-test
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/file2
> drwxr-xr-x   - root supergroup  0 2017-05-13 13:03 /test
> {noformat} 
> 2) There are two snapshots, with the two files present on each of the 
> snapshots:
> {noformat}
> $ hdfs dfs -ls -R /snap-test/.snapshot
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap1/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap1/file2
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap2
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap2/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap2/file2
> {noformat}
> 3) *fsck -includeSnapshots* reports 12 blocks in total (4 blocks for the 
> normal file path, plus 4 blocks for each snapshot path):
> {noformat}
> $ hdfs fsck / -includeSnapshots
> FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path / at Mon Oct 09 
> 15:15:36 BST 2017
> Status: HEALTHY
>  Number of data-nodes:1
>  Number of racks: 1
>  Total dirs:  6
>  Total symlinks:  0
> Replicated Blocks:
>  Total size:  1258291200 B
>  Total files: 6
>  Total blocks (validated):12 (avg. block size 104857600 B)
>  Minimally replicated blocks: 12 (100.0 %)
>  Over-replicated blocks:  0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks:   0 (0.0 %)
>  Default replication factor:  1
>  Average block replication:   1.0
>  Missing blocks:  0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
> {noformat}
> 4) Web UI shows the correct number (4 blocks only):
> {noformat}
> Security is off.
> Safemode is off.
> 5 files and directories, 4 blocks = 9 total filesystem object(s).
> {noformat}
> I would like to work on this solution, will propose an initial solution 
> shortly.
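
A minimal illustration of the counting fix being discussed, not the actual 
fsck patch: remember the block IDs already seen, so a block referenced from 
the live path and from N snapshot paths is counted exactly once:

{code}
import java.util.HashSet;
import java.util.Set;

// De-duplicating block counter: each distinct block ID counts once.
final class UniqueBlockCounter {
  private final Set<Long> seen = new HashSet<>();
  private long totalBlocks;

  void addBlock(long blockId) {
    if (seen.add(blockId)) { // true only the first time this block appears
      totalBlocks++;
    }
  }

  long getTotalBlocks() {
    return totalBlocks;
  }
}
{code}

In the example above this would report 4 total blocks instead of 12, matching 
the Web UI.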



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247856#comment-16247856
 ] 

Hadoop QA commented on HDFS-12740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:1 |
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| Timed out junit tests | 

[jira] [Commented] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247834#comment-16247834
 ] 

Hadoop QA commented on HDFS-12756:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 46 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  1s{color} | {color:orange} root: The patch generated 2 new + 3 unchanged - 
0 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 19s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:4 |
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | 

[jira] [Commented] (HDFS-12798) Ozone: scm web: fix the node status table

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247801#comment-16247801
 ] 

Hadoop QA commented on HDFS-12798:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897103/HDFS-12798-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 07aaca2100a6 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / b8297b0 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 297 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22045/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: scm web: fix the node status table
> -
>
> Key: HDFS-12798
> URL: https://issues.apache.org/jira/browse/HDFS-12798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12798-HDFS-7240.001.patch, after.png
>
>
> The JMX interface has been fixed with HDFS-12684 by removing duplicated 
> information. We need to update the web UI to use the right JMX bean and 
> display the node statuses from the right page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247738#comment-16247738
 ] 

Hadoop QA commented on HDFS-12799:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12799 does not apply to HDFS-7240. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12799 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897096/HDFS-12799-HDFS-7240.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22044/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch
>
>
> This issue is about extending the HB response protocol between SCM and DN 
> with a command to ask the datanode to close a container. (This is just about 
> extending the protocol, not about fixing the SCM implementation to handle 
> the state transitions.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12798) Ozone: scm web: fix the node status table

2017-11-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12798:

Status: Patch Available  (was: Open)

> Ozone: scm web: fix the node status table
> -
>
> Key: HDFS-12798
> URL: https://issues.apache.org/jira/browse/HDFS-12798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12798-HDFS-7240.001.patch, after.png
>
>
> The JMX interface has been fixed with HDFS-12684 by removing duplicated 
> information. We need to update the web UI to use the right JMX bean and 
> display the node statuses from the right page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12798) Ozone: scm web: fix the node status table

2017-11-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12798:

Attachment: HDFS-12798-HDFS-7240.001.patch
after.png

The node count table is fixed.

To test: start the SCM UI and check the "Node counts" table (it should be 
displayed similarly to the attached screenshot).

> Ozone: scm web: fix the node status table
> -
>
> Key: HDFS-12798
> URL: https://issues.apache.org/jira/browse/HDFS-12798
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12798-HDFS-7240.001.patch, after.png
>
>
> The JMX interface has been fixed with HDFS-12684 by removing duplicated 
> information. We need to update the web UI to use the right JMX bean and 
> display the node statuses from the right page.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-11-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12799:

Attachment: HDFS-12799-HDFS-7240.001.patch

First patch.

Please double check the change to 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/scm/container/ContainerMapping.java.
I don't understand why the original container state was returned. Was it a bug 
or did I miss something?

Also, please check the unit test and suggest a simpler way to create a 
container if you know one; that was the first time I used a MiniOzone cluster 
(I created a key from KSM to avoid the complex workflow of container creation).
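
For readers following along, a hypothetical sketch of the DN-side dispatch 
this protocol extension enables; the enum values and method names are 
invented, not taken from the patch:

{code}
// Hypothetical sketch of handling a close-container command from an HB
// response; not the actual SCM/DN code.
enum ScmCommandType { CLOSE_CONTAINER, OTHER }

final class HeartbeatResponseHandler {
  void onCommand(ScmCommandType type, String containerName) {
    switch (type) {
      case CLOSE_CONTAINER:
        // hand off to the container manager to move the container to CLOSED
        System.out.println("closing container " + containerName);
        break;
      default:
        break; // other commands are handled elsewhere
    }
  }
}
{code}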


> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch
>
>
> This issue is about extending the HB response protocol between SCM and DN 
> with a command to ask the datanode to close a container. (This is just about 
> extending the protocol, not about fixing the SCM implementation to handle 
> the state transitions.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-11-10 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12799:

Status: Patch Available  (was: Open)

> Ozone: SCM: Close containers: extend SCMCommandResponseProto with 
> SCMCloseContainerCmdResponseProto
> ---
>
> Key: HDFS-12799
> URL: https://issues.apache.org/jira/browse/HDFS-12799
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12799-HDFS-7240.001.patch
>
>
> This issue is about extending the HB response protocol between SCM and DN 
> with a command to ask the datanode to close a container. (This is just about 
> extending the protocol, not about fixing the SCM implementation to handle 
> the state transitions.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12754) Lease renewal can hit a deadlock

2017-11-10 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247594#comment-16247594
 ] 

Kuhu Shukla commented on HDFS-12754:


[~kihwal], [~yangjiandan], could you help review? Thank you!

> Lease renewal can hit a deadlock 
> -
>
> Key: HDFS-12754
> URL: https://issues.apache.org/jira/browse/HDFS-12754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: HDFS-12754.001.patch, HDFS-12754.002.patch, 
> HDFS-12754.003.patch, HDFS-12754.004.patch, HDFS-12754.005.patch, 
> HDFS-12754.006.patch
>
>
> The client and the renewer can hit a deadlock during a close operation, 
> since closeFile() reaches back to DFSClient#removeFileBeingWritten. This is 
> possible if the client calls close while the renewer is renewing a lease.
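
An abstract illustration of the lock-ordering cycle described above; names 
are simplified and this is the classic AB-BA shape, not the actual 
DFSClient/LeaseRenewer code:

{code}
// Two threads acquiring the same two monitors in opposite order can deadlock.
final class DeadlockShape {
  private final Object clientLock = new Object();  // think: DFSClient
  private final Object renewerLock = new Object(); // think: LeaseRenewer

  void clientClose() {            // client thread: A then B
    synchronized (clientLock) {
      synchronized (renewerLock) { /* deregister from the renewer */ }
    }
  }

  void renewLease() {             // renewer thread: B then A
    synchronized (renewerLock) {
      synchronized (clientLock) { /* closeFile -> removeFileBeingWritten */ }
    }
  }
}
{code}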



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12799) Ozone: SCM: Close containers: extend SCMCommandResponseProto with SCMCloseContainerCmdResponseProto

2017-11-10 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12799:
---

 Summary: Ozone: SCM: Close containers: extend 
SCMCommandResponseProto with SCMCloseContainerCmdResponseProto
 Key: HDFS-12799
 URL: https://issues.apache.org/jira/browse/HDFS-12799
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


This issue is about extending the HB response protocol between SCM and DN with 
a command to ask the datanode to close a container. (This is just about 
extending the protocol, not about fixing the SCM implementation to handle the 
state transitions.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12785) Ozone: Add timeunit for ozone.scm.heartbeat.interval.seconds in ozone-default.xml

2017-11-10 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247584#comment-16247584
 ] 

Elek, Marton commented on HDFS-12785:
-

Seems to be a duplicate of HDFS-12698.

> Ozone: Add timeunit for ozone.scm.heartbeat.interval.seconds in 
> ozone-default.xml
> -
>
> Key: HDFS-12785
> URL: https://issues.apache.org/jira/browse/HDFS-12785
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> Have seen a lot of messages like the one below; adding a timeunit will help 
> get rid of this info message.
> {code}
> 2017-10-20 17:02:55,168 [Datanode State Machine Thread - 0] INFO  
> Configuration.deprecation (Configuration.java:logDeprecation(1306)) - No unit 
> for ozone.scm.heartbeat.interval.seconds(1) assuming SECONDS
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12647) DN commands processing should be async

2017-11-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12647 started by Nanda kumar.
--
> DN commands processing should be async
> --
>
> Key: HDFS-12647
> URL: https://issues.apache.org/jira/browse/HDFS-12647
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Nanda kumar
>
> Due to dataset lock contention, service actors may encounter significant 
> latency while processing DN commands. Even the queuing of async deletions 
> requires multiple lock acquisitions. A slow disk will cause a backlog of 
> xceivers instantiating block senders/receivers, which starves the actor and 
> leads to the NN falsely declaring the node dead.
> Async processing of all commands will free the actor to perform its primary 
> purpose of heartbeating and block reporting. Note that FBRs will be 
> dependent on queued block invalidations not being included in the report.
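
A minimal sketch of the proposed decoupling, with hypothetical names: the 
heartbeat actor only enqueues commands and returns, while a separate executor 
drains the queue, so a slow disk cannot stall heartbeats or block reports:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only; not the actual datanode code.
final class AsyncCommandProcessor {
  private final ExecutorService worker = Executors.newSingleThreadExecutor();

  void submit(Runnable command) {
    worker.execute(command); // returns to the actor immediately
  }

  void shutdown() {
    worker.shutdown();
  }
}
{code}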



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-4312) fix test TestSecureNameNode and improve test TestSecureNameNodeWithExternalKdc

2017-11-10 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HDFS-4312:
---
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Seems obsolete: Java 6 is not used by any supported version.

> fix test TestSecureNameNode and improve test TestSecureNameNodeWithExternalKdc
> --
>
> Key: HDFS-4312
> URL: https://issues.apache.org/jira/browse/HDFS-4312
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>  Labels: BB2015-05-TBR
> Attachments: HDFS-4312-trunk--N2.patch, HDFS-4312.patch
>
>
> TestSecureNameNode does not work on Java6 without 
> "dfs.web.authentication.kerberos.principal" config property set.
> Also, the following were improved:
> 1) keytab files are checked for existence and readability to provide 
> fast-fail on config error.
> 2) added comment to TestSecureNameNode describing the required sys props.
> 3) string literals replaced with config constants.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12684) Ozone: SCMMXBean NodeCount is overlapping with NodeManagerMXBean

2017-11-10 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247561#comment-16247561
 ] 

Weiwei Yang commented on HDFS-12684:


Thanks for following up on this, [~elek].

> Ozone: SCMMXBean NodeCount is overlapping with NodeManagerMXBean
> 
>
> Key: HDFS-12684
> URL: https://issues.apache.org/jira/browse/HDFS-12684
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12684-HDFS-7240.001.patch
>
>
> I found this issue while reviewing HDFS-11468: from http://scm_host:9876/jmx, 
> both SCM and SCMNodeManager have {{NodeCount}} metrics
> {noformat}
>  {
> "name" : 
> "Hadoop:service=StorageContainerManager,name=StorageContainerManagerInfo,component=ServerRuntime",
> "modelerType" : "org.apache.hadoop.ozone.scm.StorageContainerManager",
> "ClientRpcPort" : "9860",
> "DatanodeRpcPort" : "9861",
> "NodeCount" : [ {
>   "key" : "STALE",
>   "value" : 0
> }, {
>   "key" : "DECOMMISSIONING",
>   "value" : 0
> }, {
>   "key" : "DECOMMISSIONED",
>   "value" : 0
> }, {
>   "key" : "FREE_NODE",
>   "value" : 0
> }, {
>   "key" : "RAFT_MEMBER",
>   "value" : 0
> }, {
>   "key" : "HEALTHY",
>   "value" : 0
> }, {
>   "key" : "DEAD",
>   "value" : 0
> }, {
>   "key" : "UNKNOWN",
>   "value" : 0
> } ],
> "CompileInfo" : "2017-10-17T06:47Z xxx",
> "Version" : "3.1.0-SNAPSHOT, r6019a25908ce75155656f13effd8e2e53ed43461",
> "SoftwareVersion" : "3.1.0-SNAPSHOT",
> "StartedTimeInMillis" : 1508393551065
>   }, {
> "name" : "Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo",
> "modelerType" : "org.apache.hadoop.ozone.scm.node.SCMNodeManager",
> "NodeCount" : [ {
>   "key" : "STALE",
>   "value" : 0
> }, {
>   "key" : "DECOMMISSIONING",
>   "value" : 0
> }, {
>   "key" : "DECOMMISSIONED",
>   "value" : 0
> }, {
>   "key" : "FREE_NODE",
>   "value" : 0
> }, {
>   "key" : "RAFT_MEMBER",
>   "value" : 0
> }, {
>   "key" : "HEALTHY",
>   "value" : 0
> }, {
>   "key" : "DEAD",
>   "value" : 0
> }, {
>   "key" : "UNKNOWN",
>   "value" : 0
> } ],
> "OutOfChillMode" : false,
> "MinimumChillModeNodes" : 1,
> "ChillModeStatus" : "Still in chill mode, waiting on nodes to report in. 
> 0 nodes reported, minimal 1 nodes required."
>   }
> {noformat}
> Hence, I propose to remove {{NodeCount}} from {{SCMMXBean}}.
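
A sketch of the proposal with both interfaces abbreviated; these are assumed 
shapes for illustration, not the exact declarations in the code base:

{code}
// NodeCount remains only on the node-manager bean, so the two JMX objects
// no longer overlap.
interface SCMMXBean {
  String getClientRpcPort();
  String getDatanodeRpcPort();
  // getNodeCount() removed -- served by SCMNodeManagerMXBean instead
}

interface SCMNodeManagerMXBean {
  java.util.Map<String, Integer> getNodeCount();
}
{code}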



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8198) Erasure Coding: system test of TeraSort

2017-11-10 Thread Daniel Pol (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247558#comment-16247558
 ] 

Daniel Pol commented on HDFS-8198:
--

[~eddyxu] I have 7 datanodes. I'm new to the JIRA system and I can't seem to 
find the proper way to upload the terasort output file. Please let me know how 
I can do that. The relevant error from the terasort output is:
17/11/04 09:36:15 INFO mapreduce.Job: Task Id : 
attempt_1509761319113_0021_m_02_0, Status : FAILEDError: 
java.io.IOException: 3 missing blocks, the stripe is: Offset=77594624, 
length=1048576, fetchedChunksNum=1, missingChunksNum=3; locatedBlocks is: 
LocatedBlocks{  fileLength=50  underConstruction=false  
blocks=[LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841888_5101378;
 getBlockSize()=1610612736; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[172.30.253.6:50010,DS-780df34f-44c3-4c67-b7dc-f901bc12a957,DISK],
 
DatanodeInfoWithStorage[172.30.253.5:50010,DS-c5e33c96-3df3-480b-80aa-fe97a3b8e3b4,DISK],
 
DatanodeInfoWithStorage[172.30.253.3:50010,DS-4cd5c037-9dcb-488c-81c2-0aa8ff1cbd2f,DISK],
 
DatanodeInfoWithStorage[172.30.253.4:50010,DS-6bac2c0f-f8c6-4a67-8801-f2a7a74279a6,DISK],
 
DatanodeInfoWithStorage[172.30.253.7:50010,DS-0ee9e606-db4b-4df6-b180-fedb696c5e4f,DISK]];
 indices=[0, 1, 2, 3, 4]}, 
LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841856_5101380;
 getBlockSize()=1610612736; corrupt=false; offset=1610612736; 
locs=[DatanodeInfoWithStorage[172.30.253.2:50010,DS-f053781f-b2c4-41e9-8960-745b3fe8ef50,DISK],
 
DatanodeInfoWithStorage[172.30.253.5:50010,DS-4efc46be-5769-4a2f-9cf6-736b3d56edaf,DISK],
 
DatanodeInfoWithStorage[172.30.253.3:50010,DS-74b0796e-425d-4fa6-9309-247271f63f53,DISK],
 
DatanodeInfoWithStorage[172.30.253.4:50010,DS-ddfc805a-9ed9-4493-921d-acc169787683,DISK],
 
DatanodeInfoWithStorage[172.30.253.7:50010,DS-c3be97ce-660a-4c98-9f71-5c2f76236dc4,DISK]];
 indices=[0, 1, 2, 3, 4]}, 
LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841824_5101382;
 getBlockSize()=1610612736; corrupt=false; offset=3221225472; 
locs=[DatanodeInfoWithStorage[172.30.253.1:50010,DS-336c025e-f04b-475f-b051-d7a4d1b7669f,DISK],
 
DatanodeInfoWithStorage[172.30.253.5:50010,DS-dab6afcd-bf22-4d1d-b878-d52ee0b5bcd9,DISK],
 
DatanodeInfoWithStorage[172.30.253.7:50010,DS-16ade97a-978c-4a83-aae4-f25e861d63f5,DISK],
 
DatanodeInfoWithStorage[172.30.253.2:50010,DS-176f2769-3236-4548-94df-74de95171cdd,DISK],
 
DatanodeInfoWithStorage[172.30.253.3:50010,DS-2350ab83-f4bd-49f1-aa29-f8d4b5de5f78,DISK]];
 indices=[0, 1, 2, 3, 4]}, 
LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841792_5101384;
 getBlockSize()=168161792; corrupt=false; offset=4831838208; 
locs=[DatanodeInfoWithStorage[172.30.253.5:50010,DS-b63b7da0-20b7-4480-b80a-cb0491c4e17f,DISK],
 
DatanodeInfoWithStorage[172.30.253.2:50010,DS-dcb3d66b-ee0f-4e4d-b5c8-611498227092,DISK],
 
DatanodeInfoWithStorage[172.30.253.1:50010,DS-bc0b4749-6599-4691-98b6-35623ce8c08d,DISK],
 
DatanodeInfoWithStorage[172.30.253.7:50010,DS-1029b9e5-abff-4c63-bb9f-7986d1729e03,DISK],
 
DatanodeInfoWithStorage[172.30.253.4:50010,DS-6fa25607-f980-4a15-8592-d31ef51a48ba,DISK]];
 indices=[0, 1, 2, 3, 4]}]  
lastLocatedBlock=LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841792_5101384;
 getBlockSize()=168161792; corrupt=false; offset=4831838208; 
locs=[DatanodeInfoWithStorage[172.30.253.5:50010,DS-b63b7da0-20b7-4480-b80a-cb0491c4e17f,DISK],
 
DatanodeInfoWithStorage[172.30.253.2:50010,DS-dcb3d66b-ee0f-4e4d-b5c8-611498227092,DISK],
 
DatanodeInfoWithStorage[172.30.253.1:50010,DS-bc0b4749-6599-4691-98b6-35623ce8c08d,DISK],
 
DatanodeInfoWithStorage[172.30.253.7:50010,DS-1029b9e5-abff-4c63-bb9f-7986d1729e03,DISK],
 
DatanodeInfoWithStorage[172.30.253.4:50010,DS-6fa25607-f980-4a15-8592-d31ef51a48ba,DISK]];
 indices=[0, 1, 2, 3, 4]}  isLastBlockComplete=true} at 
org.apache.hadoop.hdfs.StripeReader.checkMissingBlocks(StripeReader.java:175) 
at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:366) at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:315)
 at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:388)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:813) at 
java.io.DataInputStream.read(DataInputStream.java:149) at 
org.apache.hadoop.examples.terasort.TeraInputFormat$TeraRecordReader.nextKeyValue(TeraInputFormat.java:257)
 at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:562)
 at 
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
 at 
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
 at 

[jira] [Updated] (HDFS-12787) Ozone: SCM: Aggregate the metrics from all the container reports

2017-11-10 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12787:
-
Attachment: HDFS-12787-HDFS-7240.003.patch

Updated the patch to fix checkstyle and findbugs warnings.

> Ozone: SCM: Aggregate the metrics from all the container reports
> 
>
> Key: HDFS-12787
> URL: https://issues.apache.org/jira/browse/HDFS-12787
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: metrics, ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12787-HDFS-7240.001.patch, 
> HDFS-12787-HDFS-7240.002.patch, HDFS-12787-HDFS-7240.003.patch
>
>
> We should aggregate the metrics from all the reports of different datanodes 
> in addition to the last report. This way, we can get a global view of the 
> container I/Os over the ozone cluster. This is follow-up work to HDFS-11468.
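
A minimal illustration of the aggregation idea, not the actual patch: fold 
the counters from each datanode's container report into cluster-wide totals 
instead of keeping just the last report:

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative aggregator; metric names and types are hypothetical.
final class ContainerReportAggregator {
  private final Map<String, Long> totals = new HashMap<>();

  void onReport(Map<String, Long> datanodeCounters) {
    datanodeCounters.forEach(
        (metric, value) -> totals.merge(metric, value, Long::sum));
  }

  Map<String, Long> snapshot() {
    return new HashMap<>(totals);
  }
}
{code}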



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-10 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12740:
---
Attachment: HDFS-12740-HDFS-7240.003.patch

Attached correct v3 patch and removed the stale one.

> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12740-HDFS-7240.001.patch, 
> HDFS-12740-HDFS-7240.002.patch, HDFS-12740-HDFS-7240.003.patch
>
>
> When the ozone cluster is first created, the SCM --init command will generate
> a cluster Id as well as an SCM Id and persist them locally. The same cluster
> Id and SCM Id will be shared with KSM during KSM initialization and with
> datanodes during datanode registration.
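
A rough sketch of the shape such an RPC could take; the interface and type
names below are hypothetical, not the actual protocol from the patch:

{code:java}
import java.io.IOException;

/** Illustrative only: an RPC surface for sharing cluster identity. */
public interface ScmInfoProtocol {

  /** Both ids were generated and persisted by 'scm --init'. */
  final class ScmInfo {
    public final String clusterId;
    public final String scmId;
    public ScmInfo(String clusterId, String scmId) {
      this.clusterId = clusterId;
      this.scmId = scmId;
    }
  }

  /** Called by KSM at init time and by datanodes at registration. */
  ScmInfo getScmInfo() throws IOException;
}
{code}

On the server side this would simply read the ids persisted by --init; callers
can fail fast if the returned cluster Id does not match their own.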



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-10 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12740:
---
Attachment: (was: HDFS-12740-HDFS-7240.003.patch)

> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12740-HDFS-7240.001.patch, 
> HDFS-12740-HDFS-7240.002.patch
>
>
> When the ozone cluster is first created, the SCM --init command will generate
> a cluster Id as well as an SCM Id and persist them locally. The same cluster
> Id and SCM Id will be shared with KSM during KSM initialization and with
> datanodes during datanode registration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-11-10 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247404#comment-16247404
 ] 

Mukul Kumar Singh commented on HDFS-12340:
--


1) #add_definitions(-DLIBOZONECLIENT_DLL_EXPORT) has been commented out in
the second CMakeLists.txt; should this be removed?

2) main.c:19, add a space between "ozone" and "client".
3) main.c:135, memory has already been allocated at line 118; that needs to
be de-allocated.
4) Comments are needed in main.c to explain the code flow.
5) main.c:142, memory is malloc'ed here without any check on the result.
6) main.c:144, there are commented-out lines of code.
7) This series of else-ifs can be replaced with switch cases and enums, and
then let's have small helper functions for each command type.



> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>  Labels: OzonePostMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, HDFS-12340-HDFS-7240.003.patch, main.C, 
> ozoneClient.C, ozoneClient.h
>
>
> This jira is introduced for the implementation of an ozone client in C/C++
> using the curl library.
> All these calls will make use of the HTTP protocol and will require libcurl.
> The libcurl API is referenced from here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12340) Ozone: C/C++ implementation of ozone client using curl

2017-11-10 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247404#comment-16247404
 ] 

Mukul Kumar Singh edited comment on HDFS-12340 at 11/10/17 12:10 PM:
-

Thanks for the updated patch, [~shashikant].

1) #add_definitions(-DLIBOZONECLIENT_DLL_EXPORT) has been commented out in
the second CMakeLists.txt; should this be removed?
2) main.c:19, add a space between "ozone" and "client".
3) main.c:135, memory has already been allocated at line 118; that needs to
be de-allocated.
4) Comments are needed in main.c to explain the code flow.
5) main.c:142, memory is malloc'ed here without any check on the result.
6) main.c:144, there are commented-out lines of code.
7) This series of else-ifs can be replaced with switch cases and enums, and
then let's have small helper functions for each command type.




was (Author: msingh):

1) #add_definitions(-DLIBOZONECLIENT_DLL_EXPORT) has been commented out in
the second CMakeLists.txt; should this be removed?

2) main.c:19, add a space between "ozone" and "client".
3) main.c:135, memory has already been allocated at line 118; that needs to
be de-allocated.
4) Comments are needed in main.c to explain the code flow.
5) main.c:142, memory is malloc'ed here without any check on the result.
6) main.c:144, there are commented-out lines of code.
7) This series of else-ifs can be replaced with switch cases and enums, and
then let's have small helper functions for each command type.



> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDFS-12340
> URL: https://issues.apache.org/jira/browse/HDFS-12340
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>  Labels: OzonePostMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, HDFS-12340-HDFS-7240.003.patch, main.C, 
> ozoneClient.C, ozoneClient.h
>
>
> This jira is introduced for the implementation of an ozone client in C/C++
> using the curl library.
> All these calls will make use of the HTTP protocol and will require libcurl.
> The libcurl API is referenced from here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-10 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12740:
---
Attachment: HDFS-12740-HDFS-7240.003.patch

Thanks [~msingh] for the review comments. Patch v3 addresses them.

> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12740-HDFS-7240.001.patch, 
> HDFS-12740-HDFS-7240.002.patch, HDFS-12740-HDFS-7240.003.patch
>
>
> When the ozone cluster is first created, the SCM --init command will generate
> a cluster Id as well as an SCM Id and persist them locally. The same cluster
> Id and SCM Id will be shared with KSM during KSM initialization and with
> datanodes during datanode registration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12794) Ozone: Parallelize ChunkOutputStream Writes to container

2017-11-10 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247382#comment-16247382
 ] 

Mukul Kumar Singh commented on HDFS-12794:
--

Thanks for the patch, Shashikant.

1) ChunkGroupOutputStream:75, this can be changed to an ArrayBlockingQueue.
2) ChunkGroupOutputStream:109, let's change this name to maxQueueSize, here
and at all the other places.
3) ChunkGroupOutputStream:290, this check should be moved before the start of
this if statement.
4) ChunkGroupOutputStream:417, the queue here cannot be null; any reason for
doing this?
5) ChunkOutputStream:64,65 are not being used; please remove them.
6) ChunkOutputStream:247 & 248 can be changed to
Preconditions.checkNotNull(queue.remove(reply.getTraceID())). Also, do we need
a return true here?
7) In thenApply, the response should be validated using
{{validateContainerResponse(response);}} as in
ContainerProtocolCalls#writeChunk. Also, can we keep the formatting as in the
writeChunk command? Would it also make sense for writeChunk to use
WriteChunkAsync with a get on the future?



> Ozone: Parallelize ChunkOutputStream Writes to container
> ---
>
> Key: HDFS-12794
> URL: https://issues.apache.org/jira/browse/HDFS-12794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12794-HDFS-7240.001.patch
>
>
> The ChunkOutputStream writes are synchronous in nature. Once one chunk of
> data is written, the next chunk write is blocked until the previous chunk
> has been written to the container.
> The ChunkOutputStream writes should be made async, and close on the
> OutputStream should ensure flushing of all dirty buffers to the container.
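
A minimal sketch of the async pattern discussed above, assuming a bounded
in-flight queue as suggested in the review (the class and method names here
are hypothetical, not the actual ChunkOutputStream code):

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Illustrative only: async chunk writes with a bounded in-flight queue. */
public class AsyncChunkWriter implements AutoCloseable {

  private static final int MAX_QUEUE_SIZE = 16;

  private final BlockingQueue<CompletableFuture<Void>> inFlight =
      new ArrayBlockingQueue<>(MAX_QUEUE_SIZE);
  private final ExecutorService executor = Executors.newFixedThreadPool(4);

  /** Stand-in for the real writeChunk call to the container. */
  private void writeChunkToContainer(byte[] chunk) {
    // network I/O to the container would happen here
  }

  /** Queue the chunk; blocks only when MAX_QUEUE_SIZE writes are in flight. */
  public void write(byte[] chunk) throws InterruptedException {
    CompletableFuture<Void> f =
        CompletableFuture.runAsync(() -> writeChunkToContainer(chunk), executor);
    inFlight.put(f);                       // applies back-pressure when full
    f.whenComplete((v, e) -> inFlight.remove(f));
  }

  /** Close must flush: wait for every outstanding write to finish. */
  @Override
  public void close() {
    for (CompletableFuture<Void> f : inFlight) {
      f.join();
    }
    executor.shutdown();
  }
}
{code}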



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247345#comment-16247345
 ] 

Nanda kumar edited comment on HDFS-12756 at 11/10/17 11:24 AM:
---

The patch no longer applies; uploaded v005 after a rebase.


was (Author: nandakumar131):
I will commit this shortly, and will take care of the whitespace and
checkstyle issues while committing.

> Ozone: Add datanodeID to heartbeat responses and container protocol
> ---
>
> Key: HDFS-12756
> URL: https://issues.apache.org/jira/browse/HDFS-12756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-12756-HDFS-7240.001.patch, 
> HDFS-12756-HDFS-7240.002.patch, HDFS-12756-HDFS-7240.003.patch, 
> HDFS-12756-HDFS-7240.004.patch, HDFS-12756-HDFS-7240.005.patch
>
>
> If we have the datanode ID in the heartbeat responses and the commands sent
> to a datanode, we will be able to do additional sanity checking on the
> datanode before executing the command. This is also very helpful for creating
> a MiniOzoneCluster with 1000s of simulated nodes, which is needed for
> scale-based unit tests of SCM.
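
A minimal sketch of the sanity check this enables, under the assumption that
each command now carries the target datanode's ID (the names below are
illustrative, not the actual protocol classes):

{code:java}
import java.io.IOException;

/** Illustrative only: reject commands addressed to a different datanode. */
public final class CommandSanityCheck {

  private CommandSanityCheck() {
  }

  /** Verify a command carrying a datanode ID is really meant for this node. */
  public static void verifyTarget(String localDatanodeUuid,
      String commandDatanodeUuid) throws IOException {
    if (!localDatanodeUuid.equals(commandDatanodeUuid)) {
      throw new IOException("Command addressed to datanode "
          + commandDatanodeUuid + " was received by " + localDatanodeUuid);
    }
  }
}
{code}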



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12756:
---
Attachment: HDFS-12756-HDFS-7240.005.patch

> Ozone: Add datanodeID to heartbeat responses and container protocol
> ---
>
> Key: HDFS-12756
> URL: https://issues.apache.org/jira/browse/HDFS-12756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-12756-HDFS-7240.001.patch, 
> HDFS-12756-HDFS-7240.002.patch, HDFS-12756-HDFS-7240.003.patch, 
> HDFS-12756-HDFS-7240.004.patch, HDFS-12756-HDFS-7240.005.patch
>
>
> If we have the datanode ID in the heartbeat responses and the commands sent
> to a datanode, we will be able to do additional sanity checking on the
> datanode before executing the command. This is also very helpful for creating
> a MiniOzoneCluster with 1000s of simulated nodes, which is needed for
> scale-based unit tests of SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-10 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247366#comment-16247366
 ] 

Mukul Kumar Singh commented on HDFS-12740:
--

Hi Shashikant, the updated patch looks really good; some really minor comments:

1) Can we generate a separate scm id in MiniOzoneCluster? Just to ensure that
both the cluster and scm id values are different in the run.
2) "Retreives" is misspelled in the comments; it should be "Retrieves".
3) Also, in the test, can we generate random UUIDs in place of fixed strings
like "testClusterId" & "testScmId"? See the sketch below.
4) There are checkstyle issues.
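
For point 3, something along these lines (a trivial, self-contained sketch):

{code:java}
import java.util.UUID;

public class RandomIds {
  public static void main(String[] args) {
    // Instead of fixed strings like "testClusterId" / "testScmId":
    String clusterId = UUID.randomUUID().toString();
    String scmId = UUID.randomUUID().toString();
    System.out.println(clusterId + " / " + scmId);
  }
}
{code}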


> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12740-HDFS-7240.001.patch, 
> HDFS-12740-HDFS-7240.002.patch
>
>
> When the ozone cluster is first created, the SCM --init command will generate
> a cluster Id as well as an SCM Id and persist them locally. The same cluster
> Id and SCM Id will be shared with KSM during KSM initialization and with
> datanodes during datanode registration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12787) Ozone: SCM: Aggregate the metrics from all the container reports

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247361#comment-16247361
 ] 

Hadoop QA commented on HDFS-12787:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 216 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Null passed for non-null parameter of 
com.google.common.cache.Cache.put(Object, Object) in 
org.apache.hadoop.ozone.scm.StorageContainerManager.updateContainerReportMetrics(StorageContainerDatanodeProtocolProtos$ContainerReportsRequestProto)
  Method invoked at StorageContainerManager.java:of 
com.google.common.cache.Cache.put(Object, Object) in 
org.apache.hadoop.ozone.scm.StorageContainerManager.updateContainerReportMetrics(StorageContainerDatanodeProtocolProtos$ContainerReportsRequestProto)
  Method invoked at StorageContainerManager.java:[line 997] |
| Unreaped Processes | hadoop-hdfs:5 |
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.client.impl.TestBlockReaderFactory |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | 

[jira] [Commented] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247345#comment-16247345
 ] 

Nanda kumar commented on HDFS-12756:


I will commit this shortly, and will take care of the whitespace and
checkstyle issues while committing.

> Ozone: Add datanodeID to heartbeat responses and container protocol
> ---
>
> Key: HDFS-12756
> URL: https://issues.apache.org/jira/browse/HDFS-12756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-12756-HDFS-7240.001.patch, 
> HDFS-12756-HDFS-7240.002.patch, HDFS-12756-HDFS-7240.003.patch, 
> HDFS-12756-HDFS-7240.004.patch
>
>
> If we have the datanode ID in the heartbeat responses and the commands sent
> to a datanode, we will be able to do additional sanity checking on the
> datanode before executing the command. This is also very helpful for creating
> a MiniOzoneCluster with 1000s of simulated nodes, which is needed for
> scale-based unit tests of SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12796) SCM should not start if Cluster Version file does not exist

2017-11-10 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12796:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> SCM should not start if Cluster Version file does not exist
> ---
>
> Key: HDFS-12796
> URL: https://issues.apache.org/jira/browse/HDFS-12796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12796-HDFS-7240.001.patch, 
> HDFS-12796-HDFS-7240.002.patch, HDFS-12796-HDFS-7240.003.patch
>
>
> We have the SCM --init command, which persists the cluster version info in
> the version file. If SCM gets started without SCM --init having been done
> even once, it should fail.
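
A minimal sketch of the fail-fast startup check this calls for; the file name
and location below are assumptions, not the actual SCM on-disk layout:

{code:java}
import java.io.File;
import java.io.IOException;

/** Illustrative only: fail fast when 'scm --init' has never been run. */
public final class ScmStartupCheck {

  private ScmStartupCheck() {
  }

  public static void checkVersionFile(File scmDir) throws IOException {
    File versionFile = new File(scmDir, "VERSION"); // hypothetical location
    if (!versionFile.isFile()) {
      throw new IOException("SCM is not initialized: " + versionFile
          + " not found. Run 'scm --init' first.");
    }
  }
}
{code}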



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12796) SCM should not start if Cluster Version file does not exist

2017-11-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247342#comment-16247342
 ] 

Nanda kumar commented on HDFS-12796:


I have committed it to the feature branch. Thanks for the contribution
[~shashikant].

> SCM should not start if Cluster Version file does not exist
> ---
>
> Key: HDFS-12796
> URL: https://issues.apache.org/jira/browse/HDFS-12796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12796-HDFS-7240.001.patch, 
> HDFS-12796-HDFS-7240.002.patch, HDFS-12796-HDFS-7240.003.patch
>
>
> We have the SCM --init command, which persists the cluster version info in
> the version file. If SCM gets started without SCM --init having been done
> even once, it should fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12796) SCM should not start if Cluster Version file does not exist

2017-11-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247334#comment-16247334
 ] 

Nanda kumar commented on HDFS-12796:


Test failures are not related; I will commit this shortly.

> SCM should not start if Cluster Version file does not exist
> ---
>
> Key: HDFS-12796
> URL: https://issues.apache.org/jira/browse/HDFS-12796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12796-HDFS-7240.001.patch, 
> HDFS-12796-HDFS-7240.002.patch, HDFS-12796-HDFS-7240.003.patch
>
>
> We have the SCM --init command, which persists the cluster version info in
> the version file. If SCM gets started without SCM --init having been done
> even once, it should fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12796) SCM should not start if Cluster Version file does not exist

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247320#comment-16247320
 ] 

Hadoop QA commented on HDFS-12796:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
22s{color} | {color:red} The patch generated 154 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:7 |
| Failed junit tests | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.fs.contract.hdfs.TestHDFSContractCreate |
|   | hadoop.fs.viewfs.TestViewFileSystemLinkFallback |
|   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.fs.permission.TestStickyBit |
|   | hadoop.hdfs.TestHFlush |
|   | hadoop.fs.TestResolveHdfsSymlink |
|   | hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash |
|   | hadoop.fs.viewfs.TestViewFsWithXAttrs |
|   | hadoop.fs.viewfs.TestViewFsWithAcls |
|   | hadoop.fs.TestSWebHdfsFileContextMainOperations |
|   | hadoop.fs.viewfs.TestViewFileSystemWithTruncate |
|   | hadoop.fs.contract.hdfs.TestHDFSContractSeek |
|   | hadoop.fs.viewfs.TestViewFileSystemWithAcls |
|   | hadoop.fs.viewfs.TestViewFsDefaultValue |
|   | hadoop.fs.TestUnbuffer |
|   | hadoop.fs.shell.TestHdfsTextCommand |
|   | hadoop.fs.viewfs.TestViewFsFileStatusHdfs |
|   | hadoop.fs.TestFcHdfsPermission |
|   | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.fs.viewfs.TestViewFsHdfs |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
|   | hadoop.fs.contract.hdfs.TestHDFSContractDelete |
|   | hadoop.fs.TestWebHdfsFileContextMainOperations |
|   | hadoop.fs.loadGenerator.TestLoadGenerator |
|   | hadoop.fs.TestGlobPaths |
|   | 

[jira] [Created] (HDFS-12798) Ozone: scm web: fix the node status table

2017-11-10 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12798:
---

 Summary: Ozone: scm web: fix the node status table
 Key: HDFS-12798
 URL: https://issues.apache.org/jira/browse/HDFS-12798
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


The JMX interface has been fixed in HDFS-12684 by removing duplicated
information. We need to update the web UI to use the right JMX bean and
display the node statuses from that bean.
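
For reference, a small sketch of fetching the bean the UI should read, via the
/jmx servlet's qry parameter (the bean name and port are the ones shown in
HDFS-12684; the code itself is only illustrative):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Illustrative only: fetch the SCMNodeManagerInfo bean as JSON. */
public class JmxNodeCountFetch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:9876/jmx"
        + "?qry=Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(new InputStreamReader(
        conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // JSON including the NodeCount table
      }
    }
  }
}
{code}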



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12684) Ozone: SCMMXBean NodeCount is overlapping with NodeManagerMXBean

2017-11-10 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247290#comment-16247290
 ] 

Elek, Marton commented on HDFS-12684:
-

Yes, thanks [~cheersyang], you are right. I missed something in my check; the
table has disappeared. I opened HDFS-12798 and will fix the UI soon.

> Ozone: SCMMXBean NodeCount is overlapping with NodeManagerMXBean
> 
>
> Key: HDFS-12684
> URL: https://issues.apache.org/jira/browse/HDFS-12684
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12684-HDFS-7240.001.patch
>
>
> I found this issue while reviewing HDFS-11468. From http://scm_host:9876/jmx,
> both SCM and SCMNodeManager have {{NodeCount}} metrics:
> {noformat}
>  {
> "name" : 
> "Hadoop:service=StorageContainerManager,name=StorageContainerManagerInfo,component=ServerRuntime",
> "modelerType" : "org.apache.hadoop.ozone.scm.StorageContainerManager",
> "ClientRpcPort" : "9860",
> "DatanodeRpcPort" : "9861",
> "NodeCount" : [ {
>   "key" : "STALE",
>   "value" : 0
> }, {
>   "key" : "DECOMMISSIONING",
>   "value" : 0
> }, {
>   "key" : "DECOMMISSIONED",
>   "value" : 0
> }, {
>   "key" : "FREE_NODE",
>   "value" : 0
> }, {
>   "key" : "RAFT_MEMBER",
>   "value" : 0
> }, {
>   "key" : "HEALTHY",
>   "value" : 0
> }, {
>   "key" : "DEAD",
>   "value" : 0
> }, {
>   "key" : "UNKNOWN",
>   "value" : 0
> } ],
> "CompileInfo" : "2017-10-17T06:47Z xxx",
> "Version" : "3.1.0-SNAPSHOT, r6019a25908ce75155656f13effd8e2e53ed43461",
> "SoftwareVersion" : "3.1.0-SNAPSHOT",
> "StartedTimeInMillis" : 1508393551065
>   }, {
> "name" : "Hadoop:service=SCMNodeManager,name=SCMNodeManagerInfo",
> "modelerType" : "org.apache.hadoop.ozone.scm.node.SCMNodeManager",
> "NodeCount" : [ {
>   "key" : "STALE",
>   "value" : 0
> }, {
>   "key" : "DECOMMISSIONING",
>   "value" : 0
> }, {
>   "key" : "DECOMMISSIONED",
>   "value" : 0
> }, {
>   "key" : "FREE_NODE",
>   "value" : 0
> }, {
>   "key" : "RAFT_MEMBER",
>   "value" : 0
> }, {
>   "key" : "HEALTHY",
>   "value" : 0
> }, {
>   "key" : "DEAD",
>   "value" : 0
> }, {
>   "key" : "UNKNOWN",
>   "value" : 0
> } ],
> "OutOfChillMode" : false,
> "MinimumChillModeNodes" : 1,
> "ChillModeStatus" : "Still in chill mode, waiting on nodes to report in. 
> 0 nodes reported, minimal 1 nodes required."
>   }
> {noformat}
> Hence, I propose to remove {{NodeCount}} from {{SCMMXBean}}.
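
In MXBean terms the proposal amounts to something like the following; these
interfaces are illustrative trimmings, not the actual source:

{code:java}
import java.util.Map;

/**
 * Illustrative only: after the change, node counts are exposed solely by the
 * node manager bean.
 */
public interface SCMNodeManagerMXBean {
  /** Node state name (HEALTHY, STALE, DEAD, ...) mapped to its count. */
  Map<String, Integer> getNodeCount();
}

/** The SCM bean keeps its other attributes but drops the duplicate. */
interface SCMMXBean {
  String getClientRpcPort();
  String getDatanodeRpcPort();
  // getNodeCount() removed; read it from SCMNodeManagerMXBean instead.
}
{code}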



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12796) SCM should not start if Cluster Version file does not exist

2017-11-10 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247289#comment-16247289
 ] 

Nanda kumar commented on HDFS-12796:


+1, LGTM. Pending jenkins.

> SCM should not start if Cluster Version file does not exist
> ---
>
> Key: HDFS-12796
> URL: https://issues.apache.org/jira/browse/HDFS-12796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12796-HDFS-7240.001.patch, 
> HDFS-12796-HDFS-7240.002.patch, HDFS-12796-HDFS-7240.003.patch
>
>
> We have the SCM --init command, which persists the cluster version info in
> the version file. If SCM gets started without SCM --init having been done
> even once, it should fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12797) Add Test for NFS mount of not supported filesystems like (file:///)

2017-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247255#comment-16247255
 ] 

Hudson commented on HDFS-12797:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13219 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13219/])
HDFS-12797. Add Test for NFS mount of not supported filesystems like (jitendra: 
rev 8a1bd9a4f4b8864aa560094a53d43ef732d378e5)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestExportsTable.java


> Add Test for NFS mount of not supported filesystems like (file:///)
> ---
>
> Key: HDFS-12797
> URL: https://issues.apache.org/jira/browse/HDFS-12797
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HDFS-12797.001.patch
>
>
> This jira is to fix a review comment in HDFS-11575:
> "Add a test to start NFS service with viewfs over a non-hdfs file system. It
> is ok to add it in a follow-up jira."
> This patch adds 2 tests:
> 1) test mount of the viewfs root; this isn't allowed right now, as the
> viewfs root is a client-side mount only.
> 2) test mount of a filesystem like "file"; this isn't supported right now
> either.
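
A rough sketch of the shape of those two checks; the canExport helper below is
hypothetical and stands in for the real export logic in hadoop-hdfs-nfs:

{code:java}
import static org.junit.Assert.assertFalse;

import org.junit.Test;

/** Illustrative only: the two unsupported-mount cases named above. */
public class TestUnsupportedMounts {

  /** Hypothetical stand-in for the real NFS export check. */
  private boolean canExport(String fsUri) {
    return fsUri.startsWith("hdfs://"); // only HDFS-backed exports allowed
  }

  @Test
  public void testViewfsRootMountRejected() {
    // viewfs root is a client-side mount table, not a mountable filesystem
    assertFalse(canExport("viewfs:///"));
  }

  @Test
  public void testLocalFileSystemMountRejected() {
    assertFalse(canExport("file:///"));
  }
}
{code}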



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12796) SCM should not start if Cluster Version file does not exist

2017-11-10 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12796:
---
Attachment: HDFS-12796-HDFS-7240.003.patch

Thanks [~nandakumar131] for the review comments.

Patch v3 addresses them.

> SCM should not start if Cluster Version file does not exist
> ---
>
> Key: HDFS-12796
> URL: https://issues.apache.org/jira/browse/HDFS-12796
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12796-HDFS-7240.001.patch, 
> HDFS-12796-HDFS-7240.002.patch, HDFS-12796-HDFS-7240.003.patch
>
>
> We have the SCM --init command, which persists the cluster version info in
> the version file. If SCM gets started without SCM --init having been done
> even once, it should fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247155#comment-16247155
 ] 

Hadoop QA commented on HDFS-10285:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 13s{color} | {color:orange} hadoop-hdfs-project: The patch generated 18 new 
+ 2024 unchanged - 2 fixed = 2042 total (was 2026) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.fs.TestUnbuffer |
|   |