[jira] [Updated] (HDFS-14139) FsShell ls and stat command return different Modification Time on display.

2018-12-16 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14139:

Attachment: HDFS-14139-01.patch

> FsShell ls and stat command return different Modification Time on display.
> --
>
> Key: HDFS-14139
> URL: https://issues.apache.org/jira/browse/HDFS-14139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, shell
>Reporter: Fred Peng
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: easyfix
> Attachments: HDFS-14139-01.patch
>
>
> When we run "hdfs dfs -ls" or "hdfs dfs -stat" on the same file/directory, 
> the times in the results are different.
> Like this:
> >> $ ./hdfs dfs -stat /user/xxx/collie-pt-canary
> >> 2018-12-10 10:04:57
> >> ./hdfs dfs -ls /user/xxx/collie-pt-canary
> >> -rw-r--r-- 3 xxx supergroup 0 2018-12-10 18:04
> Strangely, we found the times differ (by 8 hours). The stat command uses the 
> UTC timezone, but the ls command uses the system local timezone.
> Why does the stat command use the UTC timezone, but ls does not?
> {code:java}
> // in Stat.java
> timeFmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
> timeFmt.setTimeZone(TimeZone.getTimeZone("UTC"));{code}
> By the way, in Unix/Linux, ls and stat display the same time.
> Should we unify the timezone?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-102) SCM CA: SCM CA server signs certificate for approved CSR

2018-12-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722415#comment-16722415
 ] 

Hadoop QA commented on HDDS-102:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
14s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} HDDS-4 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 30s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
12s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.om.TestOmMetrics |
|   | hadoop.ozone.container.metrics.TestContainerMetrics |
|   | hadoop.ozone.container.ozoneimpl.TestOzoneContainerWithTLS |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951924/HDDS-102-HDDS-4.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 1ca1ff563018 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | HDDS-4 / 614bcda |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1937/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1937/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1937/testReport/ |
| Max. process+thread count | 1270 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1937/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SCM CA: SCM CA server signs certificate for approved CSR
> 
>
> Key: 

[jira] [Commented] (HDFS-14139) FsShell ls and stat command return different Modification Time on display.

2018-12-16 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722432#comment-16722432
 ] 

Ayush Saxena commented on HDFS-14139:
-

Thanks [~pengmq1] for putting this up. I verified the scenario on the Linux end; 
it is in line with what you mentioned.

IMO too, it's better to be consistent in both places.

Have uploaded v1 with the fix.
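
For illustration, here is a minimal sketch of the direction such a fix could 
take, assuming it simply drops the explicit UTC override so that stat formats 
in the system default timezone like ls does (a sketch only, not the attached 
patch):
{code:java}
// Hypothetical sketch only, not HDFS-14139-01.patch: without an explicit
// setTimeZone(TimeZone.getTimeZone("UTC")) call, SimpleDateFormat uses the
// JVM's default (local) timezone, so stat output would match ls output.
import java.text.SimpleDateFormat;
import java.util.Date;

public class LocalTimeStat {
  public static void main(String[] args) {
    SimpleDateFormat timeFmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    // 1544436297000L is 2018-12-10 10:04:57 UTC, printed here in local time.
    System.out.println(timeFmt.format(new Date(1544436297000L)));
  }
}
{code}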

> FsShell ls and stat command return different Modification Time on display.
> --
>
> Key: HDFS-14139
> URL: https://issues.apache.org/jira/browse/HDFS-14139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, shell
>Reporter: Fred Peng
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: easyfix
> Attachments: HDFS-14139-01.patch
>
>
> When we run "hdfs dfs -ls" or "hdfs dfs -stat" on the same file/directory, 
> the times in the results are different.
> Like this:
> >> $ ./hdfs dfs -stat /user/xxx/collie-pt-canary
> >> 2018-12-10 10:04:57
> >> ./hdfs dfs -ls /user/xxx/collie-pt-canary
> >> -rw-r--r-- 3 xxx supergroup 0 2018-12-10 18:04
> Strangely, we found the times differ (by 8 hours). The stat command uses the 
> UTC timezone, but the ls command uses the system local timezone.
> Why does the stat command use the UTC timezone, but ls does not?
> {code:java}
> // in Stat.java
> timeFmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
> timeFmt.setTimeZone(TimeZone.getTimeZone("UTC"));{code}
> By the way, in Unix/Linux, ls and stat display the same time.
> Should we unify the timezone?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14139) FsShell ls and stat command return different Modification Time on display.

2018-12-16 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14139:

Status: Patch Available  (was: Open)

> FsShell ls and stat command return different Modification Time on display.
> --
>
> Key: HDFS-14139
> URL: https://issues.apache.org/jira/browse/HDFS-14139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, shell
>Reporter: Fred Peng
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: easyfix
> Attachments: HDFS-14139-01.patch
>
>
> When we run "hdfs dfs -ls" or "hdfs dfs -stat" on the same file/directory, 
> the times in the results are different.
> Like this:
> >> $ ./hdfs dfs -stat /user/xxx/collie-pt-canary
> >> 2018-12-10 10:04:57
> >> ./hdfs dfs -ls /user/xxx/collie-pt-canary
> >> -rw-r--r-- 3 xxx supergroup 0 2018-12-10 18:04
> Strangely, we found the times differ (by 8 hours). The stat command uses the 
> UTC timezone, but the ls command uses the system local timezone.
> Why does the stat command use the UTC timezone, but ls does not?
> {code:java}
> // in Stat.java
> timeFmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
> timeFmt.setTimeZone(TimeZone.getTimeZone("UTC"));{code}
> By the way, in Unix/Linux, ls and stat display the same time.
> Should we unify the timezone?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14139) FsShell ls and stat command return different Modification Time on display.

2018-12-16 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14139:

Attachment: HDFS-14139-02.patch

> FsShell ls and stat command return different Modification Time on display.
> --
>
> Key: HDFS-14139
> URL: https://issues.apache.org/jira/browse/HDFS-14139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, shell
>Reporter: Fred Peng
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: easyfix
> Attachments: HDFS-14139-01.patch, HDFS-14139-02.patch
>
>
> When we run "hdfs dfs -ls" or "hdfs dfs -stat" on the same file/directory, 
> the times in the results are different.
> Like this:
> >> $ ./hdfs dfs -stat /user/xxx/collie-pt-canary
> >> 2018-12-10 10:04:57
> >> ./hdfs dfs -ls /user/xxx/collie-pt-canary
> >> -rw-r--r-- 3 xxx supergroup 0 2018-12-10 18:04
> Strangely, we found the times differ (by 8 hours). The stat command uses the 
> UTC timezone, but the ls command uses the system local timezone.
> Why does the stat command use the UTC timezone, but ls does not?
> {code:java}
> // in Stat.java
> timeFmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
> timeFmt.setTimeZone(TimeZone.getTimeZone("UTC"));{code}
> By the way, in Unix/Linux, ls and stat display the same time.
> Should we unify the timezone?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14059) Test reads from standby on a secure cluster with Configured failover

2018-12-16 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722548#comment-16722548
 ] 

Brahma Reddy Battula commented on HDFS-14059:
-

Sorry for pitching in late here; I would like to get some insights.

bq.So from my end things look good as far as ConfiguredFailoverProxy is 
concerned. I never experienced anything concerning around DelegationTokens and 
could confirm seeing them being generated and used by jobs.

[~zero45], do you have perf numbers with ORP and CFP in this secure setup? And 
what value did you configure for *dfs.ha.tail-edits.period*?
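
For context, *dfs.ha.tail-edits.period* is an ordinary hdfs-site.xml 
time-duration setting (the default is 60 seconds). A hedged sketch of reading 
it programmatically, assuming hadoop-common is on the classpath:
{code:java}
// Illustrative only: inspect the edits-tailing interval that governs how
// stale a Standby/Observer NameNode may be relative to the Active.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TailEditsPeriod {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    long periodMs = conf.getTimeDuration(
        "dfs.ha.tail-edits.period", 60_000L, TimeUnit.MILLISECONDS);
    System.out.println("dfs.ha.tail-edits.period = " + periodMs + " ms");
  }
}
{code}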

> Test reads from standby on a secure cluster with Configured failover
> 
>
> Key: HDFS-14059
> URL: https://issues.apache.org/jira/browse/HDFS-14059
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
>
> Run standard HDFS tests to verify reading from ObserverNode on a secure HA 
> cluster with {{ConfiguredFailoverProxyProvider}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12943) Consistent Reads from Standby Node

2018-12-16 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722550#comment-16722550
 ] 

Brahma Reddy Battula commented on HDFS-12943:
-

{quote}I think when we discuss a "request", we need to differentiate an RPC 
request originating from a Java application (MapReduce task, etc.) vs. a CLI 
request. The former will be the vast majority of operations on a typical 
cluster, so I would argue that optimizing for the performance and efficiency of 
that usage is much more important.
{quote}
Agreed, I could have mentioned CLI. But the getHAServiceState() call from ORP 
took 2s+, as I mentioned above. By the way, my intent was: when reads and writes 
are combined in a single application, how much will the impact be, given that it 
needs to switch?

Just out of curiosity, do we have write benchmarks with and without ORP? I 
didn't find any in HDFS-14058 and HDFS-14059.
{quote}1.Are you running with HDFS-13873? With this patch (only committed 
yesterday so I doubt you have it) the exception thrown should be more 
meaningful.
{quote}
Yes, with the latest HDFS-12943 branch.
{quote}2.Did you remember to enable in-progress edit log tailing?
{quote}
Yes, enabled for all three NNs.
{quote}3.Was this run on an almost completely stagnant cluster (no other 
writes)? This can make the ANN flush its edits to the JNs less frequently, 
increasing the lag time between ANN and Observer.
{quote}
Yes, no other writes.

 
Tried the following test with and without ORP, and came to know that the perf 
impact depends on the edits-tailing setting (*dfs.ha.tail-edits.period*), which 
defaults to 1m (in the tests, it is 100ms):
{code:java}
@Test
public void testSimpleRead() throws Exception {
  long avg = 0;   // cumulative mkdirs latency (ms)
  long avgL = 0;  // cumulative getFileStatus latency (ms)
  long avgC = 0;  // cumulative getContentSummary latency (ms)
  int num = 100;
  for (int i = 0; i < num; i++) {
    Path testPath1 = new Path(testPath, "test1" + i);
    long startTime = System.currentTimeMillis();
    assertTrue(dfs.mkdirs(testPath1, FsPermission.getDefault()));
    long l = System.currentTimeMillis() - startTime;
    System.out.println("time TakenL1: " + i + " : " + l);
    avg = avg + l;
    assertSentTo(0);  // the write should go to NN 0 (the Active)
    long startTime2 = System.currentTimeMillis();
    dfs.getContentSummary(testPath1);
    long C = System.currentTimeMillis() - startTime2;
    System.out.println("time TakengetContentSummary: " + i + " : " + C);
    avgC = avgC + C;
    assertSentTo(2);  // the read should go to NN 2 (the Observer)
    long startTime1 = System.currentTimeMillis();
    dfs.getFileStatus(testPath1);
    long L = System.currentTimeMillis() - startTime1;
    System.out.println("time TakengetFileStatus: " + i + " : " + L);
    avgL = avgL + L;
    assertSentTo(2);
  }
  System.out.println("AVG: mkDir: " + avg / num + " List: " + avgL / num
      + " Cont: " + avgC / num);
}{code}
Apart from the perf, I have the following queries:
 i) Did we try with a C/C++ client?
 ii) Are we planning separate metrics for observer reads (client side)? 
Applications like MapReduce might find them helpful for job counters.

 

> Consistent Reads from Standby Node
> --
>
> Key: HDFS-12943
> URL: https://issues.apache.org/jira/browse/HDFS-12943
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: ConsistentReadsFromStandbyNode.pdf, 
> ConsistentReadsFromStandbyNode.pdf, HDFS-12943-001.patch, 
> HDFS-12943-002.patch, TestPlan-ConsistentReadsFromStandbyNode.pdf
>
>
> StandbyNode in HDFS is a replica of the active NameNode. The states of the 
> NameNodes are coordinated via the journal. It is natural to consider 
> StandbyNode as a read-only replica. As with any replicated distributed system 
> the problem of stale reads should be resolved. Our main goal is to provide 
> reads from standby in a consistent way in order to enable a wide range of 
> existing applications running on top of HDFS.
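
As a hedged illustration of how a client would opt into such reads (the 
proxy-provider class name follows the ObserverReadProxyProvider discussed in 
the comments here; the nameservice ID "ns1" is a placeholder):
{code:java}
// Illustrative sketch only: point the HDFS client's failover proxy provider
// at the observer-aware implementation so reads can be served by a Standby
// acting as an Observer. "ns1" is a hypothetical nameservice ID.
import org.apache.hadoop.conf.Configuration;

public class ObserverReadConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.client.failover.proxy.provider.ns1",
        "org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider");
    System.out.println(conf.get("dfs.client.failover.proxy.provider.ns1"));
  }
}
{code}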



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14139) FsShell ls and stat command return different Modification Time on display.

2018-12-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722470#comment-16722470
 ] 

Hadoop QA commented on HDFS-14139:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 54 unchanged - 2 fixed = 57 total (was 56) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestSSLFactory |
|   | hadoop.cli.TestCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14139 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951934/HDFS-14139-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0e2a85d9b28f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 04c0347 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25815/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25815/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  

[jira] [Commented] (HDFS-14139) FsShell ls and stat command return different Modification Time on display.

2018-12-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722499#comment-16722499
 ] 

Hadoop QA commented on HDFS-14139:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 54 unchanged - 2 fixed = 56 total (was 56) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  6s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestDiskCheckerWithDiskIo |
|   | hadoop.util.TestReadWriteDiskValidator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14139 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951939/HDFS-14139-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 273cef5b560f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 04c0347 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25816/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Commented] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2018-12-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722704#comment-16722704
 ] 

Hadoop QA commented on HDFS-13762:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 13s{color} | {color:orange} root: The patch generated 3 new + 785 unchanged 
- 3 fixed = 788 total (was 788) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project-dist . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 10s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}243m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Updated] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-16 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14151:

Status: Patch Available  (was: Open)

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-16 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14151:

Attachment: HDFS-14151.1.patch

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14096) [SPS] : Add Support for Storage Policy Satisfier in ViewFs

2018-12-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722739#comment-16722739
 ] 

Hudson commented on HDFS-14096:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15618 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15618/])
HDFS-14096. [SPS] : Add Support for Storage Policy Satisfier in ViewFs. 
(surendralilhore: rev 788e7473a404fa074b3af522416ee3d2fae865a0)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/fs/Hdfs.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
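
As a hedged usage sketch of what the change in the file list above enables (it 
assumes the FileSystem#satisfyStoragePolicy(Path) API from the SPS work and a 
configured viewfs mount table; it is not an excerpt from the committed patch):
{code:java}
// Hypothetical usage: trigger the Storage Policy Satisfier for a file that is
// reached through a viewfs:// mount point. With this change, ViewFileSystem
// resolves the mount entry and forwards the call to the underlying HDFS.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SpsViaViewFs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumes fs.defaultFS is a viewfs mount table whose /data entry points
    // at an HDFS namespace with SPS enabled; both names are placeholders.
    FileSystem viewFs = FileSystem.get(conf);
    viewFs.satisfyStoragePolicy(new Path("/data/warm/part-00000"));
  }
}
{code}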


> [SPS] : Add Support for Storage Policy Satisfier in ViewFs
> --
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14096-01.patch, HDFS-14096-02.patch, 
> HDFS-14096-03.patch, HDFS-14096-04.patch
>
>
> Add support for SPS in ViewFileSystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13111) Close recovery may incorrectly mark blocks corrupt

2018-12-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722727#comment-16722727
 ] 

Wei-Chiu Chuang commented on HDFS-13111:


I suspect HDFS-10240, which was fixed recently; it sounds similar based on the 
description.

> Close recovery may incorrectly mark blocks corrupt
> --
>
> Key: HDFS-13111
> URL: https://issues.apache.org/jira/browse/HDFS-13111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Critical
>
> Close recovery can leave a block marked corrupt until the next FBR arrives 
> from one of the DNs. The reason is unclear, but this has happened multiple 
> times when a DN has I/O-saturated disks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14096) [SPS] : Add Support for Storage Policy Satisfier in ViewFs

2018-12-16 Thread Surendra Singh Lilhore (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14096:
--
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> [SPS] : Add Support for Storage Policy Satisfier in ViewFs
> --
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14096-01.patch, HDFS-14096-02.patch, 
> HDFS-14096-03.patch, HDFS-14096-04.patch
>
>
> Add support for SPS in ViewFileSystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-16 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722737#comment-16722737
 ] 

Takanobu Asanuma commented on HDFS-14151:
-

Thanks for the comment, [~elgoiri]. Uploaded the 1st patch, implementing the 
first option.

I think glyphicon-lock matches read-only well, and green is often used for 
enabled.

> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14151) RBF: Make the read-only column of Mount Table clearly understandable

2018-12-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722762#comment-16722762
 ] 

Hadoop QA commented on HDFS-14151:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951994/HDFS-14151.1.patch |
| Optional Tests |  dupname  asflicense  shadedclient  |
| uname | Linux f1e99cb23262 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 788e747 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 443 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25818/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Make the read-only column of Mount Table clearly understandable
> 
>
> Key: HDFS-14151
> URL: https://issues.apache.org/jira/browse/HDFS-14151
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14151.1.patch, mount_table_before.png, 
> read_only_a.png, read_only_b.png
>
>
> The read-only column of Mount Table is a little confusing now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-539) ozone datanode ignores the invalid options

2018-12-16 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722686#comment-16722686
 ] 

Shashikant Banerjee commented on HDDS-539:
--

Thanks [~vmurakami] for updating the patch. Some tests are failing with the 
following error:
{code:java}
[ERROR] testDNstartAfterSCM(org.apache.hadoop.ozone.TestMiniOzoneCluster)  Time 
elapsed: 5.8 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.ozone.HddsDatanodeService.createHddsDatanodeService(HddsDatanodeService.java:106)
at 
org.apache.hadoop.ozone.HddsDatanodeService.createHddsDatanodeService(HddsDatanodeService.java:90)
at 
org.apache.hadoop.ozone.MiniOzoneClusterImpl.restartHddsDatanode(MiniOzoneClusterImpl.java:272)
at 
org.apache.hadoop.ozone.TestMiniOzoneCluster.testDNstartAfterSCM(TestMiniOzoneCluster.java:239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){code}
It seems to be related to the patch. Can you please check?

> ozone datanode ignores the invalid options
> --
>
> Key: HDDS-539
> URL: https://issues.apache.org/jira/browse/HDDS-539
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Vinicius Higa Murakami
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-539.003.patch, HDDS-539.004.patch, 
> HDDS-539.005.patch, HDDS-539.006.patch, HDDS-539.007.patch, 
> HDDS-539.008.patch, HDDS-539.009.patch, HDDS-539.patch
>
>
> The ozone datanode command starts the datanode and ignores any invalid 
> option, apart from help:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -help
> Starts HDDS Datanode
> {code}
> For all other invalid options, it just ignores them and starts the DN, as 
> shown below:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -ABC
> 2018-09-22 00:59:34,462 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting HddsDatanodeService
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-481027-01-02.hwx.site/172.27.54.20
> STARTUP_MSG: args = [-ABC]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDFS-14139) FsShell ls and stat command return different Modification Time on display.

2018-12-16 Thread Fred Peng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722741#comment-16722741
 ] 

Fred Peng commented on HDFS-14139:
--

Thanks [~ayushtkn] for the patch. (y) I agree it's better to be consistent.

I tried to find the design reason for this behavior, but it is still unknown. Or 
is there just no reason?

What do you think?

> FsShell ls and stat command return different Modification Time on display.
> --
>
> Key: HDFS-14139
> URL: https://issues.apache.org/jira/browse/HDFS-14139
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs, shell
>Reporter: Fred Peng
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: easyfix
> Attachments: HDFS-14139-01.patch, HDFS-14139-02.patch
>
>
> When we run "hdfs dfs -ls" or "hdfs dfs -stat" on the same file/directory, 
> the times in the results are different.
> Like this:
> >> $ ./hdfs dfs -stat /user/xxx/collie-pt-canary
> >> 2018-12-10 10:04:57
> >> ./hdfs dfs -ls /user/xxx/collie-pt-canary
> >> -rw-r--r-- 3 xxx supergroup 0 2018-12-10 18:04
> Strangely, we found the times differ (by 8 hours). The stat command uses the 
> UTC timezone, but the ls command uses the system local timezone.
> Why does the stat command use the UTC timezone, but ls does not?
> {code:java}
> // in Stat.java
> timeFmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
> timeFmt.setTimeZone(TimeZone.getTimeZone("UTC"));{code}
> By the way, in Unix/Linux, ls and stat display the same time.
> Should we unify the timezone?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14096) [SPS] : Add Support for Storage Policy Satisfier in ViewFs

2018-12-16 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722733#comment-16722733
 ] 

Surendra Singh Lilhore commented on HDFS-14096:
---

Thanks [~ayushtkn] for contribution.
Thanks [~ljain] for review.

> [SPS] : Add Support for Storage Policy Satisfier in ViewFs
> --
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14096-01.patch, HDFS-14096-02.patch, 
> HDFS-14096-03.patch, HDFS-14096-04.patch
>
>
> Add support for SPS in ViewFileSystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14096) [SPS] : Add Support for Storage Policy Satisfier in ViewFs

2018-12-16 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722688#comment-16722688
 ] 

Surendra Singh Lilhore commented on HDFS-14096:
---

Committing this shortly.

> [SPS] : Add Support for Storage Policy Satisfier in ViewFs
> --
>
> Key: HDFS-14096
> URL: https://issues.apache.org/jira/browse/HDFS-14096
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14096-01.patch, HDFS-14096-02.patch, 
> HDFS-14096-03.patch, HDFS-14096-04.patch
>
>
> Add support for SPS in ViewFileSystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-932) Add blockade Tests for Network partition

2018-12-16 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-932:
---

 Summary: Add blockade Tests for Network partition
 Key: HDDS-932
 URL: https://issues.apache.org/jira/browse/HDDS-932
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.4.0
Reporter: Nilotpal Nandi
Assignee: Nilotpal Nandi
 Fix For: 0.4.0


Blockade tests need to be added pertaining to network partition.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-851) Provide official apache docker image for Ozone

2018-12-16 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722634#comment-16722634
 ] 

Bharat Viswanadham commented on HDDS-851:
-

Thank you [~elek] for the fix.

I have verified the web UI, but when I try running some ozone commands I get 
the following error (shown after the script question below).

I also don't see any logs in the /opt/hadoop/logs directory to check.

And another question: why do we do this?
{code:java}
if [ ! -d "$DIR/build/apache-rat-0.12" ]; then
  wget "https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=creadur/apache-rat-0.12/apache-rat-0.12-bin.tar.gz" -O "$DIR/build/apache-rat.tar.gz"
  cd $DIR/build
  tar zvxf apache-rat.tar.gz
  cd -
fi
java -jar $DIR/build/apache-rat-0.12/apache-rat-0.12.jar $DIR -e .dockerignore -e public -e apache-rat-0.12 -e .git -e .gitignore{code}




hadoop@bc20ba918f6e:~$ ozone sh volume create /vol1

 
{code:java}
2018-12-17 01:44:23 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
2018-12-17 01:44:24 ERROR OzoneClientFactory:294 - Couldn't create protocol 
class org.apache.hadoop.ozone.client.rpc.RpcClient exception: 
java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
 at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:108)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:71)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:41)
 at picocli.CommandLine.execute(CommandLine.java:919)
 at picocli.CommandLine.access$700(CommandLine.java:104)
 at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
 at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
 at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
 at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
 at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
 at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
 at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
 at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:80)
Caused by: java.io.IOException: Getting service list failed, error: 
INTERNAL_ERROR
 at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:777)
 at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:155)
 at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:127)
 ... 19 more
Getting service list failed, error: INTERNAL_ERROR
{code}
 

> Provide official apache docker image for Ozone
> --
>
> Key: HDDS-851
> URL: https://issues.apache.org/jira/browse/HDDS-851
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: docker-ozone-latest.tar.gz, ozonedocker.png
>
>
> Similar to the apache/hadoop:2 and apache/hadoop:3 images, I propose to 
> provide apache/ozone docker images which include the voted release binaries.
> The image can follow all the conventions from HADOOP-14898.
> 1. BRANCHING
> I propose to create new docker branches:
> docker-ozone-0.3.0-alpha
> docker-ozone-latest
> And ask INFRA to register docker-ozone-(.*) in the dockerhub to create 
> apache/ozone: images
> 2. RUNNING
> I propose to create a default runner script which starts om + scm + datanode 
> + s3g all together. With this approach you can start a full ozone cluster as 
> easy as
> {code}
> docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
> {code}
> That's all. This is an all-in-one docker image which is ready to try out.
> 3. RUNNING with compose
> I propose to include a default docker-compose + config file in the image. To 
> start a multi-node pseudo cluster it will be enough to execute:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> docker run apache/ozone cat docker-config > docker-config
> docker-compose up -d
> {code}
> That's all, and you have a multi-(pseudo)node ozone cluster which could be 
> scaled up and down with ozone.
> 4. k8s
> Later we can also provide k8s resource files 

[jira] [Commented] (HDFS-13869) RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics()

2018-12-16 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722672#comment-16722672
 ] 

Yiqun Lin commented on HDFS-13869:
--

Committed to HDFS-13891 branch.
Thanks [~RANith] for the contribution and thanks others for the reviews!

> RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics()
> --
>
> Key: HDFS-13869
> URL: https://issues.apache.org/jira/browse/HDFS-13869
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-13869-002.diff, HDFS-13869-003.diff, 
> HDFS-13869-004.patch, HDFS-13869-005.patch, HDFS-13869-006.patch, 
> HDFS-13869-007.patch, HDFS-13869-HDFS-13891.009.patch, 
> HDFS-13869-HDFS-13891.010.patch, HDFS-13869-HDFS-13891.011.patch, 
> HDFS-13869.patch, HDFS-13891-HDFS-13869-008.patch
>
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13869) RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics

2018-12-16 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13869:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

> RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics
> 
>
> Key: HDFS-13869
> URL: https://issues.apache.org/jira/browse/HDFS-13869
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-13869-002.diff, HDFS-13869-003.diff, 
> HDFS-13869-004.patch, HDFS-13869-005.patch, HDFS-13869-006.patch, 
> HDFS-13869-007.patch, HDFS-13869-HDFS-13891.009.patch, 
> HDFS-13869-HDFS-13891.010.patch, HDFS-13869-HDFS-13891.011.patch, 
> HDFS-13869.patch, HDFS-13891-HDFS-13869-008.patch
>
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-851) Provide official apache docker image for Ozone

2018-12-16 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722634#comment-16722634
 ] 

Bharat Viswanadham edited comment on HDDS-851 at 12/17/18 1:47 AM:
---

Thank You, [~elek] for the contribution.

I have verified the web UI, but when I try running some ozone commands I get 
the following error.

I also don't see any logs in the /opt/hadoop/logs directory to check.

 

And another question, why do we do this?
{code:java}
if [ ! -d "$DIR/build/apache-rat-0.12" ]; then
 wget 
"https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=creadur/apache-rat-0.12/apache-rat-0.12-bin.tar.gz" 
 -O "$DIR/build/apache-rat.tar.gz"
 cd $DIR/build
 tar zvxf apache-rat.tar.gz
 cd -
fi
java -jar $DIR/build/apache-rat-0.12/apache-rat-0.12.jar $DIR -e .dockerignore 
-e public -e apache-rat-0.12 -e .git -e .gitignore{code}
hadoop@bc20ba918f6e:~$ ozone sh volume create /vol1

 
{code:java}
2018-12-17 01:44:23 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
2018-12-17 01:44:24 ERROR OzoneClientFactory:294 - Couldn't create protocol 
class org.apache.hadoop.ozone.client.rpc.RpcClient exception: 
java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
 at org.apache.hadoop.ozone.web.ozShell.Handler.verifyURI(Handler.java:108)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:71)
 at 
org.apache.hadoop.ozone.web.ozShell.volume.CreateVolumeHandler.call(CreateVolumeHandler.java:41)
 at picocli.CommandLine.execute(CommandLine.java:919)
 at picocli.CommandLine.access$700(CommandLine.java:104)
 at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
 at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
 at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
 at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
 at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
 at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
 at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
 at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:80)
Caused by: java.io.IOException: Getting service list failed, error: 
INTERNAL_ERROR
 at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:777)
 at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:155)
 at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:127)
 ... 19 more
Getting service list failed, error: INTERNAL_ERROR
{code}
 


was (Author: bharatviswa):
Thank You [~elek] for the fix.

I have verified the web UI, but when I try running some ozone commands I get 
the following error.

I also don't see any logs in the /opt/hadoop/logs directory to check.

 

And another question, why do we do this?
{code:java}
if [ ! -d "$DIR/build/apache-rat-0.12" ]; then
 wget 
"https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=creadur/apache-rat-0.12/apache-rat-0.12-bin.tar.gz" 
 -O "$DIR/build/apache-rat.tar.gz"
 cd $DIR/build
 tar zvxf apache-rat.tar.gz
 cd -
fi
java -jar $DIR/build/apache-rat-0.12/apache-rat-0.12.jar $DIR -e .dockerignore 
-e public -e apache-rat-0.12 -e .git -e .gitignore{code}




hadoop@bc20ba918f6e:~$ ozone sh volume create /vol1

 
{code:java}
2018-12-17 01:44:23 WARN  NativeCodeLoader:60 - Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
2018-12-17 01:44:24 ERROR OzoneClientFactory:294 - Couldn't create protocol 
class org.apache.hadoop.ozone.client.rpc.RpcClient exception: 
java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
 at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
{code}

[jira] [Updated] (HDFS-13762) Support non-volatile storage class memory(SCM) in HDFS cache directives

2018-12-16 Thread Wei Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-13762:

Attachment: HDFS-13762.007.patch

> Support non-volatile storage class memory(SCM) in HDFS cache directives
> ---
>
> Key: HDFS-13762
> URL: https://issues.apache.org/jira/browse/HDFS-13762
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: caching, datanode
>Reporter: Sammi Chen
>Assignee: Wei Zhou
>Priority: Major
> Attachments: HDFS-13762.000.patch, HDFS-13762.001.patch, 
> HDFS-13762.002.patch, HDFS-13762.003.patch, HDFS-13762.004.patch, 
> HDFS-13762.005.patch, HDFS-13762.006.patch, HDFS-13762.007.patch, 
> SCMCacheDesign-2018-11-08.pdf, SCMCacheTestPlan.pdf
>
>
> Non-volatile storage class memory is a type of memory that can keep its data 
> content after a power failure or between power cycles. A non-volatile storage 
> class memory device usually has access speed close to that of a memory DIMM 
> while costing less than memory. So today it is usually used as a supplement 
> to memory to hold long-term persistent data, such as data in a cache.
> Currently in HDFS, we have an OS page cache backed read-only cache and a 
> RAMDISK based lazy-write cache. Non-volatile memory suits both of these 
> functions. This Jira aims to enable storage class memory first in the read 
> cache. Although storage class memory has non-volatile characteristics, to 
> keep the same behavior as the current read-only cache, we don't use its 
> persistence characteristics for now.
>  
>  
>  
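
As an illustration of the read-only cache idea described above, a 
self-contained sketch that assumes the storage class memory is exposed as a 
DAX-mounted file system; the copy-then-map strategy is an assumption made for 
the example, not the design in the attached pdfs:
{code:java}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

class PmemReadCacheSketch {
  private final Path pmemDir; // e.g. a directory on a DAX mount such as /mnt/pmem0

  PmemReadCacheSketch(Path pmemDir) { this.pmemDir = pmemDir; }

  MappedByteBuffer cacheBlock(Path blockFile) throws IOException {
    // Copy the replica onto the device, then map it read-only so later reads
    // are served from the storage class memory instead of the original disk.
    Path cached = pmemDir.resolve(blockFile.getFileName());
    Files.copy(blockFile, cached, StandardCopyOption.REPLACE_EXISTING);
    try (FileChannel ch = FileChannel.open(cached, StandardOpenOption.READ)) {
      return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
    }
  }
}
{code}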



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13443) RBF: Update mount table cache immediately after changing (add/update/remove) mount table entries.

2018-12-16 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722655#comment-16722655
 ] 

Yiqun Lin commented on HDFS-13443:
--

[~arshad.mohammad], actually I don't mean we have to remove the cleaner 
scheduler thread here :). Could you please address my other comment about the 
logging? Let's go ahead with this JIRA. 

> RBF: Update mount table cache immediately after changing (add/update/remove) 
> mount table entries.
> -
>
> Key: HDFS-13443
> URL: https://issues.apache.org/jira/browse/HDFS-13443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mohammad Arshad
>Assignee: Mohammad Arshad
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-13443-012.patch, HDFS-13443-013.patch, 
> HDFS-13443-014.patch, HDFS-13443-015.patch, HDFS-13443-016.patch, 
> HDFS-13443-017.patch, HDFS-13443-HDFS-13891-001.patch, 
> HDFS-13443-branch-2.001.patch, HDFS-13443-branch-2.002.patch, 
> HDFS-13443.001.patch, HDFS-13443.002.patch, HDFS-13443.003.patch, 
> HDFS-13443.004.patch, HDFS-13443.005.patch, HDFS-13443.006.patch, 
> HDFS-13443.007.patch, HDFS-13443.008.patch, HDFS-13443.009.patch, 
> HDFS-13443.010.patch, HDFS-13443.011.patch
>
>
> Currently the mount table cache is updated periodically; by default the cache 
> is updated every minute. After a change in the mount table, user operations 
> may still use the old mount table. This is a bit wrong.
> To update the mount table cache, maybe we can do the following (a hedged 
> sketch is given after this list):
>  * *Add a refresh API in MountTableManager which will update the mount table 
> cache.*
>  * *When there is a change in the mount table entries, the router admin server 
> can update its cache and ask the other routers to update their caches*. For 
> example, if there are three routers R1, R2, R3 in a cluster, then the add 
> mount table entry API, at the admin server side, will perform the following 
> sequence of actions:
>  ## user submits an add mount table entry request on R1
>  ## R1 adds the mount table entry to the state store
>  ## R1 calls the refresh API on R2
>  ## R1 calls the refresh API on R3
>  ## R1 directly refreshes its own cache
>  ## the add mount table entry response is sent back to the user.
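
A hedged, self-contained sketch of that fan-out flow; every type below is an 
illustrative stand-in, not the real RBF API:
{code:java}
import java.io.IOException;
import java.util.List;

class MountTableRefreshSketch {
  // Stand-ins for the State Store and the admin interface of a peer router.
  interface MountTableStore { void addEntry(String src, String dest) throws IOException; }
  interface RouterAdmin { void refreshMountTableCache() throws IOException; }

  private final MountTableStore store;
  private final List<RouterAdmin> otherRouters;
  private final Runnable localCacheRefresh;

  MountTableRefreshSketch(MountTableStore store, List<RouterAdmin> otherRouters,
      Runnable localCacheRefresh) {
    this.store = store;
    this.otherRouters = otherRouters;
    this.localCacheRefresh = localCacheRefresh;
  }

  void addMountTableEntry(String src, String dest) throws IOException {
    // Steps 1-2: persist the new entry in the State Store.
    store.addEntry(src, dest);
    // Steps 3-4: fan the refresh out to the other routers.
    for (RouterAdmin router : otherRouters) {
      try {
        router.refreshMountTableCache();
      } catch (IOException e) {
        // An unreachable peer still converges on the next periodic cache
        // update, so logging and continuing is enough here.
      }
    }
    // Step 5: refresh the local cache last.
    localCacheRefresh.run();
  }
}
{code}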



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14130) Make ZKFC ObserverNode aware

2018-12-16 Thread xiangheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangheng reassigned HDFS-14130:


Assignee: (was: xiangheng)

> Make ZKFC ObserverNode aware
> 
>
> Key: HDFS-14130
> URL: https://issues.apache.org/jira/browse/HDFS-14130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Priority: Major
>
> Need to fix automatic failover with ZKFC. Currently it does not know about 
> ObserverNodes and tries to convert them to SBNs.
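
A self-contained sketch of the selection rule this implies; the enum and the 
picker below are illustrative stand-ins, not the real ZKFailoverController 
code:
{code:java}
import java.util.List;
import java.util.Optional;

class ObserverAwareFailoverSketch {
  enum HAState { ACTIVE, STANDBY, OBSERVER }

  static final class NamenodeTarget {
    final String id;
    final HAState state;
    NamenodeTarget(String id, HAState state) { this.id = id; this.state = state; }
  }

  // Observers must never be treated as failover candidates; only standbys
  // should be considered for transition to active.
  static Optional<NamenodeTarget> pickFailoverTarget(List<NamenodeTarget> nodes) {
    return nodes.stream()
        .filter(n -> n.state == HAState.STANDBY)
        .findFirst();
  }
}
{code}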



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13358) RBF: Support for Delegation Token (RPC)

2018-12-16 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722679#comment-16722679
 ] 

CR Hota commented on HDFS-13358:


[~brahmareddy] Thanks for reviewing. The znodes will be specific to routers. 
The root znode for individual applications can be configured via the property 
"ZK_DTSM_ZNODE_WORKING_PATH" which already exists.

[~elgoiri] Were you able to verify the correctness of tokens across multiple 
routers?

> RBF: Support for Delegation Token (RPC)
> ---
>
> Key: HDFS-13358
> URL: https://issues.apache.org/jira/browse/HDFS-13358
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Sherwood Zheng
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13358-HDFS-13891.001.patch, 
> HDFS-13358-HDFS-13891.002.patch, HDFS-13358-HDFS-13891.003.patch, RBF_ 
> Delegation token design.pdf
>
>
> HDFS Router should support issuing / managing HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14130) Make ZKFC ObserverNode aware

2018-12-16 Thread xiangheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangheng reassigned HDFS-14130:


Assignee: xiangheng

> Make ZKFC ObserverNode aware
> 
>
> Key: HDFS-14130
> URL: https://issues.apache.org/jira/browse/HDFS-14130
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HDFS-12943
>Reporter: Konstantin Shvachko
>Assignee: xiangheng
>Priority: Major
>
> Need to fix automatic failover with ZKFC. Currently it does not know about 
> ObserverNodes and tries to convert them to SBNs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13869) RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics()

2018-12-16 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722657#comment-16722657
 ] 

Yiqun Lin commented on HDFS-13869:
--

LGTM, +1. Committing this shortly.

> RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics()
> --
>
> Key: HDFS-13869
> URL: https://issues.apache.org/jira/browse/HDFS-13869
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-13869-002.diff, HDFS-13869-003.diff, 
> HDFS-13869-004.patch, HDFS-13869-005.patch, HDFS-13869-006.patch, 
> HDFS-13869-007.patch, HDFS-13869-HDFS-13891.009.patch, 
> HDFS-13869-HDFS-13891.010.patch, HDFS-13869-HDFS-13891.011.patch, 
> HDFS-13869.patch, HDFS-13891-HDFS-13869-008.patch
>
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13869) RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics

2018-12-16 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13869:
-
Summary: RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics  
(was: RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics())

> RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics
> 
>
> Key: HDFS-13869
> URL: https://issues.apache.org/jira/browse/HDFS-13869
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-13869-002.diff, HDFS-13869-003.diff, 
> HDFS-13869-004.patch, HDFS-13869-005.patch, HDFS-13869-006.patch, 
> HDFS-13869-007.patch, HDFS-13869-HDFS-13891.009.patch, 
> HDFS-13869-HDFS-13891.010.patch, HDFS-13869-HDFS-13891.011.patch, 
> HDFS-13869.patch, HDFS-13891-HDFS-13869-008.patch
>
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13869) RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics

2018-12-16 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16722680#comment-16722680
 ] 

Ranith Sardar commented on HDFS-13869:
--

Thanks [~linyiqun] for committing. :)

Thanks [~elgoiri] and [~surendrasingh] for reviewing the patch.

> RBF: Handle NPE for NamenodeBeanMetrics#getFederationMetrics
> 
>
> Key: HDFS-13869
> URL: https://issues.apache.org/jira/browse/HDFS-13869
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Surendra Singh Lilhore
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-13869-002.diff, HDFS-13869-003.diff, 
> HDFS-13869-004.patch, HDFS-13869-005.patch, HDFS-13869-006.patch, 
> HDFS-13869-007.patch, HDFS-13869-HDFS-13891.009.patch, 
> HDFS-13869-HDFS-13891.010.patch, HDFS-13869-HDFS-13891.011.patch, 
> HDFS-13869.patch, HDFS-13891-HDFS-13869-008.patch
>
>
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getUsed(NamenodeBeanMetrics.java:205)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getCapacityUsed(NamenodeBeanMetrics.java:519)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org