[jira] [Created] (HDFS-12005) Ozone: Web interface for SCM

2017-06-20 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12005:
---

 Summary: Ozone: Web interface for SCM
 Key: HDFS-12005
 URL: https://issues.apache.org/jira/browse/HDFS-12005
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


This is a proposal for how a web interface could be implemented for SCM (and 
later for KSM), similar to the namenode UI.

1. JS framework

There are three main options here.

A.) Use a full-featured web framework with all the webpack/npm minify/uglify 
magic. At build time the webpack/npm scripts would be run and the result 
added to the jar file.

B.) This could be simplified by adding the generated minified/uglified JS 
files to the project at commit time. It requires an additional step for every 
new patch (generating the new minified JavaScript) but doesn't require 
additional JS build tools during the build.

C.) Make it as simple as possible, similar to the current namenode UI, which 
uses JavaScript but commits every dependency (without JS minify/uglify or 
other preprocessing).

I prefer the third one because:

 * I have seen a lot of problems during frequent builds of older tez-ui 
versions (bower version mismatches, npm version mismatches, npm transitive 
dependency problems, proxy problems with older versions). All of them could 
be fixed, but that requires additional JS/NPM magic/knowledge. Without an 
additional npm build step, the HDFS project's build can be kept simpler.

 * The complexity of the planned SCM/KSM UI (hopefully it will remain simple) 
doesn't require a more sophisticated model. (E.g. we don't need JS require, 
as we need only a few controllers.)

 * HDFS developers are mostly backend developers, not JS developers.

2. Frameworks 

The big advantage of a more modern JS framework is the simplified programming 
model (for example, two-way data binding). I suggest using a more modern 
framework (not just jQuery) that supports plain JS (not just 
ECMA2015/2016/TypeScript) and including the required JS files in the project 
(similar to the included Bootstrap, or the way the existing namenode UI 
works).
 
  * React could be a good candidate, but it requires more libraries, as it's 
just a UI framework; even the REST calls need a separate library. It can be 
used with plain JavaScript instead of JSX and classes, but that is not 
straightforward and is more verbose.
 
  * Ember is used in yarnui2, but Ember's main strength is its CLI, which 
couldn't easily be used with the simplified approach. I think Ember fits best 
with option A.)

  * Angular 1 is a good candidate (though not so fancy). With Angular 1 the 
component-based approach should be used (that would make a later migration to 
Angular 2 or React easier).

  * The mainstream side of Angular 2 uses TypeScript. It could work with 
plain JS, but that would require additional knowledge; most of the tutorials 
and documentation show the TypeScript approach.

I suggest using Angular 1 or React. Angular may be easier to use, as there is 
no need to emulate JSX with function calls; simple HTML templates can be 
used. Rough sketches of both approaches follow.

3. Backend

I would prefer the approach of the existing namenode UI, where the backend is 
just the JMX endpoint. To keep things as simple as possible, I suggest 
avoiding a dedicated REST backend if possible. Later we can use the REST APIs 
of SCM/KSM if they are implemented. A sketch of the client side follows.
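
As a hedged sketch of what the client side could look like, assuming SCM 
exposes the standard Hadoop {{/jmx}} servlet (the bean name pattern below is 
an assumption, purely for illustration):

{code}
// Hypothetical controller reading the Hadoop JMX JSON servlet, the same
// endpoint the namenode UI uses. The servlet returns {"beans": [...]}.
angular.module('scm').controller('JmxController',
  function ($http, $scope) {
    // The qry parameter filters MBeans by ObjectName pattern; the
    // StorageContainerManager service name here is an assumption.
    $http.get('/jmx?qry=Hadoop:service=StorageContainerManager,name=*')
      .then(function (response) {
        $scope.beans = response.data.beans;
      });
  });
{code}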






[jira] [Created] (HDFS-12004) Namenode UI continues to list DNs that have been removed from include and exclude

2017-06-20 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-12004:
--

 Summary: Namenode UI continues to list DNs that have been removed 
from include and exclude
 Key: HDFS-12004
 URL: https://issues.apache.org/jira/browse/HDFS-12004
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Erik Krogen
Priority: Minor


Initially in HDFS, after a DN was decommissioned and subsequently removed 
from the exclude file (thus removing all references to it), it would still 
appear in the NN UI as a "dead" node until the NN was restarted. This was 
discussed in HDFS-1773, where it was decided that the web UI should not show 
these nodes. However, when HDFS-5334 went through and the NN web UI was 
reimplemented client-side, the behavior reverted to pre-HDFS-1773, and 
dead+decommissioned nodes once again showed in the dead list. This can be 
operationally confusing for the same reasons discussed in HDFS-1773.

I would like to open this discussion to determine whether the regression was 
intentional or whether we should carry the logic implemented in HDFS-1773 
forward into the new UI.





[jira] [Created] (HDFS-12003) Ozone: Misc : Cleanup error messages

2017-06-20 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12003:
---

 Summary: Ozone: Misc : Cleanup error messages
 Key: HDFS-12003
 URL: https://issues.apache.org/jira/browse/HDFS-12003
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer


Many error messages thrown from Ozone are written for developers by 
developers. We need to review all publicly visible error messages to make 
sure each one is correct, includes enough context (stack traces do not 
count), and makes sense to the reader.





Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/

[Jun 19, 2017 4:01:40 PM] (naganarasimha_gr) YARN-6467. CSQueueMetrics needs to 
update the current metrics for
[Jun 19, 2017 5:12:45 PM] (naganarasimha_gr) YARN-6680. Avoid locking overhead 
for NO_LABEL lookups. Contributed by
[Jun 19, 2017 5:25:20 PM] (lei) HDFS-11916. Extend
[Jun 19, 2017 11:07:42 PM] (iwasakims) HDFS-11995. HDFS Architecture 
documentation incorrectly describes
[Jun 20, 2017 3:03:56 AM] (brahma) HDFS-11890. Handle NPE in 
BlockRecoveryWorker when DN is getting
[Jun 20, 2017 4:18:26 AM] (aajisaka) HADOOP-14296. Move logging APIs over to 
slf4j in hadoop-tools.
[Jun 20, 2017 6:12:02 AM] (xiao) HADOOP-14515. Specifically configure 
zookeeper-related log levels in KMS
[Jun 20, 2017 7:35:54 AM] (aajisaka) HDFS-11345. Document the configuration key 
for FSNamesystem lock
[Jun 20, 2017 8:20:27 AM] (aajisaka) YARN-6713. Fix dead link in the Javadoc of 
FairSchedulerEventLog.java.
[Jun 20, 2017 12:44:31 PM] (brahma) HADOOP-14533. Size of args cannot be less 
than zero in TraceAdmin#run as




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 

   mvninstall:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/artifact/out/patch-mvninstall-root.txt [504K]

   compile:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/artifact/out/patch-compile-root.txt [20K]

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/artifact/out/patch-compile-root.txt [20K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/artifact/out/patch-compile-root.txt [20K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [456K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [56K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/351/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]

[jira] [Created] (HDFS-12002) Ozone : SCM cli misc fixes/improvements

2017-06-20 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12002:
-

 Summary: Ozone : SCM cli misc fixes/improvements
 Key: HDFS-12002
 URL: https://issues.apache.org/jira/browse/HDFS-12002
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Chen Liang
Assignee: Chen Liang
 Fix For: ozone


Currently there are a few minor issues with the SCM CLI:

1. Some commands do not use the -c option to take the container name. An 
issue with this is that arguments need to be in a certain order to be parsed 
correctly, e.g.:
{{./bin/hdfs scm -container -del c0 -f}} works, but
{{./bin/hdfs scm -container -del -f c0}} will not.

2. Some subcommands do not display errors as helpfully as they could, e.g.:
{{./bin/hdfs scm -container -del}} is wrong because it is missing the 
container name, so the CLI complains:
{code}
Missing argument for option: del
Unrecognized options:[-container, -del]
usage: hdfs scm <commands> [<options>]
where <commands> can be one of the following
 -container   Container related options
{code}
but this does not really show that it is the container name that is missing.

3. It would probably be better to rename -del to -delete, for consistency 
with other commands like -create and -info.

4. When passing an invalid argument, e.g. -info on a non-existent container, 
an exception is displayed. We probably should not scare the users: display 
just one error message, and move the exception output to debug mode or 
similar.





Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/440/

[Jun 19, 2017 10:09:18 AM] (aajisaka) HADOOP-14538. Fix TestFilterFileSystem 
and TestHarFileSystem failures
[Jun 19, 2017 10:39:36 AM] (aajisaka) HADOOP-14540. Replace MRv1 specific terms 
in HostsFileReader.
[Jun 19, 2017 4:01:40 PM] (naganarasimha_gr) YARN-6467. CSQueueMetrics needs to 
update the current metrics for
[Jun 19, 2017 5:12:45 PM] (naganarasimha_gr) YARN-6680. Avoid locking overhead 
for NO_LABEL lookups. Contributed by
[Jun 19, 2017 5:25:20 PM] (lei) HDFS-11916. Extend
[Jun 19, 2017 11:07:42 PM] (iwasakims) HDFS-11995. HDFS Architecture 
documentation incorrectly describes
[Jun 20, 2017 3:03:56 AM] (brahma) HDFS-11890. Handle NPE in 
BlockRecoveryWorker when DN is getting
[Jun 20, 2017 4:18:26 AM] (aajisaka) HADOOP-14296. Move logging APIs over to 
slf4j in hadoop-tools.
[Jun 20, 2017 6:12:02 AM] (xiao) HADOOP-14515. Specifically configure 
zookeeper-related log levels in KMS
[Jun 20, 2017 7:35:54 AM] (aajisaka) HDFS-11345. Document the configuration key 
for FSNamesystem lock
[Jun 20, 2017 8:20:27 AM] (aajisaka) YARN-6713. Fix dead link in the Javadoc of 
FairSchedulerEventLog.java.




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 351] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of