[jira] [Created] (HDFS-13854) RBF: The ProcessingAvgTime and ProxyAvgTime should display by JMX with ms units.

2018-08-22 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13854:
--

 Summary: RBF: The ProcessingAvgTime and ProxyAvgTime should 
display by JMX with ms units.
 Key: HDFS-13854
 URL: https://issues.apache.org/jira/browse/HDFS-13854
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation, hdfs
Affects Versions: 3.1.0, 3.0.0, 2.9.0
Reporter: yanghuafeng
Assignee: yanghuafeng


In FederationRPCMetrics, the proxy time and processing time should be exposed 
to JMX and Ganglia in milliseconds. Although the method toMS() exists, we 
currently cannot get the correct proxy time and processing time through JMX and 
Ganglia; a sketch of the conversion follows.
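
A minimal sketch of the fix this asks for, assuming the metrics internally 
track nanoseconds (the field and method names below are illustrative, not the 
actual FederationRPCMetrics members):

{code:java}
public class FederationRpcTimeSketch {
  private long totalProcessingNanos; // cumulative processing time (ns), assumed
  private long processingOps;        // number of sampled operations, assumed

  /** Average processing time in milliseconds, as JMX/Ganglia should see it. */
  public double getProcessingAvgMs() {
    return processingOps == 0 ? 0.0
        : toMs((double) totalProcessingNanos / processingOps);
  }

  private static double toMs(double durationNanos) {
    return durationNanos / 1_000_000.0;
  }
}
{code}

The same conversion would apply to the proxy-time average before it is published.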






[jira] [Created] (HDFS-13853) RouterAdmin update cmd is overwriting the entry not updating the existing

2018-08-22 Thread Dibyendu Karmakar (JIRA)
Dibyendu Karmakar created HDFS-13853:


 Summary: RouterAdmin update cmd is overwriting the entry not 
updating the existing
 Key: HDFS-13853
 URL: https://issues.apache.org/jira/browse/HDFS-13853
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Dibyendu Karmakar
Assignee: Dibyendu Karmakar


{code:java}
// Create a new entry
Map<String, String> destMap = new LinkedHashMap<>();
for (String ns : nss) {
  destMap.put(ns, dest);
}
MountTable newEntry = MountTable.newInstance(mount, destMap);
{code}
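
The snippet above builds a brand-new MountTable from the command-line 
arguments, so any attribute of the existing entry that is not restated on the 
command line is lost. A hedged sketch of the intended merge behavior 
({{getMountEntry}} and the copied attributes are assumptions for illustration):

{code:java}
// Start from the existing entry so attributes the user did not restate survive.
MountTable existing = getMountEntry(mount);   // hypothetical lookup helper
MountTable updated = MountTable.newInstance(mount, destMap);
if (existing != null) {
  updated.setDestOrder(existing.getDestOrder()); // preserve destination order
  updated.setReadOnly(existing.isReadOnly());    // preserve read-only flag
}
{code}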






[jira] [Created] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-08-22 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13852:
--

 Summary: RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE 
should be configured in RBFConfigKeys.
 Key: HDFS-13852
 URL: https://issues.apache.org/jira/browse/HDFS-13852
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation, hdfs
Affects Versions: 3.0.1, 2.9.1, 3.1.0
Reporter: yanghuafeng
Assignee: yanghuafeng


In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, and 
dfs.federation.router.dn-report.time-out and 
dfs.federation.router.dn-report.cache-expire can be set to avoid timeouts. But 
when we start the router, FederationMetrics also invokes the method to get node 
usage, and if a timeout error happens there, we cannot adjust it with any 
parameter. Moreover, the timeout used by FederationMetrics and 
NamenodeBeanMetrics should be the same, so both keys should be defined in 
RBFConfigKeys, as sketched below.
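
A sketch of the proposed change, assuming the usual RBFConfigKeys conventions 
(the constant names and defaults below are illustrative):

{code:java}
// In RBFConfigKeys, so NamenodeBeanMetrics and FederationMetrics share them.
public static final String DN_REPORT_TIME_OUT =
    FEDERATION_ROUTER_PREFIX + "dn-report.time-out";
public static final long DN_REPORT_TIME_OUT_MS_DEFAULT = 1000;

public static final String DN_REPORT_CACHE_EXPIRE =
    FEDERATION_ROUTER_PREFIX + "dn-report.cache-expire";
public static final long DN_REPORT_CACHE_EXPIRE_MS_DEFAULT = 10 * 1000;
{code}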



 






[jira] [Created] (HDFS-13851) Remove AlignmentContext from AbstractNNFailoverProxyProvider

2018-08-22 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-13851:
--

 Summary: Remove AlignmentContext from 
AbstractNNFailoverProxyProvider
 Key: HDFS-13851
 URL: https://issues.apache.org/jira/browse/HDFS-13851
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-12943
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


{{AlignmentContext}} is now a part of {{ObserverReadProxyProvider}}, so we can 
remove it from the base class.






[jira] [Created] (HDFS-13850) Migrate logging to slf4j in hadoop-hdfs-client

2018-08-22 Thread Ian Pickering (JIRA)
Ian Pickering created HDFS-13850:


 Summary: Migrate logging to slf4j in hadoop-hdfs-client
 Key: HDFS-13850
 URL: https://issues.apache.org/jira/browse/HDFS-13850
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ian Pickering
Assignee: Ian Pickering









[jira] [Created] (HDFS-13849) Migrate logging to slf4j in hadoop-hdfs-httpfs, hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client

2018-08-22 Thread Ian Pickering (JIRA)
Ian Pickering created HDFS-13849:


 Summary: Migrate logging to slf4j in hadoop-hdfs-httpfs, 
hadoop-hdfs-nfs, hadoop-hdfs-rbf, hadoop-hdfs-native-client
 Key: HDFS-13849
 URL: https://issues.apache.org/jira/browse/HDFS-13849
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ian Pickering
Assignee: Ian Pickering









[jira] [Created] (HDFS-13848) Refactor NameNode failover proxy providers

2018-08-22 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-13848:
--

 Summary: Refactor NameNode failover proxy providers
 Key: HDFS-13848
 URL: https://issues.apache.org/jira/browse/HDFS-13848
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, hdfs-client
Affects Versions: 2.7.5
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko


Looking at NN failover proxy providers in the context of HDFS-13782, I noticed 
that {{ConfiguredFailoverProxyProvider}} and {{IPFailoverProxyProvider}} share a 
lot of common logic. We can move this common logic into 
{{AbstractNNFailoverProxyProvider}}, which simplifies things a lot; a rough 
sketch follows.
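
An illustrative sketch of the direction (the members shown are assumptions 
about what the common logic looks like, not the final shape of the patch):

{code:java}
public abstract class AbstractNNFailoverProxyProvider<T>
    implements FailoverProxyProvider<T> {

  // State that ConfiguredFailoverProxyProvider and IPFailoverProxyProvider
  // currently each keep for themselves (assumed):
  protected Configuration conf;
  protected Class<T> xface;
  protected HAProxyFactory<T> factory;
  protected AtomicBoolean fallbackToSimpleAuth;

  // Shared helpers such as proxy creation and address resolution would be
  // hoisted here, leaving the subclasses with only their selection policy.
}
{code}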






[jira] [Created] (HDDS-370) Add and implement following functions in SCMClientProtocolServer

2018-08-22 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-370:
---

 Summary: Add and implement following functions in 
SCMClientProtocolServer
 Key: HDDS-370
 URL: https://issues.apache.org/jira/browse/HDDS-370
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Ajay Kumar


Modify functions impacted by SCM chill mode in StorageContainerLocationProtocol.






[jira] [Created] (HDDS-369) Remove the containers of a dead node from the container state map

2018-08-22 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-369:
-

 Summary: Remove the containers of a dead node from the container 
state map
 Key: HDDS-369
 URL: https://issues.apache.org/jira/browse/HDDS-369
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Elek, Marton
 Fix For: 0.2.1


When a node is dead we need to update the container replica information in the 
containerStateMap for all the containers on that specific node.

By removing the replica information we can detect the under-replicated state 
and start re-replication; a sketch of the handler follows.
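
A hedged sketch of the handler (the SCM interfaces named here are assumptions 
based on the description, not the actual APIs):

{code:java}
void onNodeDead(DatanodeDetails deadNode) {
  // All containers that had a replica on the dead node (lookup assumed).
  for (ContainerID id : getContainersOnNode(deadNode)) {
    // Drop the dead replica from the container state map...
    containerStateMap.removeContainerReplica(id, deadNode);
    // ...so replication monitoring can flag the container as
    // under-replicated and schedule new copies.
  }
}
{code}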






[jira] [Created] (HDFS-13847) Clean up ErasureCodingPolicyManager

2018-08-22 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-13847:


 Summary: Clean up ErasureCodingPolicyManager
 Key: HDFS-13847
 URL: https://issues.apache.org/jira/browse/HDFS-13847
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.0
Reporter: Xiao Chen


The {{ErasureCodingPolicyManager}} class is declared LimitedPrivate for HDFS.

This doesn't seem to make sense: I have checked all of its usages, and they are 
strictly within the hadoop-hdfs project.
According to our [compat 
guide|http://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-common/Compatibility.html]:
{quote}
Within a component Hadoop developers are free to use Private and Limited 
Private APIs,
{quote}

We should tune this down to just Private; the change is sketched below.

This was identified because internal testing marked HDFS-13772 as incompatible, 
due to the method signature changes on the ECPM class.
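
The proposed change is a one-line annotation swap (class body elided):

{code:java}
// Before:
@InterfaceAudience.LimitedPrivate({"HDFS"})
public final class ErasureCodingPolicyManager { ... }

// After:
@InterfaceAudience.Private
public final class ErasureCodingPolicyManager { ... }
{code}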






[jira] [Created] (HDFS-13846) Safe blocks counter is not decremented correctly if the block is striped

2018-08-22 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HDFS-13846:
---

 Summary: Safe blocks counter is not decremented correctly if the 
block is striped
 Key: HDFS-13846
 URL: https://issues.apache.org/jira/browse/HDFS-13846
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


In the BlockManagerSafeMode class, the "safe blocks" counter is incremented 
when the number of nodes containing the block equals the number of data units 
specified by the erasure coding policy, which looks like this in the code:
{code:java}
final int safe = storedBlock.isStriped() ?
    ((BlockInfoStriped) storedBlock).getRealDataBlockNum() : safeReplication;
if (storageNum == safe) {
  this.blockSafe++;
{code}
But when the counter is decremented, the code does not check whether the block 
is striped; it just compares the number of live replicas against 
safeReplication - 1 (i.e. 0) if the block is complete, which is not correct for 
striped blocks:
{code:java}
if (storedBlock.isComplete() &&
    blockManager.countNodes(b).liveReplicas() == safeReplication - 1) {
  this.blockSafe--;
  assert blockSafe >= 0;
  checkSafeMode();
}
{code}
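
A minimal sketch of a symmetric fix, reusing the striped-aware threshold from 
the increment path above (a sketch of the direction, not the committed patch):

{code:java}
final int safe = storedBlock.isStriped()
    ? ((BlockInfoStriped) storedBlock).getRealDataBlockNum()
    : safeReplication;
if (storedBlock.isComplete()
    && blockManager.countNodes(b).liveReplicas() == safe - 1) {
  this.blockSafe--;
  assert blockSafe >= 0;
  checkSafeMode();
}
{code}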






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-08-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/

[Aug 21, 2018 1:36:24 AM] (stevel) HADOOP-15679. ShutdownHookManager shutdown 
time needs to be configurable
[Aug 21, 2018 4:03:19 AM] (vinayakumarb) HDFS-13772. Erasure coding: 
Unnecessary NameNode Logs displaying for
[Aug 21, 2018 6:28:07 AM] (rohithsharmaks) YARN-8129. Improve error message for 
invalid value in fields attribute.
[Aug 21, 2018 11:00:31 AM] (wwei) YARN-8683. Support to display pending 
scheduling requests in RM app
[Aug 21, 2018 2:42:28 PM] (wwei) YARN-7494. Add muti-node lookup mechanism and 
pluggable nodes sorting
[Aug 21, 2018 11:49:26 PM] (eyang) YARN-8298.  Added express upgrade for YARN 
service.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine

   Unread field: FSBasedSubmarineStorageImpl.java:[line 39]

   Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192]

   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; the obligation to clean up the resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged

   org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72]
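
For the two writer warnings above, the usual fix pattern is try-with-resources 
with an explicit charset; a hedged sketch (class, file and variable names are 
placeholders, not the submarine code):

{code:java}
import java.io.*;
import java.nio.charset.StandardCharsets;

class LaunchScriptWriter {
  static void writeLaunchScript(File scriptFile, String script)
      throws IOException {
    // try-with-resources discharges the cleanup obligation; the explicit
    // charset removes the reliance on the platform default encoding.
    try (Writer w = new OutputStreamWriter(
        new FileOutputStream(scriptFile), StandardCharsets.UTF_8)) {
      w.write(script);
    }
  }
}
{code}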

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/diff-compile-javac-root.txt
  [328K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/diff-checkstyle-root.txt
  [17M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/876/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [8.0K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-08-22 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/566/

[Aug 21, 2018 2:49:50 PM] (aw) YETUS-45. the test patch script should check for 
filenames that differ
[Aug 21, 2018 9:29:07 PM] (aw) YETUS-658. Built-in Support For Unit Test 
Excluding
[Aug 21, 2018 11:07:27 PM] (aw) YETUS-642. reaper generated shelldocs output is 
busted
[Aug 21, 2018 2:42:28 PM] (wwei) YARN-7494. Add muti-node lookup mechanism and 
pluggable nodes sorting
[Aug 21, 2018 11:49:26 PM] (eyang) YARN-8298.  Added express upgrade for YARN 
service.
[Aug 22, 2018 3:43:40 AM] (yqlin) HDFS-13821. RBF: Add 
dfs.federation.router.mount-table.cache.enable so


ERROR: File 'out/email-report.txt' does not exist


[jira] [Created] (HDFS-13845) RBF: The default MountTableResolver cannot get multi-destination path for the default DestinationOrder.HASH

2018-08-22 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13845:
--

 Summary: RBF: The default MountTableResolver cannot get 
multi-destination path for the default DestinationOrder.HASH
 Key: HDFS-13845
 URL: https://issues.apache.org/jira/browse/HDFS-13845
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation, hdfs
Affects Versions: 2.9.1, 3.1.0, 3.0.0
Reporter: yanghuafeng
Assignee: yanghuafeng


When we use the default MountTableResolver to resolve a path, we cannot get the 
destination paths of a multi-destination entry under the default 
DestinationOrder.HASH.

{code:java}
private static PathLocation buildLocation(
  ..
List<RemoteLocation> locations = new LinkedList<>();
for (RemoteLocation oneDst : entry.getDestinations()) {
  String nsId = oneDst.getNameserviceId();
  String dest = oneDst.getDest();
  String newPath = dest;
  if (!newPath.endsWith(Path.SEPARATOR) && !remainingPath.isEmpty()) {
    newPath += Path.SEPARATOR;
  }
  newPath += remainingPath;
  RemoteLocation remoteLocation = new RemoteLocation(nsId, newPath, path);
  locations.add(remoteLocation);
}
DestinationOrder order = entry.getDestOrder();
return new PathLocation(srcPath, locations, order);
  }
{code}

The default order will be HASH, but the HashFirstResolver is never invoked to 
order the locations.
This is ambiguous for the MountTableResolver: the web UI shows the HASH order 
for a multi-destination path, yet resolution cannot return those destinations.
In my opinion, the MountTableResolver should remain a simple resolver that 
implements 1-to-1 mappings, not 1-to-n destinations. So we should add a check 
to addMountTable and updateMountTable when the MountTableResolver is in use: if 
an entry has multiple destinations, we should reject it, as sketched below.
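
A minimal sketch of such a guard (the method name and placement are 
assumptions):

{code:java}
private static void ensureSingleDestination(MountTable entry)
    throws IOException {
  if (entry.getDestinations() != null && entry.getDestinations().size() > 1) {
    throw new IOException("MountTableResolver supports a single destination"
        + " per mount point: " + entry.getSourcePath());
  }
}
{code}

This would be called from addMountTable and updateMountTable before the entry 
is stored.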






[jira] [Created] (HDDS-368) all tests in TestOzoneRestClient failed due to "Unparseable date"

2018-08-22 Thread LiXin Ge (JIRA)
LiXin Ge created HDDS-368:
-

 Summary: all tests in TestOzoneRestClient failed due to 
"Unparseable date"
 Key: HDDS-368
 URL: https://issues.apache.org/jira/browse/HDDS-368
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: LiXin Ge


OS: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-116-generic x86_64)

java version: 1.8.0_111

mvn: Apache Maven 3.3.9

Default locale: zh_CN, platform encoding: UTF-8

Test command: mvn test -Dtest=TestOzoneRestClient -Phdds

 
All the tests in TestOzoneRestClient failed on my local machine with exceptions 
like:
{noformat}
[ERROR] 
testCreateBucket(org.apache.hadoop.ozone.client.rest.TestOzoneRestClient) Time 
elapsed: 0.01 s <<< ERROR!
java.io.IOException: org.apache.hadoop.ozone.client.rest.OzoneException: 
Unparseable date: "m, 28 1970 19:23:50 GMT"
 at 
org.apache.hadoop.ozone.client.rest.RestClient.executeHttpRequest(RestClient.java:853)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:252)
 at 
org.apache.hadoop.ozone.client.rest.RestClient.createVolume(RestClient.java:210)
 at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.hadoop.ozone.client.OzoneClientInvocationHandler.invoke(OzoneClientInvocationHandler.java:54)
 at com.sun.proxy.$Proxy73.createVolume(Unknown Source)
 at org.apache.hadoop.ozone.client.ObjectStore.createVolume(ObjectStore.java:66)
 at 
org.apache.hadoop.ozone.client.rest.TestOzoneRestClient.testCreateBucket(TestOzoneRestClient.java:174)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
Caused by: org.apache.hadoop.ozone.client.rest.OzoneException: Unparseable 
date: "m, 28 1970 19:23:50 GMT"
at sun.reflect.GeneratedConstructorAccessor27.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
at 
com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:270)
at 
com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:149)
at 
com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159)
at 
com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611)
at 
com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1219)
at 
org.apache.hadoop.ozone.client.rest.OzoneException.parse(OzoneException.java:265)
... 39 more
{noformat}
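
Given the zh_CN default locale called out above, one plausible culprit (an 
assumption, not a confirmed diagnosis) is an HTTP date formatted with a 
locale-sensitive SimpleDateFormat; a minimal sketch of the failure mode and the 
conventional fix:

{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class DateLocaleSketch {
  public static void main(String[] args) {
    String rfc1123 = "EEE, dd MMM yyyy HH:mm:ss zzz";
    // Formats day/month names in the platform locale; under zh_CN the
    // resulting header cannot be parsed back by an English-locale parser.
    SimpleDateFormat platform = new SimpleDateFormat(rfc1123);
    // Conventional fix: pin the locale for wire formats.
    SimpleDateFormat pinned = new SimpleDateFormat(rfc1123, Locale.US);
    System.out.println(platform.format(new Date()));
    System.out.println(pinned.format(new Date()));
  }
}
{code}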






[jira] [Created] (HDFS-13844) Refactor the fmt_bytes function in the dfs-dust.js.

2018-08-22 Thread yanghuafeng (JIRA)
yanghuafeng created HDFS-13844:
--

 Summary: Refactor the fmt_bytes function in the dfs-dust.js.
 Key: HDFS-13844
 URL: https://issues.apache.org/jira/browse/HDFS-13844
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.1.0, 3.0.0, 2.7.2, 2.2.0, 1.2.0
Reporter: yanghuafeng
Assignee: yanghuafeng


The NameNode web UI cannot display the capacity with correct units. I have 
found that the fmt_bytes function in dfs-dust.js is missing the EB unit, which 
leads to an "undefined" unit being shown in the UI.

And although the next unit, ZB, is very large, we should also take unit 
overflow into consideration: supposing the last supported unit were GB, a total 
capacity of 8 TB should be rendered as 8192 GB rather than "8 undefined". A 
sketch of the clamping logic follows.
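
The clamping logic, sketched in Java for brevity (the actual fix belongs in the 
JavaScript fmt_bytes of dfs-dust.js): include the missing EB unit, and stop 
scaling at the last known unit so an overflowing value is rendered in that unit 
instead of "undefined".

{code:java}
static String fmtBytes(double bytes) {
  final String[] units = {"B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB"};
  int unit = 0;
  // Clamp at the last unit: if GB were the largest unit, 8 TB would render
  // as "8192.00 GB" rather than "8 undefined".
  while (bytes >= 1024 && unit < units.length - 1) {
    bytes /= 1024;
    unit++;
  }
  return String.format("%.2f %s", bytes, units[unit]);
}
{code}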


