[jira] [Created] (HDFS-15602) Support creating new instances via non-default constructors in ReflectionUtils

2020-09-26 Thread maobaolong (Jira)
maobaolong created HDFS-15602:
-

 Summary: Support creating new instances via non-default constructors in ReflectionUtils
 Key: HDFS-15602
 URL: https://issues.apache.org/jira/browse/HDFS-15602
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.4.0
Reporter: maobaolong
Assignee: maobaolong
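
A minimal sketch of the requested capability, assuming a generic helper that matches a constructor by the runtime types of its arguments (the helper below is hypothetical, not the actual patch):

{code:java}
import java.lang.reflect.Constructor;

public final class NonDefaultCtorDemo {
  // Hypothetical helper: instantiate clazz through a constructor whose
  // parameter types exactly match the runtime classes of the arguments.
  public static <T> T newInstance(Class<T> clazz, Object... args)
      throws ReflectiveOperationException {
    Class<?>[] argTypes = new Class<?>[args.length];
    for (int i = 0; i < args.length; i++) {
      argTypes[i] = args[i].getClass();
    }
    Constructor<T> ctor = clazz.getDeclaredConstructor(argTypes);
    ctor.setAccessible(true); // also allow non-public constructors
    return ctor.newInstance(args);
  }
}
{code}

Note that the exact-match lookup above would not find a constructor declared with a supertype or primitive parameter; a real implementation would need a more tolerant matching strategy.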









[jira] [Created] (HDFS-15399) Support including or excluding datanodes via a configuration file

2020-06-08 Thread maobaolong (Jira)
maobaolong created HDFS-15399:
-

 Summary: Support including or excluding datanodes via a configuration file
 Key: HDFS-15399
 URL: https://issues.apache.org/jira/browse/HDFS-15399
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode
Reporter: maobaolong
Assignee: maobaolong


When I want to keep a datanode out, or only want to let specific datanodes join SCM, this feature would let me restrict the datanode list.
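
For reference, the NameNode-side analog that already exists in HDFS uses plain-text host files referenced from the configuration; an SCM-side equivalent would presumably follow the same pattern (the file paths below are illustrative):

{code:xml}
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
{code}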






[jira] [Created] (HDFS-15139) Use RDBStore and TypedTable to manage the block info of the namenode

2020-01-22 Thread maobaolong (Jira)
maobaolong created HDFS-15139:
-

 Summary: Use RDBStore and TypedTable to manage the block info of the namenode
 Key: HDFS-15139
 URL: https://issues.apache.org/jira/browse/HDFS-15139
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.3.0
Reporter: maobaolong


Replace BlockManager.BlocksMap.blocks, moving it from a GSet to RocksDB.
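
A minimal sketch of the direction, assuming a RocksDB-backed store keyed by block id (serialization and naming here are assumptions; the actual change would go through Ozone's RDBStore/TypedTable abstractions):

{code:java}
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RocksBlockStore implements AutoCloseable {
  private final RocksDB db;

  public RocksBlockStore(String path) throws RocksDBException {
    RocksDB.loadLibrary();
    // Open (or create) an on-disk store in place of the in-memory GSet.
    this.db = RocksDB.open(new Options().setCreateIfMissing(true), path);
  }

  // Store a serialized BlockInfo keyed by block id.
  public void put(long blockId, byte[] blockInfoBytes) throws RocksDBException {
    db.put(key(blockId), blockInfoBytes);
  }

  public byte[] get(long blockId) throws RocksDBException {
    return db.get(key(blockId)); // null if absent
  }

  private static byte[] key(long blockId) {
    return java.nio.ByteBuffer.allocate(Long.BYTES).putLong(blockId).array();
  }

  @Override
  public void close() {
    db.close();
  }
}
{code}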






[jira] [Created] (HDFS-15138) Use RDBStore and TypedTable to manage the inodes of the namenode

2020-01-22 Thread maobaolong (Jira)
maobaolong created HDFS-15138:
-

 Summary: Use RDBStore and TypedTable to manage the inodes of the namenode
 Key: HDFS-15138
 URL: https://issues.apache.org/jira/browse/HDFS-15138
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.3.0
Reporter: maobaolong


Replace FSDirectory.inodeMap.map, moving it from a GSet to RocksDB.






[jira] [Created] (HDFS-15137) Move RDBStore logic from apache-ozone into hadoop-commons module of apache-hadoop

2020-01-21 Thread maobaolong (Jira)
maobaolong created HDFS-15137:
-

 Summary: Move RDBStore logic from apache-ozone into hadoop-commons 
module of apache-hadoop
 Key: HDFS-15137
 URL: https://issues.apache.org/jira/browse/HDFS-15137
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: maobaolong









[jira] [Created] (HDFS-15133) Use rocksdb to store NameNode inode and blockInfo

2020-01-20 Thread maobaolong (Jira)
maobaolong created HDFS-15133:
-

 Summary: Use rocksdb to store NameNode inode and blockInfo
 Key: HDFS-15133
 URL: https://issues.apache.org/jira/browse/HDFS-15133
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: maobaolong


Maybe we don't need to checkpoint to an fsimage file; a RocksDB checkpoint can serve the same purpose.
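
As a sketch of that idea: RocksDB's Java API can snapshot a live DB into a directory of hard-linked SST files, which could play the role the fsimage plays today (the directory naming is illustrative):

{code:java}
import org.rocksdb.Checkpoint;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Given an open RocksDB instance holding the namenode metadata,
// create a consistent on-disk checkpoint instead of saving an fsimage.
static void checkpoint(RocksDB db, String dir) throws RocksDBException {
  Checkpoint cp = Checkpoint.create(db);
  cp.createCheckpoint(dir); // the target directory must not exist yet
}
{code}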






[jira] [Created] (HDDS-1606) ozone s3g cannot start, caused by NoInitialContextException: xxx java.naming.factory.initial

2019-05-29 Thread maobaolong (JIRA)
maobaolong created HDDS-1606:


 Summary: ozone s3g cannot start, caused by NoInitialContextException: xxx java.naming.factory.initial
 Key: HDDS-1606
 URL: https://issues.apache.org/jira/browse/HDDS-1606
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
 Environment: ozone-site.xml

{code:xml}
<configuration>
  <property>
    <name>ozone.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ozone.metadata.dirs</name>
    <value>/data0/disk1/meta</value>
  </property>
  <property>
    <name>ozone.scm.datanode.id</name>
    <value>/data0/disk1/meta/node/datanode.id</value>
  </property>
  <property>
    <name>ozone.om.address</name>
    <value>ozonemanager.hadoop.apache.org</value>
  </property>
  <property>
    <name>ozone.om.db.dirs</name>
    <value>/data0/om-db-dirs</value>
  </property>
  <property>
    <name>ozone.scm.names</name>
    <value>172.16.150.142</value>
  </property>
  <property>
    <name>ozone.om.address</name>
    <value>172.16.150.142</value>
  </property>
  <property>
    <name>hdds.datanode.http.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ozone.s3g.domain.name</name>
    <value>s3g.internal</value>
  </property>
</configuration>
{code}

Reporter: maobaolong


$ ozone s3g
/software/servers/jdk1.8.0_121/bin/java -Dproc_s3g 
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5008 
-Dhadoop.log.dir=/software/servers/ozone-0.5.0-SNAPSHOT/logs 
-Dhadoop.log.file=hadoop.log 
-Dhadoop.home.dir=/software/servers/ozone-0.5.0-SNAPSHOT -Dhadoop.id.str=hadp 
-Dhadoop.root.logger=INFO,console -Dhadoop.policy.file=hadoop-policy.xml 
-Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.ozone.s3.Gateway
2019-05-29 16:46:28,056 INFO hdfs.DFSUtil: Starting Web-server for s3gateway 
at: http://0.0.0.0:9878
2019-05-29 16:46:28,079 INFO util.log: Logging initialized @8123ms
2019-05-29 16:46:28,164 INFO server.AuthenticationFilter: Unable to initialize 
FileSignerSecretProvider, falling back to use random secrets.
2019-05-29 16:46:28,178 INFO http.HttpRequestLog: Http request log for 
http.requests.s3gateway is not defined
2019-05-29 16:46:28,188 INFO http.HttpServer2: Added global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-05-29 16:46:28,191 INFO http.HttpServer2: Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context s3gateway
2019-05-29 16:46:28,191 INFO http.HttpServer2: Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2019-05-29 16:46:28,191 INFO http.HttpServer2: Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs
2019-05-29 16:46:28,206 [main] INFO   - Starting Ozone S3 gateway
2019-05-29 16:46:28,212 INFO http.HttpServer2: Jetty bound to port 9878
2019-05-29 16:46:28,213 INFO server.Server: jetty-9.3.24.v20180605, build 
timestamp: 2018-06-06T01:11:56+08:00, git hash: 
84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-05-29 16:46:28,241 INFO handler.ContextHandler: Started 
o.e.j.s.ServletContextHandler@68f4865{/logs,file:///software/servers/ozone-0.5.0-SNAPSHOT/logs/,AVAILABLE}
2019-05-29 16:46:28,242 INFO handler.ContextHandler: Started 
o.e.j.s.ServletContextHandler@39d9314d{/static,jar:file:/software/servers/ozone-0.5.0-SNAPSHOT/share/ozone/lib/hadoop-ozone-s3gateway-0.5.0-SNAPSHOT.jar!/webapps/static,AVAILABLE}
ERROR StatusLogger No Log4j 2 configuration file found. Using default 
configuration (logging only errors to the console), or user programmatically 
provided configurations. Set system property 'log4j2.debug' to show Log4j 2 
internal initialization logging. See 
https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions 
on how to configure Log4j 2
2019-05-29 16:46:28,974 WARN webapp.WebAppContext: Failed startup of context 
o.e.j.w.WebAppContext@7487b142{/,file:///tmp/jetty-0.0.0.0-9878-s3gateway-_-any-2799631504400193724.dir/webapp/,UNAVAILABLE}{/s3gateway}
org.jboss.weld.exceptions.DefinitionException: Exception List with 1 exceptions:
Exception 0 :
java.lang.RuntimeException: javax.naming.NoInitialContextException: Need to 
specify class name in environment or system property, or as an applet 
parameter, or in an application resource file:  java.naming.factory.initial
at 
com.sun.jersey.server.impl.cdi.CDIExtension.initialize(CDIExtension.java:201)
at 
com.sun.jersey.server.impl.cdi.CDIExtension.beforeBeanDiscovery(CDIExtension.java:302)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.jboss.weld.injection.StaticMethodInjectionPoint.invoke(StaticMethodInjectionPoint.java:88)
at 
org.jboss.weld.injection.MethodInvocationStrategy$SpecialParamPlusBeanManagerStrategy.invoke(MethodInvocationStrategy.java:144)
at 

[jira] [Created] (HDFS-14353) Erasure Coding: the xmitsInProgress metric becomes negative.

2019-03-11 Thread maobaolong (JIRA)
maobaolong created HDFS-14353:
-

 Summary: Erasure Coding: the xmitsInProgress metric becomes negative.
 Key: HDFS-14353
 URL: https://issues.apache.org/jira/browse/HDFS-14353
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, erasure-coding
Affects Versions: 3.3.0
Reporter: maobaolong









[jira] [Created] (HDFS-14344) Erasure Coding: missing EC block after decommission and NN restart

2019-03-06 Thread maobaolong (JIRA)
maobaolong created HDFS-14344:
-

 Summary: Erasure Coding: missing EC block after decommission and NN restart
 Key: HDFS-14344
 URL: https://issues.apache.org/jira/browse/HDFS-14344
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ec, erasure-coding, namenode
Affects Versions: 3.3.0
Reporter: maobaolong









[jira] [Created] (HDFS-13881) Export or Import a dirImage

2018-08-28 Thread maobaolong (JIRA)
maobaolong created HDFS-13881:
-

 Summary: Export or Import a dirImage
 Key: HDFS-13881
 URL: https://issues.apache.org/jira/browse/HDFS-13881
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 3.1.1
Reporter: maobaolong
Assignee: maobaolong









[jira] [Created] (HDFS-13804) DN maxDataLength is unused except on the DN web UI; I suggest getting maxDataLength from the NN heartbeat.

2018-08-08 Thread maobaolong (JIRA)
maobaolong created HDFS-13804:
-

 Summary: DN maxDataLength is unused except on the DN web UI; I suggest getting maxDataLength from the NN heartbeat.
 Key: HDFS-13804
 URL: https://issues.apache.org/jira/browse/HDFS-13804
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode
Reporter: maobaolong









[jira] [Created] (HDFS-13783) Balancer: make the balancer a long-running service process so it is easy to monitor.

2018-08-01 Thread maobaolong (JIRA)
maobaolong created HDFS-13783:
-

 Summary: Balancer: make the balancer a long-running service process so it is easy to monitor.
 Key: HDFS-13783
 URL: https://issues.apache.org/jira/browse/HDFS-13783
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: balancer & mover
Affects Versions: 3.0.3
Reporter: maobaolong


If the balancer ran as a long-lived service process, like the namenode and datanode, we could collect balancer metrics that tell us its status and the number of blocks it has moved. We could also get or set the balance plan through a balancer web UI. Many things become possible with a long-running balancer service.

So, shall we start to plan the new Balancer? I hope this feature can make it into the next release of Hadoop.






[jira] [Created] (HDFS-13527) createLocatedBlock isCorrupt logic is faulty when all blocks are corrupt.

2018-05-04 Thread maobaolong (JIRA)
maobaolong created HDFS-13527:
-

 Summary: createLocatedBlock isCorrupt logic is faulty when all blocks are corrupt.
 Key: HDFS-13527
 URL: https://issues.apache.org/jira/browse/HDFS-13527
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, namenode
Affects Versions: 3.2.0
Reporter: maobaolong


The steps to reproduce:

1. Put a small file into HDFS at FILEPATH.
2. Remove the block replicas from every datanode block pool.
3. Restart the datanodes.
4. Restart the namenode (leave safemode).
5. Run `hdfs fsck FILEPATH -files -blocks -locations`.
6. The namenode does not consider the block corrupt.

The code logic is:
{code:java}
// get block locations
NumberReplicas numReplicas = countNodes(blk);
final int numCorruptNodes = numReplicas.corruptReplicas();
final int numCorruptReplicas = corruptReplicas.numCorruptReplicas(blk);
if (numCorruptNodes != numCorruptReplicas) {
  LOG.warn("Inconsistent number of corrupt replicas for {}"
  + " blockMap has {} but corrupt replicas map has {}",
  blk, numCorruptNodes, numCorruptReplicas);
}

final int numNodes = blocksMap.numNodes(blk);
final boolean isCorrupt;
if (blk.isStriped()) {
  BlockInfoStriped sblk = (BlockInfoStriped) blk;
  isCorrupt = numCorruptReplicas != 0 &&
  numReplicas.liveReplicas() < sblk.getRealDataBlockNum();
} else {
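  // Reported fault: after every replica has been removed and the cluster
  // restarted, numCorruptReplicas and numNodes are both 0, so isCorrupt
  // stays false even though no live replica exists.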
  isCorrupt = numCorruptReplicas != 0 && numCorruptReplicas == numNodes;
}
{code}







[jira] [Created] (HDFS-13480) RBF: separate namenodeHeartbeat and routerHeartbeat into different config keys.

2018-04-19 Thread maobaolong (JIRA)
maobaolong created HDFS-13480:
-

 Summary: RBF: separate namenodeHeartbeat and routerHeartbeat into different config keys.
 Key: HDFS-13480
 URL: https://issues.apache.org/jira/browse/HDFS-13480
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: maobaolong
Assignee: maobaolong


Now, if I enable heartbeat.enable but do not want to monitor any namenode, I get an ERROR log like:


{code:java}
[2018-04-19T14:00:03.057+08:00] [ERROR] 
federation.router.Router.serviceInit(Router.java 214) [main] : Heartbeat is 
enabled but there are no namenodes to monitor
{code}

And if I disable heartbeat.enable, we cannot get any mount table updates, because of the following logic in Router.java:


{code:java}
if (conf.getBoolean(
RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE,
RBFConfigKeys.DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT)) {

  // Create status updater for each monitored Namenode
  this.namenodeHeartbeatServices = createNamenodeHeartbeatServices();
  for (NamenodeHeartbeatService hearbeatService :
  this.namenodeHeartbeatServices) {
addService(hearbeatService);
  }

  if (this.namenodeHeartbeatServices.isEmpty()) {
LOG.error("Heartbeat is enabled but there are no namenodes to monitor");
  }
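
  // Note: RouterHeartbeatService below is created inside the same
  // heartbeat-enabled branch, so turning the flag off also stops router
  // state updates; this is the coupling the report describes.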

  // Periodically update the router state
  this.routerHeartbeatService = new RouterHeartbeatService(this);
  addService(this.routerHeartbeatService);
}
{code}







[jira] [Resolved] (HDFS-13293) RBF: The RouterRPCServer should transfer CallerContext and client ip to NamenodeRpcServer

2018-04-10 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong resolved HDFS-13293.
---
Resolution: Duplicate

> RBF: The RouterRPCServer should transfer CallerContext and client ip to 
> NamenodeRpcServer
> -
>
> Key: HDFS-13293
> URL: https://issues.apache.org/jira/browse/HDFS-13293
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: maobaolong
>Priority: Major
>
> Otherwise, the namenode doesn't know the client's CallerContext.






[jira] [Created] (HDFS-13387) Make classes accessed by multiple threads thread-safe

2018-04-03 Thread maobaolong (JIRA)
maobaolong created HDFS-13387:
-

 Summary: Make classes accessed by multiple threads thread-safe
 Key: HDFS-13387
 URL: https://issues.apache.org/jira/browse/HDFS-13387
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.2.0
Reporter: maobaolong
Assignee: maobaolong


This jira will lead us to make classes such as BlockInfoContiguous thread-safe; then we will not need the NameSystemLock to lock the full flow. This is just a first step toward the plan of HDFS-8966; the general pattern is sketched below.
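
A sketch of the general pattern under discussion, guarding a block-info-like class with its own lock instead of the global namesystem lock (toy code, not the actual patch):

{code:java}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SafeBlockInfo {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private long generationStamp;

  long getGenerationStamp() {
    lock.readLock().lock();
    try {
      return generationStamp;
    } finally {
      lock.readLock().unlock();
    }
  }

  void setGenerationStamp(long gs) {
    lock.writeLock().lock();
    try {
      generationStamp = gs;
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}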






[jira] [Created] (HDFS-13293) RBF: The RouterRPCServer should transfer CallerContext and client ip to NamenodeRpcServer

2018-03-15 Thread maobaolong (JIRA)
maobaolong created HDFS-13293:
-

 Summary: RBF: The RouterRPCServer should transfer CallerContext 
and client ip to NamenodeRpcServer
 Key: HDFS-13293
 URL: https://issues.apache.org/jira/browse/HDFS-13293
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: maobaolong


Otherwise, the namenode doesn't know the client's CallerContext.
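
A minimal sketch of forwarding the context with Hadoop's public CallerContext API (the router-side wiring and the context format are assumptions):

{code:java}
import org.apache.hadoop.ipc.CallerContext;

// Before the router re-issues the client's call to the namenode,
// propagate the original caller context so the NN audit log sees it.
CallerContext origin = CallerContext.getCurrent();
String ctx = (origin != null) ? origin.getContext() : "unknown";
CallerContext.setCurrent(new CallerContext.Builder("routerOrigin:" + ctx).build());
{code}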






[jira] [Created] (HDFS-13278) Correct the mount validation logic to avoid bad mount points

2018-03-13 Thread maobaolong (JIRA)
maobaolong created HDFS-13278:
-

 Summary: Correct the mount validation logic to avoid bad mount points
 Key: HDFS-13278
 URL: https://issues.apache.org/jira/browse/HDFS-13278
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Affects Versions: 3.2.0
Reporter: maobaolong









[jira] [Created] (HDFS-13270) RBF: Router audit logger

2018-03-13 Thread maobaolong (JIRA)
maobaolong created HDFS-13270:
-

 Summary: RBF: Router audit logger
 Key: HDFS-13270
 URL: https://issues.apache.org/jira/browse/HDFS-13270
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs
Affects Versions: 3.2.0
Reporter: maobaolong


We can use a router audit logger to log the client info and command, because to the FSNamesystem audit logger every client appears to come from the router.






[jira] [Created] (HDFS-13269) After a too-many-open-files exception occurs, the standby NN never checkpoints again

2018-03-12 Thread maobaolong (JIRA)
maobaolong created HDFS-13269:
-

 Summary: After a too-many-open-files exception occurs, the standby NN never checkpoints again
 Key: HDFS-13269
 URL: https://issues.apache.org/jira/browse/HDFS-13269
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.2.0
Reporter: maobaolong


Run saveNamespace via dfsadmin.

The output is as follows:
{code:java}
saveNamespace: No image directories available!
{code}
The Namenode log shows:
{code:java}
[2018-01-13T10:32:19.903+08:00] [INFO] [Standby State Checkpointer] : 
Triggering checkpoint because there have been 10159265 txns since the last 
checkpoint, which exceeds the configured threshold 1000
[2018-01-13T10:32:19.903+08:00] [INFO] [Standby State Checkpointer] : Save 
namespace ...
...

[2018-01-13T10:37:10.539+08:00] [WARN] [1985938863@qtp-61073295-1 - Acceptor0 
HttpServer2$SelectChannelConnectorWithSafeStartup@HOST_A:50070] : EXCEPTION 
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at 
org.mortbay.jetty.nio.SelectChannelConnector$1.acceptChannel(SelectChannelConnector.java:75)
at 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:686)
at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
at 
org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
at 
org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
[2018-01-13T10:37:15.421+08:00] [ERROR] [FSImageSaver for /data0/nn of type 
IMAGE_AND_EDITS] : Unable to save image for /data0/nn
java.io.FileNotFoundException: 
/data0/nn/current/fsimage_40247283317.md5.tmp (Too many open files)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at 
org.apache.hadoop.hdfs.util.AtomicFileOutputStream.<init>(AtomicFileOutputStream.java:58)
at 
org.apache.hadoop.hdfs.util.MD5FileUtils.saveMD5File(MD5FileUtils.java:157)
at 
org.apache.hadoop.hdfs.util.MD5FileUtils.saveMD5File(MD5FileUtils.java:149)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:990)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:1039)
at java.lang.Thread.run(Thread.java:745)
[2018-01-13T10:37:15.421+08:00] [ERROR] [Standby State Checkpointer] : Error 
reported on storage directory Storage Directory /data0/nn
[2018-01-13T10:37:15.421+08:00] [WARN] [Standby State Checkpointer] : About to 
remove corresponding storage: /data0/nn
[2018-01-13T10:37:15.429+08:00] [ERROR] [Standby State Checkpointer] : 
Exception in doCheckpoint
java.io.IOException: Failed to save in any storage directories while saving 
namespace.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1176)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveNamespace(FSImage.java:1107)
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:185)
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.access$1400(StandbyCheckpointer.java:62)
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.doWork(StandbyCheckpointer.java:353)
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.access$700(StandbyCheckpointer.java:260)
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread$1.run(StandbyCheckpointer.java:280)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer$CheckpointerThread.run(StandbyCheckpointer.java:276)
...
[2018-01-13T15:52:33.783+08:00] [INFO] [Standby State Checkpointer] : Save 
namespace ...
[2018-01-13T15:52:33.783+08:00] [ERROR] [Standby State Checkpointer] : 
Exception in doCheckpoint
java.io.IOException: No image directories available!
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1152)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveNamespace(FSImage.java:1107)
at 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer.doCheckpoint(StandbyCheckpointer.java:185)
at 

[jira] [Created] (HDFS-13245) RBF: State store DBMS implementation

2018-03-07 Thread maobaolong (JIRA)
maobaolong created HDFS-13245:
-

 Summary: RBF: State store DBMS implementation
 Key: HDFS-13245
 URL: https://issues.apache.org/jira/browse/HDFS-13245
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: maobaolong









[jira] [Created] (HDFS-13241) RBF: TestRouterSafemode fails if port 8888 is in use

2018-03-07 Thread maobaolong (JIRA)
maobaolong created HDFS-13241:
-

 Summary: RBF: TestRouterSafemode fails if port 8888 is in use
 Key: HDFS-13241
 URL: https://issues.apache.org/jira/browse/HDFS-13241
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs, test
Affects Versions: 3.2.0
Reporter: maobaolong
Assignee: maobaolong









[jira] [Created] (HDFS-13226) RBF: We should throw on validation failure and refuse the mount entry

2018-03-05 Thread maobaolong (JIRA)
maobaolong created HDFS-13226:
-

 Summary: RBF: We should throw on validation failure and refuse the mount entry
 Key: HDFS-13226
 URL: https://issues.apache.org/jira/browse/HDFS-13226
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Affects Versions: 3.2.0
Reporter: maobaolong


One rule for a mount entry's source path is that it must start with '/'. Somebody didn't follow the rule and executed the following command:

{code:bash}
$ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/
{code}

But the console reports that the entry was added successfully.
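
A sketch of the missing check (the method name is hypothetical):

{code:java}
// Refuse a mount entry whose source path does not start with '/'
// instead of silently accepting it.
static void validateMountSource(String src) {
  if (src == null || !src.startsWith("/")) {
    throw new IllegalArgumentException(
        "Invalid mount entry source path (must start with '/'): " + src);
  }
}
{code}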







[jira] [Created] (HDFS-13199) Fix the missing label icon on the HDFS Router page

2018-02-27 Thread maobaolong (JIRA)
maobaolong created HDFS-13199:
-

 Summary: Fix the missing label icon on the HDFS Router page
 Key: HDFS-13199
 URL: https://issues.apache.org/jira/browse/HDFS-13199
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: federation, hdfs
Affects Versions: 3.0.0, 3.2.0
Reporter: maobaolong


This bug is a typo: "decommisioned" should be "decommissioned".






[jira] [Created] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-02-26 Thread maobaolong (JIRA)
maobaolong created HDFS-13195:
-

 Summary: DataNode conf page cannot display the current value after reconfig
 Key: HDFS-13195
 URL: https://issues.apache.org/jira/browse/HDFS-13195
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.1
Reporter: maobaolong


branch-2.7 now supports reconfiguring dfs.datanode.data.dir, but after I reconfigure this key, the conf page still shows the old value.

The reason is the following code in DatanodeHttpServer:


{code:java}
public DatanodeHttpServer(final Configuration conf,
    final DataNode datanode,
    final ServerSocketChannel externalHttpChannel)
    throws IOException {
  this.conf = conf;

  // A copy of the configuration is taken at construction time, so a later
  // reconfiguration of the datanode is never reflected by the info server.
  Configuration confForInfoServer = new Configuration(conf);
  confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
  HttpServer2.Builder builder = new HttpServer2.Builder()
      .setName("datanode")
      .setConf(confForInfoServer)
      .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
      .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
      .addEndpoint(URI.create("http://localhost:0"))
      .setFindPort(true);

  this.infoServer = builder.build();
{code}







[jira] [Resolved] (HDFS-13034) The DFSUsed value is bigger than the Capacity

2018-01-18 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong resolved HDFS-13034.
---
Resolution: Won't Fix

This situation may look surprising, but it is not an issue.

> The DFSUsed value is bigger than the Capacity
> --
>
> Key: HDFS-13034
> URL: https://issues.apache.org/jira/browse/HDFS-13034
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Priority: Minor
>
> ||Node||Last contact||Admin State||Capacity||Used||Non DFS Used||Remaining||Blocks||Block pool used||Failed Volumes||Version||
> |A|0|In Service|20.65 TB|18.26 TB|0 B|1.27 TB|24330|2.57 TB (12.42%)|0|2.7.1|
> |B|2|In Service|5.47 TB|12.78 TB|0 B|1.46 TB|27657|2.65 TB (48.37%)|0|2.7.1|






[jira] [Created] (HDFS-12806) Apps killed via yarn application -kill or the RM web page do not log their information to the userlogs directory, so jobhistory cannot display them.

2017-11-12 Thread maobaolong (JIRA)
maobaolong created HDFS-12806:
-

 Summary: Apps killed via yarn application -kill or the RM web page do not log their information to the userlogs directory, so jobhistory cannot display them.
 Key: HDFS-12806
 URL: https://issues.apache.org/jira/browse/HDFS-12806
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.1
 Environment: Using `yarn application -kill` successfully kills the job, but the app information is not generated into "/userlogs/history/done_intermediate", so the jobhistory server cannot display the job information.

The `yarn application -kill` command itself works well; only the history logging is missing.
Reporter: maobaolong









[jira] [Created] (HDFS-12803) We should not lock the FSNamesystem even when we operate on a subdirectory; we should refine the lock

2017-11-11 Thread maobaolong (JIRA)
maobaolong created HDFS-12803:
-

 Summary: We should not lock the FSNamesystem even when we operate on a subdirectory; we should refine the lock
 Key: HDFS-12803
 URL: https://issues.apache.org/jira/browse/HDFS-12803
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.0.0-alpha3, 2.7.1
Reporter: maobaolong


An example:

If one client is doing a mkdir or deleting a file, other clients must wait for the FSNamesystem lock before doing any operation.

I think we have to refine the lock; we could lock only the parent inode.
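
A toy illustration of the direction, taking a lock on the parent inode only rather than on the whole namesystem (the real change would be far more involved):

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Toy model: each directory inode carries its own lock, so a mkdir or
// delete in one subtree no longer blocks operations elsewhere.
class DirInode {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void addChild(String name) {
    lock.writeLock().lock(); // lock only this parent inode
    try {
      // ... insert the child entry into this directory ...
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}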






[jira] [Resolved] (HDFS-11752) getNonDfsUsed returns 0 if reserved is bigger than actualNonDfsUsed

2017-06-29 Thread maobaolong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

maobaolong resolved HDFS-11752.
---
Resolution: Not A Problem

> getNonDfsUsed returns 0 if reserved is bigger than actualNonDfsUsed
> ---
>
> Key: HDFS-11752
> URL: https://issues.apache.org/jira/browse/HDFS-11752
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs
>Affects Versions: 2.7.1
>Reporter: maobaolong
>  Labels: datanode, hdfs
> Fix For: 2.7.1
>
>
> {code}
> public long getNonDfsUsed() throws IOException {
> long actualNonDfsUsed = getActualNonDfsUsed();
> if (actualNonDfsUsed < reserved) {
>   return 0L;
> }
> return actualNonDfsUsed - reserved;
>   }
> {code}
> The code block above is the function that calculates nonDfsUsed, but it can
> unexpectedly force the result to 0L, as in the following situation:
> du.reserved  = 50G
> Disk Capacity = 2048G
> Disk Available = 2000G
> Dfs used = 30G
> usage.getUsed() = dirFile.getTotalSpace() - dirFile.getFreeSpace()
> = 2048G - 2000G
> = 48G
> getActualNonDfsUsed  =  usage.getUsed() - getDfsUsed()
>   =  48G - 30G
>   = 18G
> Since 18G < 50G, inside `getNonDfsUsed` we have actualNonDfsUsed < reserved,
> so nonDfsUsed returns 0. Does that logic make sense?






[jira] [Created] (HDFS-11752) getNonDfsUsed returns 0 if reserved is bigger than actualNonDfsUsed

2017-05-04 Thread maobaolong (JIRA)
maobaolong created HDFS-11752:
-

 Summary: getNonDfsUsed returns 0 if reserved is bigger than actualNonDfsUsed
 Key: HDFS-11752
 URL: https://issues.apache.org/jira/browse/HDFS-11752
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs
Affects Versions: 2.7.1
Reporter: maobaolong
 Fix For: 2.7.1


{code}
public long getNonDfsUsed() throws IOException {
long actualNonDfsUsed = getActualNonDfsUsed();
if (actualNonDfsUsed < reserved) {
  return 0L;
}
return actualNonDfsUsed - reserved;
  }
{code}

The code block above is the function that calculates nonDfsUsed, but it can unexpectedly force the result to 0L, as in the following situation:

du.reserved  = 50G
Disk Capacity = 2048G
Disk Available = 2000G
Dfs used = 30G

usage.getUsed() = dirFile.getTotalSpace() - dirFile.getFreeSpace()
= 2048G - 2000G
= 48G
getActualNonDfsUsed  =  usage.getUsed() - getDfsUsed()
  =  48G - 30G
  = 18G
Since 18G < 50G, inside `getNonDfsUsed` we have actualNonDfsUsed < reserved, so nonDfsUsed returns 0. Does that logic make sense?


