[jira] [Commented] (HDFS-5241) Provide alternate queuing audit logger to reduce logging contention

2015-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481444#comment-14481444
 ] 

Arpit Agarwal commented on HDFS-5241:
-

Thanks for confirming that, Kihwal.

 Provide alternate queuing audit logger to reduce logging contention
 ---

 Key: HDFS-5241
 URL: https://issues.apache.org/jira/browse/HDFS-5241
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 2.3.0

 Attachments: HDFS-5241.patch, HDFS-5241.patch


 The default audit logger has extremely poor performance.  The internal 
 synchronization of log4j causes massive contention between the call handlers 
 (100 by default), which drastically limits the throughput of the NN.
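
A minimal sketch of the queuing idea, assuming a hypothetical AsyncAuditLogger class (illustrative names, not the actual patch): handler threads enqueue audit entries and a single daemon thread drains the queue to log4j, so the handlers never contend on log4j's internal lock.

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncAuditLogger {
  private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(128 * 1024);

  public AsyncAuditLogger(final org.apache.log4j.Logger sink) {
    Thread drainer = new Thread(() -> {
      try {
        while (true) {
          sink.info(queue.take()); // only this thread ever touches log4j
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, "audit-drainer");
    drainer.setDaemon(true);
    drainer.start();
  }

  /** Called by RPC handlers; contends only on the queue, not on log4j. */
  public void logAuditEvent(String entry) {
    queue.offer(entry); // drop-on-full policy shown; the real patch may block instead
  }
}
{code}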



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7936) Erasure coding: resolving conflicts in the branch when merging trunk changes.

2015-04-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7936:

Summary: Erasure coding: resolving conflicts in the branch when merging 
trunk changes.   (was: Erasure coding: resolving conflicts in the branch when 
merging)

 Erasure coding: resolving conflicts in the branch when merging trunk changes. 
 --

 Key: HDFS-7936
 URL: https://issues.apache.org/jira/browse/HDFS-7936
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-7936-001.patch, HDFS-7936-002.patch, 
 HDFS-7936-003.patch, HDFS-7936-004.patch, HDFS-7936-005.patch


 This will be used to track and resolve conflicts when merging trunk changes. 
 Below is a list of trunk changes that have caused conflicts (updated weekly):
 # HDFS-7903
 # HDFS-7435
 # HDFS-7930
 # HDFS-7960
 # HDFS-7742
 # HDFS-8035



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7936) Erasure coding: resolving conflicts in the branch when merging

2015-04-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7936:

Description: 
This will be used to track and resolve conflicts when merging trunk changes. 

Below is a list of trunk changes that have caused conflicts (updated weekly):
# HDFS-7903
# HDFS-7435
# HDFS-7930
# HDFS-7960
# HDFS-7742
# HDFS-8035

  was:
This will be used to track and resolve conflicts when merging trunk changes. 

Below is a list of trunk changes that have caused conflicts (updated weekly):
# HDFS-7903
# HDFS-7435
# HDFS-7930
# HDFS-7960
# HDFS-7742


 Erasure coding: resolving conflicts in the branch when merging
 --

 Key: HDFS-7936
 URL: https://issues.apache.org/jira/browse/HDFS-7936
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-7936-001.patch, HDFS-7936-002.patch, 
 HDFS-7936-003.patch, HDFS-7936-004.patch, HDFS-7936-005.patch


 This will be used to track and resolve conflicts when merging trunk changes. 
 Below is a list of trunk changes that have caused conflicts (updated weekly):
 # HDFS-7903
 # HDFS-7435
 # HDFS-7930
 # HDFS-7960
 # HDFS-7742
 # HDFS-8035



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7936) Erasure coding: resolving conflicts in the branch when merging

2015-04-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7936:

Attachment: HDFS-7936-005.patch

This is to resolve conflicts with HDFS-8035.

 Erasure coding: resolving conflicts in the branch when merging
 --

 Key: HDFS-7936
 URL: https://issues.apache.org/jira/browse/HDFS-7936
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-7936-001.patch, HDFS-7936-002.patch, 
 HDFS-7936-003.patch, HDFS-7936-004.patch, HDFS-7936-005.patch


 This will be used to track and resolve conflicts when merging trunk changes. 
 Below is a list of trunk changes that have caused conflicts (updated weekly):
 # HDFS-7903
 # HDFS-7435
 # HDFS-7930
 # HDFS-7960
 # HDFS-7742
 # HDFS-8035



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2015-04-06 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-8068:


 Summary: Do not retry rpc calls If the proxy contains unresolved 
address
 Key: HDFS-8068
 URL: https://issues.apache.org/jira/browse/HDFS-8068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8055) NullPointerException when topology script is missing.

2015-04-06 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8055:
---
Fix Version/s: 2.7.0
   Status: Patch Available  (was: Open)

 NullPointerException when topology script is missing.
 -

 Key: HDFS-8055
 URL: https://issues.apache.org/jira/browse/HDFS-8055
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: 2.7.0

 Attachments: hdfs-8055.001.patch


 We've received reports that the NameNode can get a NullPointerException when 
 the topology script is missing. This issue tracks investigating whether or 
 not we can improve the validation logic and give a more informative error 
 message.
 Here is a sample stack trace:
 Getting NPE from HDFS:
 
 2015-02-06 23:02:12,250 ERROR [pool-4-thread-1] util.HFileV1Detector: Got 
 exception while reading trailer for 
 file:hdfs://hqhd02nm01.pclc0.merkle.local:8020/hbase/.META./1028785192/info/1490a396aea448b693da563f76a28486
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
 at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.sortLocatedBlocks(DatanodeManager.java:359)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1789)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
 
 at org.apache.hadoop.ipc.Client.call(Client.java:1468)
 at org.apache.hadoop.ipc.Client.call(Client.java:1399)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
 at com.sun.proxy.$Proxy14.getBlockLocations(Unknown Source)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:254)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy15.getBlockLocations(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1220)
 at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1210)
 at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1200)
 at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:271)
 at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:238)
 at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:231)
 at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1498)
 at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:302)
 at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:298)
 at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:298)
 at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
 at 
 

[jira] [Updated] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2015-04-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-8068:
-
Description: When the InetSocketAddress object happens to be unresolvable 
(e.g. due to a transient DNS issue), the rpc proxy object will not be usable 
since the client will throw UnknownHostException when a Connection object is 
created. If FailoverOnNetworkExceptionRetry is used as in the standard HA 
failover proxy, the call will be retried, but this will never recover. 
Instead, the validity of the address must be checked on proxy creation, 
throwing if it is invalid.

 Do not retry rpc calls If the proxy contains unresolved address
 ---

 Key: HDFS-8068
 URL: https://issues.apache.org/jira/browse/HDFS-8068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee

 When the InetSocketAddress object happens to be unresolvable (e.g. due to a 
 transient DNS issue), the rpc proxy object will not be usable since the 
 client will throw UnknownHostException when a Connection object is created. 
 If FailoverOnNetworkExceptionRetry is used as in the standard HA failover 
 proxy, the call will be retried, but this will never recover.  Instead, the 
 validity of the address must be checked on proxy creation, throwing if it is 
 invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-04-06 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481715#comment-14481715
 ] 

Charles Lamb commented on HDFS-7923:


Here is a description of the heuristic that my patch implements for the NN 
to determine what to send back in response to the "should I send a BR?" 
question. In the vein of keeping it relatively simple, let's consider 3 
parameters:


*   The max # of FBR requests that the NN is willing to process at any given 
time (to be called 'dfs.namenode.max.concurrent.block.reports', with a default 
of Integer.MAX_VALUE)
*   The DN's configured block report interval (dfs.blockreport.intervalMsec). 
This parameter already exists.
*   The max time we ever want the NN to go without receiving an FBR from a 
given DN ('dfs.blockreport.max.deferMsec').

If the time since the last FBR received from the DN is less than 
dfs.blockreport.intervalMsec, then it returns false (No, don't send an FBR). 
In theory, this should never happen if the DN is obeying 
dfs.blockreport.intervalMsec.

If the number of block reports currently being processed by an NN is less than 
dfs.namenode.max.concurrent.block.reports, and the time since it last received 
an FBR from the DN sending the heartbeat is greater than 
dfs.blockreport.intervalMsec, then the NN automatically answers true (Yes, 
send along an FBR).

If the number of BRs being processed by an NN is greater than or equal to 
dfs.namenode.max.concurrent.block.reports when it receives the heartbeat, then 
it checks the last time that it received an FBR from the DN sending the 
heartbeat and if it's greater than dfs.blockreport.max.deferMsec, then it 
returns true (Yes, send along an FBR). If the time-since-last-FBR is less 
than dfs.blockreport.max.deferMsec, then it returns false.
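
A method-level sketch of the three-parameter decision described above (field and parameter names are illustrative, not the patch's actual code):

{code}
// Decide, on a heartbeat, whether the DN should send a full block report.
boolean shouldSendFullBlockReport(long now, long lastFbrTimeMsec,
    int fbrsInProgress, int maxConcurrentBrs,
    long brIntervalMsec, long maxDeferMsec) {
  long sinceLastFbr = now - lastFbrTimeMsec;
  if (sinceLastFbr < brIntervalMsec) {
    return false; // DN is early; should not happen if it obeys the interval
  }
  if (fbrsInProgress < maxConcurrentBrs) {
    return true;  // NN has spare capacity; take the FBR now
  }
  // NN is saturated: defer unless this DN has already waited too long.
  return sinceLastFbr > maxDeferMsec;
}
{code}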


 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Assignee: Charles Lamb
 Attachments: HDFS-7923.000.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8032) Erasure coding: shrink size of indices in BlockInfoStriped

2015-04-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481828#comment-14481828
 ] 

Zhe Zhang commented on HDFS-8032:
-

Thanks Yi for the comment. I should have written a clearer description. This 
JIRA is mainly about cutting the number of indices. We can certainly shrink the 
size of each index as you suggested.

 Erasure coding: shrink size of indices in BlockInfoStriped
 --

 Key: HDFS-8032
 URL: https://issues.apache.org/jira/browse/HDFS-8032
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 In a _m+k_ erasure coding schema, the first _(m+k)_ slots in 
 {{BlockInfoStriped}} do not need indexing, since the _ith_ storage directly 
 maps to the _ith_ block in the erasure coding block group.
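
A small sketch of the implicit mapping (illustrative only, not the actual {{BlockInfoStriped}} layout): only storages beyond the first _(m+k)_ would need an explicitly stored index.

{code}
// Hypothetical accessor: slot i < m + k holds block i by construction.
byte getBlockIndex(int storageSlot, int m, int k, byte[] extraIndices) {
  if (storageSlot < m + k) {
    return (byte) storageSlot;               // implicit: storage i -> block i
  }
  return extraIndices[storageSlot - (m + k)]; // only the extras are indexed
}
{code}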



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2015-04-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481923#comment-14481923
 ] 

Kihwal Lee commented on HDFS-8068:
--

The patch adds a check in {{NameNodeProxies#createNonHAProxy()}}. If the 
address is unresolved (i.e. cannot be resolved), it throws. This makes the rpc 
proxy creation fail for both the HA and non-HA cases. In HA, failover proxy 
providers get this exception and throw a {{RuntimeException}} in {{getProxy()}}, 
which is called by {{RetryInvocationHandler}} in its ctor or in {{invoke()}} 
during failover.  If {{ConfiguredFailoverProxyProvider}} is used and the 
initial proxy object was okay, the second {{getProxy()}} call from {{invoke()}} 
will throw. In this case, the particular call will fail instead of the proxy 
creation. The (ha)proxy in the {{DFSClient}} instance is still intact, so 
creation of the underlying non-HA proxy will be retried on the next call.
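
A minimal sketch of the check described above (the helper name and exact placement are assumptions, not the literal patch):

{code}
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

// Called from proxy creation, e.g. NameNodeProxies#createNonHAProxy().
static void checkAddressResolved(InetSocketAddress addr)
    throws UnknownHostException {
  if (addr.isUnresolved()) {
    // Fail proxy creation up front instead of letting every RPC retry
    // through an UnknownHostException thrown in the Connection ctor.
    throw new UnknownHostException("Cannot resolve " + addr.getHostString());
  }
}
{code}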

 Do not retry rpc calls If the proxy contains unresolved address
 ---

 Key: HDFS-8068
 URL: https://issues.apache.org/jira/browse/HDFS-8068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-8068.v1.patch


 When the InetSocketAddress object happens to be unresolvable (e.g. due to a 
 transient DNS issue), the rpc proxy object will not be usable since the 
 client will throw UnknownHostException when a Connection object is created. 
 If FailoverOnNetworkExceptionRetry is used as in the standard HA failover 
 proxy, the call will be retried, but this will never recover.  Instead, the 
 validity of the address must be checked on proxy creation, throwing if it is 
 invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7969) Erasure coding: NameNode support for lease recovery of striped block groups

2015-04-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7969:

Attachment: HDFS-7969-001.patch

Thanks Yi for the review! The updated patch addresses the two comments, and I 
just committed it. I guess we can add more tests after HDFS-8064.

 Erasure coding: NameNode support for lease recovery of striped block groups
 ---

 Key: HDFS-7969
 URL: https://issues.apache.org/jira/browse/HDFS-7969
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7969-000.patch, HDFS-7969-001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7969) Erasure coding: NameNode support for lease recovery of striped block groups

2015-04-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-7969.
-
   Resolution: Fixed
Fix Version/s: HDFS-7285

 Erasure coding: NameNode support for lease recovery of striped block groups
 ---

 Key: HDFS-7969
 URL: https://issues.apache.org/jira/browse/HDFS-7969
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Fix For: HDFS-7285

 Attachments: HDFS-7969-000.patch, HDFS-7969-001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-04-06 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481831#comment-14481831
 ] 

Aaron T. Myers commented on HDFS-6440:
--

Sorry, [~jesse_yates], been busy. I got partway through a review of the patch a 
few weeks ago but haven't gotten back to it yet. Will post my feedback here 
soon.

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-6440-trunk-v1.patch, hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.
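
For illustration, a hypothetical client-side configuration of a 3-NameNode nameservice, assuming the patch keeps the existing HA key format (the third entry in {{dfs.ha.namenodes.*}} is the new part; hostnames are made up):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

Configuration conf = new HdfsConfiguration();
conf.set("dfs.nameservices", "mycluster");
conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2,nn3"); // three NNs, not two
conf.set("dfs.namenode.rpc-address.mycluster.nn1", "host1:8020");
conf.set("dfs.namenode.rpc-address.mycluster.nn2", "host2:8020");
conf.set("dfs.namenode.rpc-address.mycluster.nn3", "host3:8020");
{code}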



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8038) PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific format.

2015-04-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-8038:
-
Attachment: HDFS-8038.02.patch

Thanks [~cnauroth] for the review. I've fixed the path issues in 
PBImageTextWriter.java and updated the patch.

 PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific 
 format.
 -

 Key: HDFS-8038
 URL: https://issues.apache.org/jira/browse/HDFS-8038
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HDFS-8038.00.patch, HDFS-8038.01.patch, 
 HDFS-8038.02.patch


 PBImageDelimitedTextWriter#getEntry takes the HDFS path and passes it 
 through java.io.File, which causes platform-specific behavior, as shown by the 
 actual results of TestOfflineImageViewer#testPBDelimitedWriter() on Windows.
 {code}
 expected:[/emptydir, /dir0, /dir1/file2, /dir1, /dir1/file3, /dir2/file3, 
 /dir1/file0, /dir1/file1, /dir2/file1, /dir2/file2, /dir2, /dir0/file0, 
 /dir2/file0, /dir0/file1, /dir0/file2, /dir0/file3, /xattr] 
 but was:[\dir0, \dir0\file3, \dir0\file2, \dir0\file1, \xattr, \emptydir, 
 \dir0\file0, \dir1\file1, \dir1\file0, \dir1\file3, \dir1\file2, \dir2\file3, 
 \, \dir1, \dir2\file0, \dirContainingInvalidXMLChar#0;here, \dir2, 
 \dir2\file2, \dir2\file1]
 {code}
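
A sketch of the platform-neutral direction (assuming the entry is built from a parent directory and a name; the actual patch may differ): build HDFS paths with {{org.apache.hadoop.fs.Path}}, which always joins with '/', instead of {{java.io.File}}, which joins with the platform separator.

{code}
import org.apache.hadoop.fs.Path;

static String buildEntryPath(String parentDir, String name) {
  // java.io.File would produce "\dir0\file0" on Windows;
  // Path always produces "/dir0/file0".
  return new Path(parentDir, name).toString();
}
{code}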



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-04-06 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-7923:
---
Attachment: HDFS-7923.000.patch

Attached is a patch that implements the behavior I described.

 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Assignee: Charles Lamb
 Attachments: HDFS-7923.000.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-7923) The DataNodes should rate-limit their full block reports by asking the NN on heartbeat messages

2015-04-06 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7923 started by Charles Lamb.
--
 The DataNodes should rate-limit their full block reports by asking the NN on 
 heartbeat messages
 ---

 Key: HDFS-7923
 URL: https://issues.apache.org/jira/browse/HDFS-7923
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe
Assignee: Charles Lamb
 Attachments: HDFS-7923.000.patch


 The DataNodes should rate-limit their full block reports.  They can do this 
 by first sending a heartbeat message to the NN with an optional boolean set 
 which requests permission to send a full block report.  If the NN responds 
 with another optional boolean set, the DN will send an FBR... if not, it will 
 wait until later.  This can be done compatibly with optional fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8070) ShortCircuitShmManager goes into dead mode, stopping all operations

2015-04-06 Thread Gopal V (JIRA)
Gopal V created HDFS-8070:
-

 Summary: ShortCircuitShmManager goes into dead mode, stopping all 
operations
 Key: HDFS-8070
 URL: https://issues.apache.org/jira/browse/HDFS-8070
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: caching
Affects Versions: 2.8.0
Reporter: Gopal V


The HDFS ShortCircuitShm layer keeps the task locked up during multi-threaded 
split-generation.

I hit this immediately after I upgraded the data, so I wonder if the 
ShortCircuitShm wire protocol has trouble when a 2.8.0 DN talks to a 2.7.0 
client?

{code}
2015-04-06 00:04:30,780 INFO [ORC_GET_SPLITS #3] orc.OrcInputFormat: ORC 
pushdown predicate: leaf-0 = (IS_NULL ss_sold_date_sk)
expr = (not leaf-0)
2015-04-06 00:04:30,781 ERROR [ShortCircuitCache_SlotReleaser] 
shortcircuit.ShortCircuitCache: ShortCircuitCache(0x29e82045): failed to 
release short-circuit shared memory slot Slot(slotIdx=2, 
shm=DfsClientShm(a86ee34576d93c4964005d90b0d97c38)) by sending 
ReleaseShortCircuitAccessRequestProto to /grid/0/cluster/hdfs/dn_socket.  
Closing shared memory segment.
java.io.IOException: ERROR_INVALID: there is no shared memory segment 
registered with shmId a86ee34576d93c4964005d90b0d97c38
at 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run(ShortCircuitCache.java:208)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2015-04-06 00:04:30,781 INFO [ORC_GET_SPLITS #5] orc.OrcInputFormat: ORC 
pushdown predicate: leaf-0 = (IS_NULL ss_sold_date_sk)
expr = (not leaf-0)
2015-04-06 00:04:30,781 WARN [ShortCircuitCache_SlotReleaser] 
shortcircuit.DfsClientShmManager: EndpointShmManager(172.19.128.60:50010, 
parent=ShortCircuitShmManager(5e763476)): error shutting down shm: got 
IOException calling shutdown(SHUT_RDWR)
java.nio.channels.ClosedChannelException
at 
org.apache.hadoop.util.CloseableReferenceCount.reference(CloseableReferenceCount.java:57)
at 
org.apache.hadoop.net.unix.DomainSocket.shutdown(DomainSocket.java:387)
at 
org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager$EndpointShmManager.shutdown(DfsClientShmManager.java:378)
at 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run(ShortCircuitCache.java:223)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2015-04-06 00:04:30,783 INFO [ORC_GET_SPLITS #7] orc.OrcInputFormat: ORC 
pushdown predicate: leaf-0 = (IS_NULL cs_sold_date_sk)
expr = (not leaf-0)
2015-04-06 00:04:30,785 ERROR [ShortCircuitCache_SlotReleaser] 
shortcircuit.ShortCircuitCache: ShortCircuitCache(0x29e82045): failed to 
release short-circuit shared memory slot Slot(slotIdx=4, 
shm=DfsClientShm(a86ee34576d93c4964005d90b0d97c38)) by sending 
ReleaseShortCircuitAccessRequestProto to /grid/0/cluster/hdfs/dn_socket.  
Closing shared memory segment.
java.io.IOException: ERROR_INVALID: there is no shared memory segment 
registered with shmId a86ee34576d93c4964005d90b0d97c38
at 
org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache$SlotReleaser.run(ShortCircuitCache.java:208)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

Looks like a double free-fd condition?

{code}

[jira] [Commented] (HDFS-6666) Abort NameNode and DataNode startup if security is enabled but block access token is not enabled.

2015-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481610#comment-14481610
 ] 

Arpit Agarwal commented on HDFS-6666:
-

Hi Vijay, the code change looks fine. You don't need the 
{{UserGroupInformation.isSecurityEnabled()}} clause in 
{{DataNode#checkSecureConfig}}. Also suggest rewording _when clients attempt to 
talk to a DataNode_ to _when clients attempt to connect to DataNodes_.

The behavior of {{TestSecureNameNode#testName}} has changed. We used to log in 
as user1 using a keytab; now the test runs as the currently logged-in user.

 Abort NameNode and DataNode startup if security is enabled but block access 
 token is not enabled.
 -

 Key: HDFS-6666
 URL: https://issues.apache.org/jira/browse/HDFS-6666
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, security
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Vijay Bhat
Priority: Minor
 Attachments: HDFS-6666.001.patch


 Currently, if security is enabled by setting hadoop.security.authentication 
 to kerberos, but HDFS block access tokens are disabled by setting 
 dfs.block.access.token.enable to false (which is the default), then the 
 NameNode logs an error and proceeds, and the DataNode proceeds without even 
 logging an error.  This jira proposes that it's invalid to turn on 
 security but not turn on block access tokens, and that it would be better to 
 fail fast and abort the daemons during startup if this happens.
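
A hedged sketch of the proposed fail-fast check as it might look in daemon startup (a fragment under stated assumptions, not the attached patch; {{conf}} is the daemon's Configuration):

{code}
// Abort startup when Kerberos is on but block access tokens are off.
if (UserGroupInformation.isSecurityEnabled()
    && !conf.getBoolean(DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY,
        DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_DEFAULT)) {
  throw new IOException("Security is enabled but block access tokens are "
      + "disabled; set dfs.block.access.token.enable=true");
}
{code}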



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2015-04-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-8068:
-
Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

 Do not retry rpc calls If the proxy contains unresolved address
 ---

 Key: HDFS-8068
 URL: https://issues.apache.org/jira/browse/HDFS-8068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
 Attachments: HDFS-8068.v1.patch


 When the InetSocketAddress object happens to be unresolvable (e.g. due to a 
 transient DNS issue), the rpc proxy object will not be usable since the 
 client will throw UnknownHostException when a Connection object is created. 
 If FailoverOnNetworkExceptionRetry is used as in the standard HA failover 
 proxy, the call will be retried, but this will never recover.  Instead, the 
 validity of the address must be checked on proxy creation, throwing if it is 
 invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-6666) Abort NameNode and DataNode startup if security is enabled but block access token is not enabled.

2015-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481610#comment-14481610
 ] 

Arpit Agarwal edited comment on HDFS-6666 at 4/6/15 6:54 PM:
-

Hi Vijay, the code change looks fine. You don't need the 
{{UserGroupInformation.isSecurityEnabled()}} clause in 
{{DataNode#checkSecureConfig}}. Also suggest rewording _when clients attempt to 
talk to a DataNode_ to _when clients attempt to connect to DataNodes_.

The behavior of {{TestSecureNameNode#testName}} has changed. We used to log in 
as user1 using a keytab; now the test runs as the currently logged-in user. Was 
this intentional?


was (Author: arpitagarwal):
Hi Vijay, the code change looks fine. You don't need the 
{{UserGroupInformation.isSecurityEnabled()}} clause in 
{{DataNode#checkSecureConfig}}. Also suggest rewording _when clients attempt to 
talk to a DataNode_ to _when clients attempt to connect to DataNodes_.

The behavior of {{TestSecureNameNode#testName}} has changed. We used to log in 
as user1 using a keytab; now the test runs as the currently logged-in user.

 Abort NameNode and DataNode startup if security is enabled but block access 
 token is not enabled.
 -

 Key: HDFS-6666
 URL: https://issues.apache.org/jira/browse/HDFS-6666
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode, security
Affects Versions: 3.0.0, 2.5.0
Reporter: Chris Nauroth
Assignee: Vijay Bhat
Priority: Minor
 Attachments: HDFS-6666.001.patch


 Currently, if security is enabled by setting hadoop.security.authentication 
 to kerberos, but HDFS block access tokens are disabled by setting 
 dfs.block.access.token.enable to false (which is the default), then the 
 NameNode logs an error and proceeds, and the DataNode proceeds without even 
 logging an error.  This jira proposes that it's invalid to turn on 
 security but not turn on block access tokens, and that it would be better to 
 fail fast and abort the daemons during startup if this happens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-04-06 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481709#comment-14481709
 ] 

Jesse Yates commented on HDFS-6440:
---

Us too. We are waiting on a committer to have time to look at it. Heard from Lei 
that he is happy with the state and has passed it on to [~atm] for review and 
commit, but that's the last I heard about any progress (that was mid-February).

[~patrickwhite] maybe you can get one of the FB committers to help get it 
committed? I'm just hesitant to do _another_ rebase of this patch only to not 
have it be committed.

Honestly, I'm surprised that the various companies that have a stake in HDFS 
being successful in production haven't been more supportive of getting this 
patch committed.

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-6440-trunk-v1.patch, hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8032) Erasure coding: shrink size of indices in BlockInfoStriped

2015-04-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-8032:

Description: In a _m+k_ erasure coding schema, the first _(m+k)_ slots in 
{{BlockInfoStriped}} do not need indexing, since the _ith_ storage directly 
maps to the _ith_ block in the erasure coding block group.

 Erasure coding: shrink size of indices in BlockInfoStriped
 --

 Key: HDFS-8032
 URL: https://issues.apache.org/jira/browse/HDFS-8032
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 In a _m+k_ erasure coding schema, the first _(m+k)_ slots in 
 {{BlockInfoStriped}} do not need indexing, since the _ith_ storage directly 
 maps to the _ith_ block in the erasure coding block group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8055) NullPointerException when topology script is missing.

2015-04-06 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-8055:
---
Attachment: hdfs-8055.001.patch

Handles the null returned by the ShellExecutor correctly in case of busted 
shell scripts.

Added test scripts with correct and incorrect handling of topology.
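
A sketch of the null guard being described, assuming the rack resolution path where the NPE surfaced ({{dnsToSwitchMapping}}, {{hosts}}, and {{LOG}} are assumed surrounding fields; this is illustrative, not the literal patch):

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.net.NetworkTopology;

List<String> rNames = dnsToSwitchMapping.resolve(hosts);
if (rNames == null) {
  LOG.error("The resolve call returned null; check the topology script "
      + "configured via net.topology.script.file.name");
  rNames = new ArrayList<>(Collections.nCopies(hosts.size(),
      NetworkTopology.DEFAULT_RACK)); // fall back to the default rack
}
{code}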

 NullPointerException when topology script is missing.
 -

 Key: HDFS-8055
 URL: https://issues.apache.org/jira/browse/HDFS-8055
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Anu Engineer
Assignee: Anu Engineer
 Attachments: hdfs-8055.001.patch


 We've received reports that the NameNode can get a NullPointerException when 
 the topology script is missing. This issue tracks investigating whether or 
 not we can improve the validation logic and give a more informative error 
 message.
 Here is a sample stack trace:
 Getting NPE from HDFS:
 
 2015-02-06 23:02:12,250 ERROR [pool-4-thread-1] util.HFileV1Detector: Got 
 exception while reading trailer for 
 file:hdfs://hqhd02nm01.pclc0.merkle.local:8020/hbase/.META./1028785192/info/1490a396aea448b693da563f76a28486
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
 at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.sortLocatedBlocks(DatanodeManager.java:359)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1789)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
 
 at org.apache.hadoop.ipc.Client.call(Client.java:1468)
 at org.apache.hadoop.ipc.Client.call(Client.java:1399)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
 at com.sun.proxy.$Proxy14.getBlockLocations(Unknown Source)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:254)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
 at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy15.getBlockLocations(Unknown Source)
 at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1220)
 at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1210)
 at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1200)
 at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:271)
 at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:238)
 at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:231)
 at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1498)
 at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:302)
 at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:298)
 at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:298)
 at 

[jira] [Commented] (HDFS-8050) Separate the client conf key from DFSConfigKeys

2015-04-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481674#comment-14481674
 ] 

Steve Loughran commented on HDFS-8050:
--

Given past experience, moving stuff out of {{DFSConfigKeys}} breaks code 
downstream.

If the client/impl stuff is split into separate interfaces, then 
{{DFSConfigKeys}} must declare itself as implementing/extending all of them, to 
pull them back into place.
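
A sketch of that compatibility pattern (hypothetical names): Java constant inheritance lets the old class re-export keys that move to new interfaces.

{code}
public interface HdfsClientConfigKeys {          // hypothetical new home
  String DFS_CLIENT_EXAMPLE_KEY = "dfs.client.example.key";
}

// Downstream code referencing DFSConfigKeys.DFS_CLIENT_EXAMPLE_KEY keeps
// compiling because the constant is inherited from the interface.
public class DFSConfigKeys implements HdfsClientConfigKeys {
  // server-side/impl keys stay declared here
}
{code}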

 Separate the client conf key from DFSConfigKeys
 ---

 Key: HDFS-8050
 URL: https://issues.apache.org/jira/browse/HDFS-8050
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze

 Currently, all the conf keys are in DFSConfigKeys.  We should separate the 
 public client DFSConfigKeys to a new class in org.apache.hadoop.hdfs.client 
 as described by [~wheat9] in HDFS-6566.
 For the private conf keys, they may be moved to a new class in 
 org.apache.hadoop.hdfs.client.impl.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-04-06 Thread Patrick White (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481679#comment-14481679
 ] 

Patrick White commented on HDFS-6440:
-

We're pretty interested in this as well; how's it coming?

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-6440-trunk-v1.patch, hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Josh Elser (JIRA)
Josh Elser created HDFS-8069:


 Summary: Tracing implementation on DFSInputStream seriously 
degrades performance
 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical


I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
serious performance impact when Accumulo registers itself as a SpanReceiver.

The context of the test in which I noticed the impact is that an Accumulo 
process reads a series of updates from a write-ahead log. This is just reading 
a series of Writable objects from a file in HDFS. With tracing enabled, I 
waited for at least 10 minutes and the server still hadn't read a ~300MB file.

Doing a poor-man's inspection via repeated thread dumps, I always see something 
like the following:

{noformat}
"replication task 2" daemon prio=10 tid=0x02842800 nid=0x794d runnable 
[0x7f6c7b1ec000]
   java.lang.Thread.State: RUNNABLE
at 
java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
at org.apache.htrace.Tracer.deliver(Tracer.java:80)
at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
- locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
at org.apache.htrace.TraceScope.close(TraceScope.java:78)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
- locked 0x00079fa39a48 (a org.apache.hadoop.hdfs.DFSInputStream)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
- locked 0x00079fa39a48 (a org.apache.hadoop.hdfs.DFSInputStream)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
   ... more accumulo code omitted...
{noformat}

What I'm seeing here is that reading a single byte (in WritableUtils.readVLong) 
causes a new Span creation and close (which includes a flush to the 
SpanReceiver). This results in an extreme number of spans for 
{{DFSInputStream.byteArrayRead}} just for reading a file from HDFS -- over 700k 
spans for reading a file of just a few hundred MB.
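
One possible client-side mitigation, offered as an assumption rather than a confirmed fix ({{fs}} and {{walPath}} are assumed to be the FileSystem and the log's Path): buffering the stream makes single-byte reads hit the buffer instead of {{DFSInputStream.read()}}, amortizing span creation over each buffer fill.

{code}
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import org.apache.hadoop.io.WritableUtils;

DataInputStream in = new DataInputStream(
    new BufferedInputStream(fs.open(walPath), 64 * 1024));
long v = WritableUtils.readVLong(in); // one span per 64 KB fill, not per byte
{code}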

Perhaps there's something different we need to do for the SpanReceiver in 
Accumulo? I'm not entirely sure, but this was rather unexpected.

cc/ [~cmccabe]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-04-06 Thread Patrick White (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481863#comment-14481863
 ] 

Patrick White commented on HDFS-6440:
-

[~jesse_yates] I'm not sure I know any HDFS committers here, lemme go bug 
[~eclark] and see what I can shake out of him

 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-6440-trunk-v1.patch, hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2015-04-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-8068:


Assignee: Kihwal Lee

 Do not retry rpc calls If the proxy contains unresolved address
 ---

 Key: HDFS-8068
 URL: https://issues.apache.org/jira/browse/HDFS-8068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-8068.v1.patch


 When the InetSocketAddress object happens to be unresolvable (e.g. due to a 
 transient DNS issue), the rpc proxy object will not be usable since the 
 client will throw UnknownHostException when a Connection object is created. 
 If FailoverOnNetworkExceptionRetry is used as in the standard HA failover 
 proxy, the call will be retried, but this will never recover.  Instead, the 
 validity of the address must be checked on proxy creation, throwing if it is 
 invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2015-04-06 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-8068:
-
Attachment: HDFS-8068.v1.patch

 Do not retry rpc calls If the proxy contains unresolved address
 ---

 Key: HDFS-8068
 URL: https://issues.apache.org/jira/browse/HDFS-8068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
 Attachments: HDFS-8068.v1.patch


 When the InetSocketAddress object happens to be unresolvable (e.g. due to a 
 transient DNS issue), the rpc proxy object will not be usable since the 
 client will throw UnknownHostException when a Connection object is created. 
 If FailoverOnNetworkExceptionRetry is used as in the standard HA failover 
 proxy, the call will be retried, but this will never recover.  Instead, the 
 validity of the address must be checked on proxy creation, throwing if it is 
 invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6440) Support more than 2 NameNodes

2015-04-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481957#comment-14481957
 ] 

Lars Hofhansl commented on HDFS-6440:
-

Let me also restate that we are running this in production on hundreds of 
clusters at Salesforce; we haven't seen any issues. It _is_ a pretty intricate 
patch, so I understand the hesitation.


 Support more than 2 NameNodes
 -

 Key: HDFS-6440
 URL: https://issues.apache.org/jira/browse/HDFS-6440
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover, ha, namenode
Affects Versions: 2.4.0
Reporter: Jesse Yates
Assignee: Jesse Yates
 Attachments: Multiple-Standby-NameNodes_V1.pdf, 
 hdfs-6440-cdh-4.5-full.patch, hdfs-6440-trunk-v1.patch, 
 hdfs-6440-trunk-v1.patch, hdfs-multiple-snn-trunk-v0.patch


 Most of the work is already done to support more than 2 NameNodes (one 
 active, one standby). This would be the last bit to support running multiple 
 _standby_ NameNodes; one of the standbys should be available for fail-over.
 Mostly, this is a matter of updating how we parse configurations, some 
 complexity around managing the checkpointing, and updating a whole lot of 
 tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5215) dfs.datanode.du.reserved is not taking effect as it's not considered while getting the available space

2015-04-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481985#comment-14481985
 ] 

Yongjun Zhang commented on HDFS-5215:
-

Hi [~brahmareddy],

Sorry for the delay, I was buried in other stuff.

I agree with [~sinago] that including rbwReserved is reasonable. I suggest 
changing the comments as follows:

1.
Remove {{* the freeSpace is now excluding reserved + rbw after HDFS-5215 }} and 
add comments like below:
{code}
   final long usedSpace; // size of space used by HDFS
   final long freeSpace; // size of free space excluding reserved space
   final long reservedSpace; // size of space reserved for non-HDFS and RBW
{code}

2.
{code}
 /**
   * Return either the configured capacity of the file system if configured;
   * or the capacity of the file system excluding space reserved for non-HDFS.
   * @return the unreserved number of bytes left in this filesystem. May be 
zero.
   */
  @VisibleForTesting
  public long getCapacity() {
{code}

3. 
{code}
 /*
   * Calculate the available space of the filesystem, excluding space reserved
   * for non-HDFS and space reserved for RBW
   * 
   * @return the available number of bytes left in this filesystem. May be zero.
   */
  @Override
  public long getAvailable() throws IOException {
{code}

Thanks.



 dfs.datanode.du.reserved is not taking effect as it's not considered while 
 getting the available space
 --

 Key: HDFS-5215
 URL: https://issues.apache.org/jira/browse/HDFS-5215
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-5215-002.patch, HDFS-5215-003.patch, 
 HDFS-5215-004.patch, HDFS-5215.patch


 {code}public long getAvailable() throws IOException {
 long remaining = getCapacity()-getDfsUsed();
 long available = usage.getAvailable();
 if (remaining > available) {
   remaining = available;
 }
 return (remaining > 0) ? remaining : 0;
   } 
 {code}
 Here we are not considering the reserved space while getting the Available 
 Space.
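
A sketch of the fix direction discussed in the comments (field names such as {{reserved}} and {{rbwReserved}} are assumptions, not the attached patch): subtract the configured reservations from what the OS reports as free.

{code}
public long getAvailable() throws IOException {
  long remaining = getCapacity() - getDfsUsed();
  // was: long available = usage.getAvailable();
  long available = usage.getAvailable() - reserved - rbwReserved;
  if (remaining > available) {
    remaining = available;
  }
  return (remaining > 0) ? remaining : 0;
}
{code}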



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8059) Erasure coding: move dataBlockNum and parityBlockNum from BlockInfoStriped to INodeFile

2015-04-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482080#comment-14482080
 ] 

Jing Zhao commented on HDFS-8059:
-

Thanks for working on this, Yi! The patch looks good to me.

I thought about putting these two fields in INodeFile before. But my only 
concern is that this way may mix the FileSystem layer (i.e., namespace) and 
storage layer (i.e., block management) in a not very clean way. More 
specifically, given a {{BlockInfoStriped}}, which is a basic unit in storage 
layer, we now have to go to the corresponding INodeFile to learn its 
composition. Imagine someday if we move the BlockManager out of the NameNode 
into a separate service (which has been proposed by some jira), this can cause 
unnecessary communication between the NN and the new BM service.

In terms of memory, moving these two numbers to INodeFile can save memory only 
when the file size is greater than BLOCKSIZE * NUM_DATA_BLOCKS. Currently, in a 
lot of clusters each file only contains ~1 block on average. Maybe we should 
wait and see more EC use cases in practice to decide if we want to do this 
optimization?

 Erasure coding: move dataBlockNum and parityBlockNum from BlockInfoStriped to 
 INodeFile
 ---

 Key: HDFS-8059
 URL: https://issues.apache.org/jira/browse/HDFS-8059
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8059.001.patch


 Move {{dataBlockNum}} and {{parityBlockNum}} from BlockInfoStriped to 
 INodeFile, and store them in {{FileWithStripedBlocksFeature}}.
 Ideally, these two numbers are the same for all striped blocks in a file, so 
 storing them in BlockInfoStriped wastes NN memory.
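
A minimal sketch of the proposed layout (field types are assumptions): the two numbers live once per file in the feature object instead of once per striped block.

{code}
class FileWithStripedBlocksFeature {
  private final short dataBlockNum;   // shared by every striped block in the file
  private final short parityBlockNum;

  FileWithStripedBlocksFeature(short dataBlockNum, short parityBlockNum) {
    this.dataBlockNum = dataBlockNum;
    this.parityBlockNum = parityBlockNum;
  }

  short getDataBlockNum() { return dataBlockNum; }
  short getParityBlockNum() { return parityBlockNum; }
}
{code}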



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8055) NullPointerException when topology script is missing.

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482128#comment-14482128
 ] 

Hadoop QA commented on HDFS-8055:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723415/hdfs-8055.001.patch
  against trunk revision 28bebc8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10181//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10181//console

This message is automatically generated.

 NullPointerException when topology script is missing.
 -

 Key: HDFS-8055
 URL: https://issues.apache.org/jira/browse/HDFS-8055
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: 2.7.0

 Attachments: hdfs-8055.001.patch


 We've received reports that the NameNode can get a NullPointerException when 
 the topology script is missing. This issue tracks investigating whether or 
 not we can improve the validation logic and give a more informative error 
 message.
 Here is a sample stack trace:
 Getting NPE from HDFS:
  
  2015-02-06 23:02:12,250 ERROR [pool-4-thread-1] util.HFileV1Detector: Got exception while reading trailer for file:hdfs://hqhd02nm01.pclc0.merkle.local:8020/hbase/.META./1028785192/info/1490a396aea448b693da563f76a28486
  org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
  at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.sortLocatedBlocks(DatanodeManager.java:359)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1789)
  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
  
  at org.apache.hadoop.ipc.Client.call(Client.java:1468)
  at org.apache.hadoop.ipc.Client.call(Client.java:1399)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
  at com.sun.proxy.$Proxy14.getBlockLocations(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:254)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at 

[jira] [Commented] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482238#comment-14482238
 ] 

Hadoop QA commented on HDFS-8068:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723430/HDFS-8068.v1.patch
  against trunk revision 28bebc8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10183//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10183//console

This message is automatically generated.

 Do not retry rpc calls If the proxy contains unresolved address
 ---

 Key: HDFS-8068
 URL: https://issues.apache.org/jira/browse/HDFS-8068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-8068.v1.patch


 When the InetSocketAddress object happens to be unresolvable (e.g. due to a 
 transient DNS issue), the rpc proxy object will not be usable, since the 
 client will throw UnknownHostException when a Connection object is created. 
 If FailoverOnNetworkExceptionRetry is used, as in the standard HA failover 
 proxy, the call will be retried, but it will never recover.  Instead, the 
 validity of the address must be checked on proxy creation, throwing if it is 
 invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7792) Add links to FaultInjectFramework and SLGUserGuide to site index

2015-04-06 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-7792:
---
Attachment: HDFS-7792.001.patch

LoadGenerator works on current trunk. The attached patch adds a link to 
SLGUserGuide to the site index and makes the command line usage more specific.

 Add links to FaultInjectFramework and SLGUserGuide to site index
 

 Key: HDFS-7792
 URL: https://issues.apache.org/jira/browse/HDFS-7792
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-7792.001.patch


 FaultInjectFramework.html and SLGUserGuide.html are not linked from anywhere. 
 Add links to them in site.xml if the contents are not outdated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-8071:
--
Attachment: HDFS-8071-01.patch

This is a minor optimization. I removed {{checkFileProgress()}} from 
{{analyzeFileState()}} and now call it once in Part I.
I also added {{checkFileProgress()}} to {{TestAddBlockRetry()}} to make sure 
the condition holds after Part I and before Part II.

 Redundant checkFileProgress() in PART II of getAdditionalBlock()
 

 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko
 Attachments: HDFS-8071-01.patch


 {{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
 {{analyzeFileState()}}, which among other things checks replication of the 
 penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
 Checking file progress in Part II is not necessary, because Part I already 
 assured the penultimate block is complete. It cannot change to incomplete, 
 unless the file is truncated, which is not allowed for files under 
 construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7937) Erasure Coding: INodeFile quota computation unit tests

2015-04-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482181#comment-14482181
 ] 

Zhe Zhang commented on HDFS-7937:
-

To create a file in striped / EC format, we should set the EC policy on its 
parent dir.

You can refer to the newly created {{TestErasureCodingZones}} test suite for 
more concrete details.
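
(A minimal sketch of that pattern, assuming the branch's 
{{DFSClient#createErasureCodingZone}} API of the time; the exact method name, 
signature, and the null-schema default are assumptions to verify against 
{{TestErasureCodingZones}}:)

{code}
// Sketch: set the EC zone on a directory, then create the file under it.
Path ecDir = new Path("/ec");
dfs.mkdirs(ecDir);
dfs.getClient().createErasureCodingZone(ecDir.toString(), null); // null => default schema (assumed)
Path ecFile = new Path(ecDir, "striped-file");
DFSTestUtil.createFile(dfs, ecFile, fileLen, (short) 1, 0L);
{code}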

 Erasure Coding: INodeFile quota computation unit tests
 --

 Key: HDFS-7937
 URL: https://issues.apache.org/jira/browse/HDFS-7937
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Minor
 Attachments: HDFS-7937.1.patch, HDFS-7937.2.patch, HDFS-7937.3.patch, 
 HDFS-7937.4.patch


 Unit test for [HDFS-7826|https://issues.apache.org/jira/browse/HDFS-7826]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482224#comment-14482224
 ] 

Billie Rinaldi commented on HDFS-8069:
--

We aren't tracing in the span collector.  We are only tracing one Accumulo 
operation, but it is a fairly complex operation.  So even if we traced this 
operation less often, we would still run into this issue.  I'm not sure I 
understand how the DFSInputStream tracing is supposed to be used.  Is it 
possible to introduce sampling of DFSInputStream read operations within a 
current trace that has been enabled?  I'm also confused about why it would 
create a span for a single read operation, which is usually just pulling some 
bytes from an in-memory buffer (?), rather than only creating spans in the 
BlockReaders.

 Tracing implementation on DFSInputStream seriously degrades performance
 ---

 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical

 I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
 serious performance impact when Accumulo registers itself as a SpanReceiver.
 The context of the test which I noticed the impact is that an Accumulo 
 process reads a series of updates from a write-ahead log. This is just 
 reading a series of Writable objects from a file in HDFS. With tracing 
 enabled, I waited for at least 10 minutes and the server still hadn't read a 
 ~300MB file.
 Doing a poor-man's inspection via repeated thread dumps, I always see 
 something like the following:
 {noformat}
 replication task 2 daemon prio=10 tid=0x02842800 nid=0x794d 
 runnable [0x7f6c7b1ec000]
java.lang.Thread.State: RUNNABLE
 at 
 java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
 at org.apache.htrace.Tracer.deliver(Tracer.java:80)
 at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
 - locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
 at org.apache.htrace.TraceScope.close(TraceScope.java:78)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at java.io.DataInputStream.readByte(DataInputStream.java:265)
 at 
 org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
 at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
 at 
 org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
... more accumulo code omitted...
 {noformat}
 What I'm seeing here is that reading a single byte (in 
 WritableUtils.readVLong) is causing a new Span creation and close (which 
 includes a flush to the SpanReceiver). This results in an extreme amount of 
 spans for {{DFSInputStream.byteArrayRead}} just for reading a file from HDFS 
 -- over 700k spans for just reading a few hundred MB file.
 Perhaps there's something different we need to do for the SpanReceiver in 
 Accumulo? I'm not entirely sure, but this was rather unexpected.
 cc/ [~cmccabe]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8068) Do not retry rpc calls If the proxy contains unresolved address

2015-04-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481989#comment-14481989
 ] 

Kihwal Lee commented on HDFS-8068:
--

{{FailoverOnNetworkExceptionRetry#shouldRetry()}} thinks 
{{UnknownHostException}} is retriable, but in the current form it is not. If we 
are to support transparent retry and recovery, there has to be a way to tell 
the failover proxy provider to abandon the underlying broken proxy and recreate 
it on reception of {{UnknownHostException}}. This can be a bit ugly.  An 
alternative way is to have the failover proxy provider check the address of the 
existing proxy object in {{getProxy()}} and recreate it if it is bad. The newly 
created one may still be broken, causing calls to throw 
{{UnknownHostException}}, but if DNS recovers, it will eventually succeed after 
some number of retries (recreations). But this has the drawback of having to be 
fixed at the pluggable failover proxy provider level.  Is namenode HA also 
supposed to cover infrastructure outages? If not, the v1 patch should be 
sufficient.
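
(A minimal sketch of the fail-fast check being discussed, using only JDK 
calls; the helper name and placement are illustrative, not from the v1 patch:)

{code}
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

final class ProxyAddressCheck {
  // Throw at proxy creation instead of retrying a proxy that can never work.
  static void checkResolved(InetSocketAddress addr) throws UnknownHostException {
    if (addr == null || addr.isUnresolved()) {
      throw new UnknownHostException("Cannot create proxy for unresolved address: " + addr);
    }
  }
}
{code}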

 Do not retry rpc calls If the proxy contains unresolved address
 ---

 Key: HDFS-8068
 URL: https://issues.apache.org/jira/browse/HDFS-8068
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-8068.v1.patch


 When the InetSocketAddress object happens to be unresolvable (e.g. due to a 
 transient DNS issue), the rpc proxy object will not be usable, since the 
 client will throw UnknownHostException when a Connection object is created. 
 If FailoverOnNetworkExceptionRetry is used, as in the standard HA failover 
 proxy, the call will be retried, but it will never recover.  Instead, the 
 validity of the address must be checked on proxy creation, throwing if it is 
 invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8017) Erasure Coding: perform non-stripping erasure decoding/recovery work given block reader and writer

2015-04-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8017:

Description: This assumes the facilities like block reader and writer are 
ready, implements and performs erasure decoding/recovery work in 
*non-stripping* case utilizing erasure codec and coder provided by the codec 
framework.  (was: This assumes the facilities like block reader and writer are 
ready, implements and performs erasure decoding work in *non-stripping* case 
utilizing erasure codec and coder provided by the codec framework.)

 Erasure Coding: perform non-stripping erasure decoding/recovery work given 
 block reader and writer
 --

 Key: HDFS-8017
 URL: https://issues.apache.org/jira/browse/HDFS-8017
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Zhe Zhang

 This assumes the facilities like block reader and writer are ready, 
 implements and performs erasure decoding/recovery work in *non-stripping* 
 case utilizing erasure codec and coder provided by the codec framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8017) Erasure Coding: perform non-stripping erasure decoding/recovery work given block reader and writer

2015-04-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-8017:

Summary: Erasure Coding: perform non-stripping erasure decoding/recovery 
work given block reader and writer  (was: Erasure Coding: perform non-stripping 
erasure decoding work given block reader and writer)

 Erasure Coding: perform non-stripping erasure decoding/recovery work given 
 block reader and writer
 --

 Key: HDFS-8017
 URL: https://issues.apache.org/jira/browse/HDFS-8017
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Zhe Zhang

 This assumes the facilities like block reader and writer are ready, 
 implements and performs erasure decoding work in *non-stripping* case 
 utilizing erasure codec and coder provided by the codec framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482219#comment-14482219
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8071:
---

Patch looks good.  The readLock() call in checkFileProgress(..) is unnecessary 
since all the callers already hold the lock.  How about changing it to assert 
hasReadLock()?
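
(A sketch of that suggestion; the signature is simplified and the body elided, 
so treat it as illustrative rather than the actual change:)

{code}
// Instead of re-acquiring the read lock inside checkFileProgress(..),
// assert that every caller already holds it:
boolean checkFileProgress(INodeFile v, boolean checkall) {
  assert hasReadLock() : "checkFileProgress() requires the FSNamesystem read lock";
  // ... existing penultimate-block replication checks, unchanged ...
  return true;
}
{code}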


 Redundant checkFileProgress() in PART II of getAdditionalBlock()
 

 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko
 Attachments: HDFS-8071-01.patch


 {{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
 {{analyzeFileState()}}, which among other things checks replication of the 
 penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
 Checking file progress in Part II is not necessary, because Part I already 
 assured the penultimate block is complete. It cannot change to incomplete, 
 unless the file is truncated, which is not allowed for files under 
 construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482261#comment-14482261
 ] 

Josh Elser commented on HDFS-8069:
--

Thanks for chiming in, [~billie.rinaldi]. I had been chatting with her about 
what I was seeing.

I tried to break down what I see as the problem to the most trivial use case, 
but perhaps I didn't do it well enough the first time. Take a class

{code}
public class Foo implements Writable
{code}

I write some instances of this class to a file in HDFS, and then later read 
them back out again:

{code}
FSDataInputStream inputstream = filesystem.open(new Path("/my/file"));
for (int i = 0; i < 100; i++) {
  Foo myFoo = new Foo();
  myFoo.readFields(inputstream);
}
{code}

As Billie said, the above is one step in a larger traced operation in Accumulo, 
but we *do* want to have this information from HDFS (e.g. is the time due to 
something wrong in Accumulo or in HDFS, etc.). It just struck me as extremely 
odd that something as (seemingly) simple as this would cause me such 
performance issues. Maybe the answer is "don't do that"? I just wanted to bring 
it up because it came across as very unexpected to me.

 Tracing implementation on DFSInputStream seriously degrades performance
 ---

 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical

 I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
 serious performance impact when Accumulo registers itself as a SpanReceiver.
 The context of the test which I noticed the impact is that an Accumulo 
 process reads a series of updates from a write-ahead log. This is just 
 reading a series of Writable objects from a file in HDFS. With tracing 
 enabled, I waited for at least 10 minutes and the server still hadn't read a 
 ~300MB file.
 Doing a poor-man's inspection via repeated thread dumps, I always see 
 something like the following:
 {noformat}
 replication task 2 daemon prio=10 tid=0x02842800 nid=0x794d 
 runnable [0x7f6c7b1ec000]
java.lang.Thread.State: RUNNABLE
 at 
 java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
 at org.apache.htrace.Tracer.deliver(Tracer.java:80)
 at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
 - locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
 at org.apache.htrace.TraceScope.close(TraceScope.java:78)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at java.io.DataInputStream.readByte(DataInputStream.java:265)
 at 
 org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
 at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
 at 
 org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
... more accumulo code omitted...
 {noformat}
 What I'm seeing here is that reading a single byte (in 
 WritableUtils.readVLong) is causing a new Span creation and close (which 
 includes a flush to the SpanReceiver). This results in an extreme amount of 
 spans for {{DFSInputStream.byteArrayRead}} just for reading a file from HDFS 
 -- over 700k spans for just reading a few hundred MB file.
 Perhaps there's something different we need to do for the SpanReceiver in 
 Accumulo? I'm not entirely sure, but this was rather unexpected.
 cc/ [~cmccabe]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-8071:
-

 Summary: Redundant checkFileProgress() in PART II of 
getAdditionalBlock()
 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko


{{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
{{analyzeFileState()}}, which among other things checks replication of the 
penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
Checking file progress in Part II is not necessary, because Part I already 
assured the penultimate block is complete. It cannot change to incomplete, 
unless the file is truncated, which is not allowed for files under construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7782) Erasure coding: pread from files in striped layout

2015-04-06 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-7782:

Attachment: HDFS-7782-010.patch

Thanks a lot Jing! The test suite {{testPlanReadPortions}} looks great! I just 
added one more test where the read size is smaller than a cell.

I took another look at how {{getBlockAt}} is used and made a small refactor of 
the signatures of {{fetchBlockByteRange}} and {{actualGetFromOneDataNode}}. 
Instead of taking a {{LocatedBlock}}, I think they should just take the 
starting offset of that block, since they'll later call {{getBlockAt}} to 
refresh the location anyway. I think we should make that explicit so the 
callers are not surprised if the {{LocatedBlock}} that is finally used is not 
the one they passed in. Let me know if you agree. We can also take it out and 
commit the rest.
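
(A before/after sketch of the signature change described above; the parameter 
lists are abbreviated and illustrative:)

{code}
// Before (sketch): the caller picks the LocatedBlock, which may be silently
// replaced later when the method refreshes locations.
void fetchBlockByteRange(LocatedBlock block, long start, long end,
    byte[] buf, int offset) { /* ... */ }

// After (sketch): the caller passes the block's starting offset; the method
// itself calls getBlockAt(..), making the location refresh explicit.
void fetchBlockByteRange(long blockStartOffset, long start, long end,
    byte[] buf, int offset) {
  LocatedBlock block = getBlockAt(blockStartOffset);
  // ... read the range from the refreshed block location ...
}
{code}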

 Erasure coding: pread from files in striped layout
 --

 Key: HDFS-7782
 URL: https://issues.apache.org/jira/browse/HDFS-7782
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: HDFS-7782-000.patch, HDFS-7782-001.patch, 
 HDFS-7782-002.patch, HDFS-7782-003.patch, HDFS-7782-004.patch, 
 HDFS-7782-005.patch, HDFS-7782-006.patch, HDFS-7782-008.patch, 
 HDFS-7782-010.patch, HDFS-7782.007.patch, HDFS-7782.009.patch


 If a client wants to read a file, it should not need to know or handle the 
 file's layout. This sub-task adds logic to DFSInputStream to support reading 
 files in striped layout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7715) Implement the Hitchhiker erasure coding algorithm

2015-04-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482258#comment-14482258
 ] 

Kai Zheng commented on HDFS-7715:
-

A quick review:

High level:
1. Please write a comprehensive class header comment about the new code and 
coder, also acknowledging the original author's effort.
2. For now, we need to figure out how to map these raw HH coders to 
corresponding high level {{ErasureCoder}}s, if we decide to implement them as 
raw coders directly;
3. Do we have tests for the new coders?

Minors:
1. Any better name for the variable *pb_vec*?
2. Move the code about computing the generating polynomial to {{HHUtil}}?
3. The following variable names are not good. Please use numDataUnits and 
numParityUnits instead, for consistency in all places (see the sketch after 
this list).
{code}
+final int stripeSize = getNumDataUnits();
+final int paritySize = getNumParityUnits();
{code}
4. In HHUtil.getPiggyBacksFromInput, the parameter {{encoder}} isn't used.
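
For item 3 under Minors, a sketch of the intended rename, consistent with the 
accessors the coders already expose:

{code}
final int numDataUnits = getNumDataUnits();     // was: stripeSize
final int numParityUnits = getNumParityUnits(); // was: paritySize
{code}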

 Implement the Hitchhiker erasure coding algorithm
 -

 Key: HDFS-7715
 URL: https://issues.apache.org/jira/browse/HDFS-7715
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: jack liuquan
 Attachments: 7715-hitchhikerXOR-v2.patch, 
 HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch


 [Hitchhiker | 
 http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
 a new erasure coding algorithm developed as a research project at UC 
 Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
 during data reconstruction. This JIRA aims to introduce Hitchhiker to the 
 HDFS-EC framework, as one of the pluggable codec algorithms.
 The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7782) Erasure coding: pread from files in striped layout

2015-04-06 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7782:

Attachment: HDFS-7782.009.patch

Thanks Zhe! The 009 patch adds more unit tests, and also fixes several small 
bugs in {{planReadPortions}} and {{parseStripedBlockGroup}}.

 Erasure coding: pread from files in striped layout
 --

 Key: HDFS-7782
 URL: https://issues.apache.org/jira/browse/HDFS-7782
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Zhe Zhang
 Attachments: HDFS-7782-000.patch, HDFS-7782-001.patch, 
 HDFS-7782-002.patch, HDFS-7782-003.patch, HDFS-7782-004.patch, 
 HDFS-7782-005.patch, HDFS-7782-006.patch, HDFS-7782-008.patch, 
 HDFS-7782.007.patch, HDFS-7782.009.patch


 If a client wants to read a file, it should not need to know or handle the 
 file's layout. This sub-task adds logic to DFSInputStream to support reading 
 files in striped layout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482094#comment-14482094
 ] 

Colin Patrick McCabe edited comment on HDFS-8069 at 4/6/15 10:32 PM:
-

Hi [~elserj],

I'm not familiar with the Accumulo span receiver.  Just off of the top of my 
head:

1. Accumulo uses HDFS underneath.  When processing trace spans using Accumulo, 
are you creating more trace spans inside HDFS?  This will lead to a kind of 
infinite recursion.

2. You should be tracing less than 1% of all requests.  We don't really support 
tracing 100% of all requests on a 300 MB file, or at least not without serious 
performance degradation.

There are ways to avoid issue #1.  The easiest way is to use htraced, a trace 
sink built specifically for the purpose of storing spans.  htraced is also 
developed inside the HTrace project rather than externally.  Issue #2 can be 
fixed by setting an appropriate sampler such as ProbabilitySampler.

I do think we could potentially make the read pathway less chatty but that's 
somewhat of a separate issue.  No matter how few spans we create on the read 
pathway, you still will have problems with issue #1 and #2 if you have not 
configured correctly.


was (Author: cmccabe):
Hi [~elserj],

I'm not familiar with the Accumulo span receiver.  Just off of the top of my 
head:

1. Accumulo uses HDFS underneath.  When processing trace spans using Accumulo, 
are you creating more trace spans inside HDFS?  This will lead to a kind of 
infinite recursion.

2. You should be tracing less than 1% of all requests.  We don't really support 
tracing 100% of all requests on a 300 MB file, or at least not without serious 
performance degradation.

There are ways to avoid issue #1.  The easiest way is to use htraced, a trace 
sink build specifically for the purpose of storing spans.  htraced is also 
developed inside the HTrace project rather than externally.  Issue #2 can be 
fixed by setting an appropriate sampler such as ProbabilitySampler.

I do think we could potentially make the read pathway less chatty but that's 
somewhat of a separate issue.  No matter how few spans we create on the read 
pathway, you still will have problems with issue #1 and #2 if you have not 
configured correctly.

 Tracing implementation on DFSInputStream seriously degrades performance
 ---

 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical

 I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
 serious performance impact when Accumulo registers itself as a SpanReceiver.
 The context of the test which I noticed the impact is that an Accumulo 
 process reads a series of updates from a write-ahead log. This is just 
 reading a series of Writable objects from a file in HDFS. With tracing 
 enabled, I waited for at least 10 minutes and the server still hadn't read a 
 ~300MB file.
 Doing a poor-man's inspection via repeated thread dumps, I always see 
 something like the following:
 {noformat}
 replication task 2 daemon prio=10 tid=0x02842800 nid=0x794d 
 runnable [0x7f6c7b1ec000]
java.lang.Thread.State: RUNNABLE
 at 
 java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
 at org.apache.htrace.Tracer.deliver(Tracer.java:80)
 at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
 - locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
 at org.apache.htrace.TraceScope.close(TraceScope.java:78)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at java.io.DataInputStream.readByte(DataInputStream.java:265)
 at 
 org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
 at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
 at 
 org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
... more accumulo code omitted...
 {noformat}
 What I'm seeing here is that reading a single byte (in 
 WritableUtils.readVLong) is causing a new Span creation and close (which 
 includes a flush to the SpanReceiver). This results in an extreme amount of 
 spans for {{DFSInputStream.byteArrayRead}} just for reading a file from HDFS 
 -- over 700k spans for just reading a few hundred MB file.
 Perhaps there's something different we need to do for the SpanReceiver in 
 Accumulo? I'm not entirely sure, but this was rather unexpected.
 cc/ [~cmccabe]

[jira] [Commented] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482094#comment-14482094
 ] 

Colin Patrick McCabe commented on HDFS-8069:


Hi [~elserj],

I'm not familiar with the Accumulo span receiver.  Just off of the top of my 
head:

1. Accumulo uses HDFS underneath.  When processing trace spans using Accumulo, 
are you creating more trace spans inside HDFS?  This will lead to a kind of 
infinite recursion.

2. You should be tracing less than 1% of all requests.  We don't really support 
tracing 100% of all requests on a 300 MB file, or at least not without serious 
performance degradation.

There are ways to avoid issue #1.  The easiest way is to use htraced, a trace 
sink built specifically for the purpose of storing spans.  htraced is also 
developed inside the HTrace project rather than externally.  Issue #2 can be 
fixed by setting an appropriate sampler such as ProbabilitySampler.

I do think we could potentially make the read pathway less chatty but that's 
somewhat of a separate issue.  No matter how few spans we create on the read 
pathway, you still will have problems with issue #1 and #2 if you have not 
configured correctly.
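
(For anyone hitting the same amplification, a minimal sketch of the sampler 
advice above; {{hadoop.htrace.sampler}} comes from this thread, while the 
fraction key name is an assumption to verify against the Hadoop tracing docs:)

{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Trace roughly 1% of requests instead of every read.
conf.set("hadoop.htrace.sampler", "ProbabilitySampler");
conf.set("hadoop.htrace.sampler.fraction", "0.01"); // key name assumed, not confirmed here
{code}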

 Tracing implementation on DFSInputStream seriously degrades performance
 ---

 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical

 I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
 serious performance impact when Accumulo registers itself as a SpanReceiver.
 The context of the test which I noticed the impact is that an Accumulo 
 process reads a series of updates from a write-ahead log. This is just 
 reading a series of Writable objects from a file in HDFS. With tracing 
 enabled, I waited for at least 10 minutes and the server still hadn't read a 
 ~300MB file.
 Doing a poor-man's inspection via repeated thread dumps, I always see 
 something like the following:
 {noformat}
 replication task 2 daemon prio=10 tid=0x02842800 nid=0x794d 
 runnable [0x7f6c7b1ec000]
java.lang.Thread.State: RUNNABLE
 at 
 java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
 at org.apache.htrace.Tracer.deliver(Tracer.java:80)
 at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
 - locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
 at org.apache.htrace.TraceScope.close(TraceScope.java:78)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at java.io.DataInputStream.readByte(DataInputStream.java:265)
 at 
 org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
 at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
 at 
 org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
... more accumulo code omitted...
 {noformat}
 What I'm seeing here is that reading a single byte (in 
 WritableUtils.readVLong) is causing a new Span creation and close (which 
 includes a flush to the SpanReceiver). This results in an extreme amount of 
 spans for {{DFSInputStream.byteArrayRead}} just for reading a file from HDFS 
 -- over 700k spans for just reading a few hundred MB file.
 Perhaps there's something different we need to do for the SpanReceiver in 
 Accumulo? I'm not entirely sure, but this was rather unexpected.
 cc/ [~cmccabe]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482120#comment-14482120
 ] 

Colin Patrick McCabe edited comment on HDFS-8069 at 4/6/15 10:48 PM:
-

I can think of a few ways to solve issue #1:

1. Disable tracing in Hadoop, by setting {{hadoop.htrace.sampler}} to 
{{NeverSampler}}.  Needless to say, this will allow you to get tracing from 
Accumulo, which you have currently, but not Hadoop.  So it's not a regression 
but it won't give you additional functionality.

2. Send the trace spans to a different Accumulo instance than the one you are 
tracing.  The different Accumulo instance can have tracing turned off (both 
Accumulo tracing and Hadoop tracing) and so avoid the amplification effect.

3. Just use htraced.  We could add security to htraced if that is a concern.

I wonder if we could simply have Accumulo use a shim API that we could later 
change over to call HTrace under the covers, once these issues have been worked 
out.  I'm a little concerned that we may want to change the HTrace API in the 
future and we might find that Accumulo has done some stuff we weren't expecting 
with it.  What do you think?


was (Author: cmccabe):
I can think of a few ways to solve issue #1:

1. Disable tracing in Hadoop, by setting {{hadoop.htrace.sampler}} to 
{[NeverSampler}}.  Needless to say, this will allow you to get tracing from 
Accumulo, which you have currently, but not Hadoop.  So it's not a regression 
but it won't give you additional functionality.

2. Send the trace spans to a different Accumulo instance than the one you are 
tracing.  The different Accumulo instance can have tracing turned off (both 
Accumulo tracing and Hadoop tracing) and so avoid the amplification effect.

3. Just use htraced.  We could add security to htraced if that is a concern.

I wonder if we could simply have Accumulo use a shim API that we could later 
change over to call HTrace under the covers, once these issues have been worked 
out.  I'm a little concerned that we may want to change the HTrace API in the 
future and we might find that Accumulo has done some stuff we weren't expecting 
with it.  What do you think?

 Tracing implementation on DFSInputStream seriously degrades performance
 ---

 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical

 I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
 serious performance impact when Accumulo registers itself as a SpanReceiver.
 The context of the test which I noticed the impact is that an Accumulo 
 process reads a series of updates from a write-ahead log. This is just 
 reading a series of Writable objects from a file in HDFS. With tracing 
 enabled, I waited for at least 10 minutes and the server still hadn't read a 
 ~300MB file.
 Doing a poor-man's inspection via repeated thread dumps, I always see 
 something like the following:
 {noformat}
 replication task 2 daemon prio=10 tid=0x02842800 nid=0x794d 
 runnable [0x7f6c7b1ec000]
java.lang.Thread.State: RUNNABLE
 at 
 java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
 at org.apache.htrace.Tracer.deliver(Tracer.java:80)
 at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
 - locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
 at org.apache.htrace.TraceScope.close(TraceScope.java:78)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at java.io.DataInputStream.readByte(DataInputStream.java:265)
 at 
 org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
 at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
 at 
 org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
... more accumulo code omitted...
 {noformat}
 What I'm seeing here is that reading a single byte (in 
 WritableUtils.readVLong) is causing a new Span creation and close (which 
 includes a flush to the SpanReceiver). This results in an extreme amount of 
 spans for {{DFSInputStream.byteArrayRead}} just for reading a file from HDFS 
 -- over 700k spans for just reading a few hundred MB file.
 Perhaps there's something different we need to do for the SpanReceiver in 
 Accumulo? I'm not entirely sure, but this was rather unexpected.
 cc/ [~cmccabe]

[jira] [Updated] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-04-06 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8062:
-
Attachment: HDFS-8062.1.patch

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062.1.patch


 Related issues about EC schema in NameNode side:
 HDFS-7859 is to change fsimage and editlog in NameNode to persist EC schemas;
 HDFS-7866 is to manage EC schemas in NameNode, loading, syncing between 
 persisted ones in image and predefined ones in XML.
 This is to revisit all the places in NameNode that uses hard-coded values in 
 favor of {{ECSchema}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8058) Erasure coding: use BlockInfo[] for both striped and contiguous blocks in INodeFile

2015-04-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482164#comment-14482164
 ] 

Zhe Zhang commented on HDFS-8058:
-

Thanks for the work Yi! The overall approach looks fine to me.

The only concern I can think of is type safety: on the surface, a generic 
{{BlockInfo[]}} could contain mixed types. We need some extra logic to prevent 
a striped block from being added to a non-striped file, and vice versa.

We also _might_ need to maintain both striped and contiguous blocks of a file 
during the window of conversion (that part of the design is not finalized yet).
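
(A rough sketch of the kind of guard meant here; the method placement and 
messages are illustrative, not from the patch:)

{code}
// Reject mixed block types when appending to the INodeFile's BlockInfo[].
void addBlock(BlockInfo newBlock) {
  boolean striped = isStriped();
  if (striped != (newBlock instanceof BlockInfoStriped)) {
    throw new IllegalArgumentException("Cannot add a "
        + (striped ? "contiguous" : "striped") + " block to a "
        + (striped ? "striped" : "contiguous") + " file");
  }
  // ... existing append logic ...
}
{code}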

 Erasure coding: use BlockInfo[] for both striped and contiguous blocks in 
 INodeFile
 ---

 Key: HDFS-8058
 URL: https://issues.apache.org/jira/browse/HDFS-8058
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8058.001.patch, HDFS-8058.002.patch


 This JIRA is to use {{BlockInfo[] blocks}} for both striped and contiguous 
 blocks in INodeFile.
 Currently {{FileWithStripedBlocksFeature}} keeps a separate list for striped 
 blocks, the methods there duplicate those in INodeFile, and the current code 
 needs to check {{isStriped}} and then do different things. Also, if a file is 
 striped, the {{blocks}} field in INodeFile still occupies a reference's worth 
 of memory. These are not necessary, and we can use the same {{blocks}} to 
 make the code clearer.
 I keep {{FileWithStripedBlocksFeature}} empty for future use: I will file a 
 new JIRA to move {{dataBlockNum}} and {{parityBlockNum}} from 
 *BlockInfoStriped* to INodeFile, since ideally they are the same for all 
 striped blocks in a file, and storing them in each block wastes NN memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8038) PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific format.

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482182#comment-14482182
 ] 

Hadoop QA commented on HDFS-8038:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723426/HDFS-8038.02.patch
  against trunk revision 28bebc8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestFileCreation

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10182//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10182//console

This message is automatically generated.

 PBImageDelimitedTextWriter#getEntry output HDFS path in platform-specific 
 format.
 -

 Key: HDFS-8038
 URL: https://issues.apache.org/jira/browse/HDFS-8038
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HDFS-8038.00.patch, HDFS-8038.01.patch, 
 HDFS-8038.02.patch


 PBImageDelimitedTextWriter#getEntry takes the HDFS path and passes it 
 through java.io.File, which causes platform-specific behavior, as shown by 
 the actual results of TestOfflineImageViewer#testPBDelimitedWriter() on 
 Windows.
 {code}
 expected:[/emptydir, /dir0, /dir1/file2, /dir1, /dir1/file3, /dir2/file3, 
 /dir1/file0, /dir1/file1, /dir2/file1, /dir2/file2, /dir2, /dir0/file0, 
 /dir2/file0, /dir0/file1, /dir0/file2, /dir0/file3, /xattr] 
 but was:[\dir0, \dir0\file3, \dir0\file2, \dir0\file1, \xattr, \emptydir, 
 \dir0\file0, \dir1\file1, \dir1\file0, \dir1\file3, \dir1\file2, \dir2\file3, 
 \, \dir1, \dir2\file0, \dirContainingInvalidXMLChar#0;here, \dir2, 
 \dir2\file2, \dir2\file1]
 {code}
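
 (A sketch of the underlying pitfall and the fix direction; the join logic 
 shown is illustrative, not the actual patch:)

 {code}
 import java.io.File;

 public class PathJoinDemo {
   public static void main(String[] args) {
     // java.io.File joins with the platform separator ('\' on Windows):
     System.out.println(new File("/dir1", "file2")); // prints \dir1\file2 on Windows

     // HDFS paths are always '/'-separated, so join them explicitly instead:
     String parent = "/dir1", name = "file2";
     System.out.println(parent.equals("/") ? "/" + name : parent + "/" + name); // /dir1/file2
   }
 }
 {code}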



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-8071:
--
Status: Patch Available  (was: Open)

 Redundant checkFileProgress() in PART II of getAdditionalBlock()
 

 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko
 Attachments: HDFS-8071-01.patch


 {{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
 {{analyzeFileState()}}, which among other things checks replication of the 
 penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
 Checking file progress in Part II is not necessary, because Part I already 
 assured the penultimate block is complete. It cannot change to incomplete, 
 unless the file is truncated, which is not allowed for files under 
 construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482120#comment-14482120
 ] 

Colin Patrick McCabe commented on HDFS-8069:


I can think of a few ways to solve issue #1:

1. Disable tracing in Hadoop, by setting {{hadoop.htrace.sampler}} to 
{{NeverSampler}}.  Needless to say, this will allow you to get tracing from 
Accumulo, which you have currently, but not Hadoop.  So it's not a regression 
but it won't give you additional functionality.

2. Send the trace spans to a different Accumulo instance than the one you are 
tracing.  The different Accumulo instance can have tracing turned off (both 
Accumulo tracing and Hadoop tracing) and so avoid the amplification effect.

3. Just use htraced.  We could add security to htraced if that is a concern.

I wonder if we could simply have Accumulo use a shim API that we could later 
change over to call HTrace under the covers, once these issues have been worked 
out.  I'm a little concerned that we may want to change the HTrace API in the 
future and we might find that Accumulo has done some stuff we weren't expecting 
with it.  What do you think?

 Tracing implementation on DFSInputStream seriously degrades performance
 ---

 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical

 I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
 serious performance impact when Accumulo registers itself as a SpanReceiver.
 The context of the test which I noticed the impact is that an Accumulo 
 process reads a series of updates from a write-ahead log. This is just 
 reading a series of Writable objects from a file in HDFS. With tracing 
 enabled, I waited for at least 10 minutes and the server still hadn't read a 
 ~300MB file.
 Doing a poor-man's inspection via repeated thread dumps, I always see 
 something like the following:
 {noformat}
 replication task 2 daemon prio=10 tid=0x02842800 nid=0x794d 
 runnable [0x7f6c7b1ec000]
java.lang.Thread.State: RUNNABLE
 at 
 java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
 at org.apache.htrace.Tracer.deliver(Tracer.java:80)
 at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
 - locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
 at org.apache.htrace.TraceScope.close(TraceScope.java:78)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at java.io.DataInputStream.readByte(DataInputStream.java:265)
 at 
 org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
 at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
 at 
 org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
... more accumulo code omitted...
 {noformat}
 What I'm seeing here is that reading a single byte (in 
 WritableUtils.readVLong) is causing a new Span creation and close (which 
 includes a flush to the SpanReceiver). This results in an extreme amount of 
 spans for {{DFSInputStream.byteArrayRead}} just for reading a file from HDFS 
 -- over 700k spans for just reading a few hundred MB file.
 Perhaps there's something different we need to do for the SpanReceiver in 
 Accumulo? I'm not entirely sure, but this was rather unexpected.
 cc/ [~cmccabe]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3233) Move IP to FQDN conversion from DatanodeJSPHelper to DatanodeID

2015-04-06 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482117#comment-14482117
 ] 

Eli Collins commented on HDFS-3233:
---

Looks like DatanodeJSPHelper has been removed, and I don't see the relevant 
code in JspHelper.java, so this may no longer apply.

 Move IP to FQDN conversion from DatanodeJSPHelper to DatanodeID
 ---

 Key: HDFS-3233
 URL: https://issues.apache.org/jira/browse/HDFS-3233
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Priority: Minor
  Labels: newbie

 In a handful of places DatanodeJSPHelper looks up the IP for a DN and then 
 determines an FQDN for the IP. We should move this code to a single place: a 
 new method on DatanodeID that returns the FQDN for a DatanodeID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7348) Erasure Coding: perform stripping erasure decoding work given block reader and writer

2015-04-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-7348:

Description: This assumes the facilities like block reader and writer are 
ready, implements and performs erasure decoding/recovery work in *stripping* 
case utilizing erasure codec and coder provided by the codec framework.  (was: 
This assumes the facilities like block reader and writer are ready, implements 
and performs erasure decoding work in *stripping* case utilizing erasure codec 
and coder provided by the codec framework.)

 Erasure Coding: perform stripping erasure decoding work given block reader 
 and writer
 -

 Key: HDFS-7348
 URL: https://issues.apache.org/jira/browse/HDFS-7348
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Li Bo

 This assumes the facilities like block reader and writer are ready, 
 implements and performs erasure decoding/recovery work in *stripping* case 
 utilizing erasure codec and coder provided by the codec framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7348) Erasure Coding: perform stripping erasure decoding/recovery work given block reader and writer

2015-04-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HDFS-7348:

Summary: Erasure Coding: perform stripping erasure decoding/recovery work 
given block reader and writer  (was: Erasure Coding: perform stripping erasure 
decoding work given block reader and writer)

 Erasure Coding: perform stripping erasure decoding/recovery work given block 
 reader and writer
 --

 Key: HDFS-7348
 URL: https://issues.apache.org/jira/browse/HDFS-7348
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Reporter: Kai Zheng
Assignee: Li Bo

 This assumes the facilities like block reader and writer are ready, 
 implements and performs erasure decoding/recovery work in *stripping* case 
 utilizing erasure codec and coder provided by the codec framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8064) Erasure coding: DataNode support for block recovery of striped block groups

2015-04-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HDFS-8064.
-
  Resolution: Duplicate
Release Note: This is a duplicate of HDFS-7348.

 Erasure coding: DataNode support for block recovery of striped block groups
 ---

 Key: HDFS-8064
 URL: https://issues.apache.org/jira/browse/HDFS-8064
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu

 This JIRA is for block recovery of striped block groups on DataNodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-8062) Remove hard-coded values in favor of EC schema

2015-04-06 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-8062 started by Kai Sasaki.

 Remove hard-coded values in favor of EC schema
 --

 Key: HDFS-8062
 URL: https://issues.apache.org/jira/browse/HDFS-8062
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Sasaki
 Attachments: HDFS-8062.1.patch


 Related issues about EC schema in NameNode side:
 HDFS-7859 is to change fsimage and editlog in NameNode to persist EC schemas;
 HDFS-7866 is to manage EC schemas in NameNode, loading, syncing between 
 persisted ones in image and predefined ones in XML.
 This is to revisit all the places in NameNode that uses hard-coded values in 
 favor of {{ECSchema}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8049) Annotation client implementation as private

2015-04-06 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482202#comment-14482202
 ] 

Tsz Wo Nicholas Sze commented on HDFS-8049:
---

The patch looks good.  Could you also add the private annotation to the 
following (see the sketch after this list):
- BlockReader and its implementation classes;
- BlockReaderUtil;
- DataStreamer, DFSPacket;
- LeaseRenewer;
- RemotePeerFactory.
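
For reference, a minimal sketch of the annotation in question, applied to one 
of the classes listed above (the exact form in the patch may differ):
{code}
import org.apache.hadoop.classification.InterfaceAudience;

/**
 * Marked private: an implementation detail of the HDFS client,
 * not a public API for downstream projects.
 */
@InterfaceAudience.Private
public interface RemotePeerFactory {
  // ... existing factory methods unchanged ...
}
{code}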


 Annotation client implementation as private
 ---

 Key: HDFS-8049
 URL: https://issues.apache.org/jira/browse/HDFS-8049
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Takuya Fukudome
 Attachments: HDFS-8049.1.patch


 The @InterfaceAudience Annotation is missing for quite a few client classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7955) Improve naming of classes, methods, and variables related to block replication and recovery

2015-04-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482200#comment-14482200
 ] 

Zhe Zhang commented on HDFS-7955:
-

[~rakeshr] Thanks for the question. Yes, I will create a separate JIRA for the 
2nd point in the description. 

Also, regarding the 1st point, Jing, Nicholas, and I agreed that it's better to 
rename the EC-related block repair logic to reconstruction. Does that sound 
good to you?

 Improve naming of classes, methods, and variables related to block 
 replication and recovery
 ---

 Key: HDFS-7955
 URL: https://issues.apache.org/jira/browse/HDFS-7955
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Rakesh R

 Many existing names should be revised to avoid confusion when blocks can be 
 both replicated and erasure coded. This JIRA aims to solicit opinions on 
 making those names more consistent and intuitive.
 # In current HDFS _block recovery_ refers to the process of finalizing the 
 last block of a file, triggered by _lease recovery_. It is different from the 
 intuitive meaning of _recovering a lost block_. To avoid confusion, I can 
 think of 2 options:
 #* Rename this process as _block finalization_ or _block completion_. I 
 prefer this option because this is literally not a recovery.
 #* If we want to keep existing terms unchanged, we can name all EC recovery 
 and re-replication logic _reconstruction_.  
 # As Kai [suggested | 
 https://issues.apache.org/jira/browse/HDFS-7369?focusedCommentId=14361131page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14361131]
  under HDFS-7369, several replication-based names should be made more generic:
 #* {{UnderReplicatedBlocks}} and {{neededReplications}}. E.g. we can use 
 {{LowRedundancyBlocks}}/{{AtRiskBlocks}}, and 
 {{neededRecovery}}/{{neededReconstruction}}.
 #* {{PendingReplicationBlocks}}
 #* {{ReplicationMonitor}}
 I'm sure the above list is incomplete; discussions and comments are very 
 welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482402#comment-14482402
 ] 

Hadoop QA commented on HDFS-8071:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723463/HDFS-8071-01.patch
  against trunk revision 3fb5abf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10184//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10184//console

This message is automatically generated.

 Redundant checkFileProgress() in PART II of getAdditionalBlock()
 

 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-8071-01.patch, HDFS-8071-02.patch


 {{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
 {{analyzeFileState()}}, which among other things checks replication of the 
 penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
 Checking file progress in Part II is not necessary, because Part I has already 
 ensured that the penultimate block is complete. It cannot change to incomplete 
 unless the file is truncated, which is not allowed for files under 
 construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482440#comment-14482440
 ] 

Masatake Iwasaki commented on HDFS-8069:


bq. Is it possible to introduce sampling of DFSInputStream read operations 
within a current trace that has been enabled?

No. There are some discussions in HADOOP-11758 and HTRACE-69.


 Tracing implementation on DFSInputStream seriously degrades performance
 ---

 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical

 I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
 serious performance impact when Accumulo registers itself as a SpanReceiver.
 The context of the test in which I noticed the impact is that an Accumulo 
 process reads a series of updates from a write-ahead log. This is just 
 reading a series of Writable objects from a file in HDFS. With tracing 
 enabled, I waited for at least 10 minutes and the server still hadn't read a 
 ~300MB file.
 Doing a poor-man's inspection via repeated thread dumps, I always see 
 something like the following:
 {noformat}
 replication task 2 daemon prio=10 tid=0x02842800 nid=0x794d 
 runnable [0x7f6c7b1ec000]
java.lang.Thread.State: RUNNABLE
 at 
 java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
 at org.apache.htrace.Tracer.deliver(Tracer.java:80)
 at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
 - locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
 at org.apache.htrace.TraceScope.close(TraceScope.java:78)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at java.io.DataInputStream.readByte(DataInputStream.java:265)
 at 
 org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
 at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
 at 
 org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
... more accumulo code omitted...
 {noformat}
 What I'm seeing here is that reading a single byte (in 
 WritableUtils.readVLong) causes a new span to be created and closed (which 
 includes a flush to the SpanReceiver). This results in an extreme number of 
 spans for {{DFSInputStream.byteArrayRead}} just for reading a file from HDFS 
 -- over 700k spans just to read a file of a few hundred MB.
 Perhaps there's something different we need to do for the SpanReceiver in 
 Accumulo? I'm not entirely sure, but this was rather unexpected.
 cc/ [~cmccabe]
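
To illustrate the cost pattern described above, a self-contained sketch 
assuming the htrace 3.x API that Hadoop 2.7 uses (the method below is a 
stand-in, not the actual DFSInputStream code):
{code}
import org.apache.htrace.Sampler;
import org.apache.htrace.Trace;
import org.apache.htrace.TraceScope;

public class PerReadSpanDemo {
  // Stand-in for a traced read: one span per call.
  static int tracedRead() {
    TraceScope scope = Trace.startSpan("byteArrayRead", Sampler.ALWAYS);
    try {
      return 0; // pretend we read one byte
    } finally {
      scope.close(); // the span is stopped and delivered to every SpanReceiver
    }
  }

  public static void main(String[] args) {
    // A byte-at-a-time caller (like WritableUtils.readVLong) multiplies the
    // per-span overhead by the number of bytes read, which is how a
    // few-hundred-MB file produces over 700k spans.
    for (int i = 0; i < 1000; i++) {
      tracedRead();
    }
  }
}
{code}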



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8072) Reserved RBW space is not released if client terminates while writing block

2015-04-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8072:

Attachment: HDFS-8072.01.patch

Patch to release reserved space when the BlockReceiver encounters an 
IOException.
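
In outline, the shape of such a fix might look like the following sketch 
(class and method names here are illustrative, not the actual patch):
{code}
// Illustrative only -- see HDFS-8072.01.patch for the real change.
class ReceiverSketch {
  interface ReplicaHandle {
    void releaseRemainingReservedSpace(); // hypothetical name
  }

  private final ReplicaHandle replica;

  ReceiverSketch(ReplicaHandle replica) { this.replica = replica; }

  void receiveBlock() throws java.io.IOException {
    try {
      // ... read packets from the client and write them to disk ...
    } catch (java.io.IOException ioe) {
      // The client died mid-write: return the unused reservation now,
      // instead of holding it until the DataNode restarts.
      replica.releaseRemainingReservedSpace();
      throw ioe;
    }
  }
}
{code}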

 Reserved RBW space is not released if client terminates while writing block
 ---

 Key: HDFS-8072
 URL: https://issues.apache.org/jira/browse/HDFS-8072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8072.01.patch


 The DataNode reserves space for a full block when creating an RBW block 
 (introduced in HDFS-6898).
 The reserved space is released incrementally as data is written to disk and 
 fully when the block is finalized. However if the client process terminates 
 unexpectedly mid-write then the reserved space is not released until the DN 
 is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8072) Reserved RBW space is not released if client terminates while writing block

2015-04-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8072:

Status: Patch Available  (was: Open)

 Reserved RBW space is not released if client terminates while writing block
 ---

 Key: HDFS-8072
 URL: https://issues.apache.org/jira/browse/HDFS-8072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8072.01.patch


 The DataNode reserves space for a full block when creating an RBW block 
 (introduced in HDFS-6898).
 The reserved space is released incrementally as data is written to disk and 
 fully when the block is finalized. However if the client process terminates 
 unexpectedly mid-write then the reserved space is not released until the DN 
 is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8067) haadmin commands doesn't work in Federation with HA

2015-04-06 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482498#comment-14482498
 ] 

Ajith S commented on HDFS-8067:
---

Thanks for the comment, Dave. But I think the -ns option somehow got removed 
after HDFS-7808 and HDFS-7324.
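
For context, the usage under discussion (nameservice and NameNode IDs below 
are placeholders; this is how the -ns option disambiguated the target 
nameservice in a federated HA setup):
{noformat}
hdfs haadmin -ns ns1 -transitionToActive nn1
hdfs haadmin -ns ns1 -getServiceState nn1
{noformat}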

 haadmin commands doesn't work in Federation with HA
 ---

 Key: HDFS-8067
 URL: https://issues.apache.org/jira/browse/HDFS-8067
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ajith S
Assignee: Ajith S

 Scenario: setting up multiple nameservices with an HA configuration for each 
 nameservice (manual failover).
 After starting the journal nodes and namenodes, both nodes are in standby 
 mode. 
 All of the following haadmin commands
  *haadmin*
-transitionToActive
-transitionToStandby 
-failover 
-getServiceState 
-checkHealth  
 failed with exception
 _Illegal argument: Unable to determine the nameservice id._



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-8071:
--
Attachment: HDFS-8071-02.patch

Good point. The new patch removes the redundant readLock().
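
In outline, the kind of simplification described (a hedged sketch with 
invented names; the real diff is in HDFS-8071-02.patch): every caller already 
holds the namesystem lock, so the nested read lock adds nothing.
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class NamesystemSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void getAdditionalBlock() {
    lock.writeLock().lock(); // Parts I and II both run under this lock
    try {
      checkFileProgress();
    } finally {
      lock.writeLock().unlock();
    }
  }

  private void checkFileProgress() {
    // Before: lock.readLock().lock(); try { ... } finally { unlock(); }
    // After: no nested readLock() -- the caller already holds the lock.
    // ... penultimate-block completeness check ...
  }
}
{code}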

 Redundant checkFileProgress() in PART II of getAdditionalBlock()
 

 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-8071-01.patch, HDFS-8071-02.patch


 {{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
 {{analyzeFileState()}}, which among other things checks replication of the 
 penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
 Checking file progress in Part II is not necessary, because Part I has already 
 ensured that the penultimate block is complete. It cannot change to incomplete 
 unless the file is truncated, which is not allowed for files under 
 construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-8071:
--
Hadoop Flags: Reviewed

+1 patch looks good.

 Redundant checkFileProgress() in PART II of getAdditionalBlock()
 

 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-8071-01.patch, HDFS-8071-02.patch


 {{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
 {{analyzeFileState()}}, which among other things checks replication of the 
 penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
 Checking file progress in Part II is not necessary, because Part I has already 
 ensured that the penultimate block is complete. It cannot change to incomplete 
 unless the file is truncated, which is not allowed for files under 
 construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7792) Add links to FaultInjectFramework and SLGUserGuide to site index

2015-04-06 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482267#comment-14482267
 ] 

Masatake Iwasaki commented on HDFS-7792:


I just left FaultInjectFramework.md as is. It could be used when HDFS-2261 is 
fixed.

 Add links to FaultInjectFramework and SLGUserGuide to site index
 

 Key: HDFS-7792
 URL: https://issues.apache.org/jira/browse/HDFS-7792
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-7792.001.patch


 FaultInjectFramework.html and SLGUserGuide.html are not linked from anywhere. 
 Add links to them in site.xml if the contents are not outdated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7792) Add links to FaultInjectFramework and SLGUserGuide to site index

2015-04-06 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-7792:
---
Status: Patch Available  (was: Open)

 Add links to FaultInjectFramework and SLGUserGuide to site index
 

 Key: HDFS-7792
 URL: https://issues.apache.org/jira/browse/HDFS-7792
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-7792.001.patch


 FaultInjectFramework.html and SLGUserGuide.html are not linked from anywhere. 
 Add links to them in site.xml if the contents are not outdated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8059) Erasure coding: move dataBlockNum and parityBlockNum from BlockInfoStriped to INodeFile

2015-04-06 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482483#comment-14482483
 ] 

Yi Liu commented on HDFS-8059:
--

Jing, thanks for your detailed and nice comments!

I agree with your analysis, but currently for contiguous blocks the 
storage-related info (replication, storagePolicy) is stored in INodeFile and is 
the same for all blocks, so it's natural to keep dataBlockNum and parityBlockNum 
in INodeFile itself too; this can save NN memory in the case of large files, as 
you said.

Furthermore, for NN ops like {{getAdditionalBlock}}, if these two pieces of 
information are available in INodeFile, then it's easier and more efficient to 
construct {{BlockInfoStripedUnderConstruction}}, right? Otherwise we would have 
to get them some other way, maybe again through the ECZone? In the current 
branch they are hard-coded.

{quote}
Maybe we should wait and see more EC use cases in practice to decide if we want 
to do this optimization?
{quote}
Sure.
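
To make the proposal concrete, a minimal sketch (field and class names follow 
this thread; the actual patch may differ): the stripe geometry lives once per 
file, in the striped-blocks feature, rather than once per block.
{code}
// Illustrative sketch of the direction discussed here, not the actual patch.
class FileWithStripedBlocksFeature {
  private final short dataBlockNum;   // data units per striped block group
  private final short parityBlockNum; // parity units per striped block group

  FileWithStripedBlocksFeature(short dataBlockNum, short parityBlockNum) {
    this.dataBlockNum = dataBlockNum;
    this.parityBlockNum = parityBlockNum;
  }

  // NN ops like getAdditionalBlock can then read the geometry from the INode
  // feature directly, instead of from each BlockInfoStriped or the EC zone.
  short getDataBlockNum()   { return dataBlockNum; }
  short getParityBlockNum() { return parityBlockNum; }
}
{code}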

 Erasure coding: move dataBlockNum and parityBlockNum from BlockInfoStriped to 
 INodeFile
 ---

 Key: HDFS-8059
 URL: https://issues.apache.org/jira/browse/HDFS-8059
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8059.001.patch


 Move {{dataBlockNum}} and {{parityBlockNum}} from BlockInfoStriped to 
 INodeFile, and store them in {{FileWithStripedBlocksFeature}}.
 Ideally these two numbers are the same for all striped blocks in a file, and 
 storing them in BlockInfoStriped wastes NN memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7715) Implement the Hitchhiker erasure coding algorithm

2015-04-06 Thread jack liuquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482497#comment-14482497
 ] 

jack liuquan commented on HDFS-7715:


High level:
1. Please write comprehensive class header comments about the new code and 
coder, also acknowledging the original author's effort.
bq. OK, sure.
2. For now, we need to figure out how to map these raw HH coders to 
corresponding high-level {{ErasureCoder}}s, if we decide to implement them as 
raw coders directly;
bq. When do you have time for a phone call? I'd like to discuss this with you 
by phone. Thanks. :)
3. Do we have tests for the new coders?
bq. Yes, I have tested the new coders and they are correct. But because of the 
30K limit, I didn't upload the test code. Can I upload the test code 
separately?

1. Any better name for the variable *pb_vec*?
bq. It was named by Rashmi; maybe Rashmi can give a suggestion. I think 
*pb_vec* is an index for storing the piggybacks of the first sub-stripe, so 
maybe pb_index would be fine.
2. Move the code that computes the generating polynomial to HHUtil?
bq. Sounds good, I will do it in a new patch.
3. The following variables are not good. Please use numDataUnits and 
numParityUnits instead for consistency in all places.
bq. If we use numDataUnits and numParityUnits for consistency, we need to 
change numDataUnits and numParityUnits in {{AbstractRawErasureCoder}} from 
{{private}} to {{protected}}.

4. In HHUtil.getPiggyBacksFromInput, the parameter encoder isn't used.
bq. The parameter encoder is used in line 62:
{code}
+encoder.encode(tempInput, tempOutput);
{code}


 Implement the Hitchhiker erasure coding algorithm
 -

 Key: HDFS-7715
 URL: https://issues.apache.org/jira/browse/HDFS-7715
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: jack liuquan
 Attachments: 7715-hitchhikerXOR-v2.patch, 
 HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch


 [Hitchhiker | 
 http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
 a new erasure coding algorithm developed as a research project at UC 
 Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
 during data reconstruction. This JIRA aims to introduce Hitchhiker to the 
 HDFS-EC framework, as one of the pluggable codec algorithms.
 The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-8071:
-
Assignee: Konstantin Shvachko

 Redundant checkFileProgress() in PART II of getAdditionalBlock()
 

 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-8071-01.patch


 {{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
 {{analyzeFileState()}}, which among other things checks replication of the 
 penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
 Checking file progress in Part II is not necessary, because Part I has already 
 ensured that the penultimate block is complete. It cannot change to incomplete 
 unless the file is truncated, which is not allowed for files under 
 construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8058) Erasure coding: use BlockInfo[] for both striped and contiguous blocks in INodeFile

2015-04-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482332#comment-14482332
 ] 

Jing Zhao commented on HDFS-8058:
-

Thanks for working on this, Yi!

bq. The only concern I can think of is type safety

Yes, I agree with Zhe here. Type safety is our main concern when making the 
current design.

 Erasure coding: use BlockInfo[] for both striped and contiguous blocks in 
 INodeFile
 ---

 Key: HDFS-8058
 URL: https://issues.apache.org/jira/browse/HDFS-8058
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8058.001.patch, HDFS-8058.002.patch


 This JIRA is to use {{BlockInfo[] blocks}} for both striped and contiguous 
 blocks in INodeFile.
 Currently {{FileWithStripedBlocksFeature}} keeps a separate list for striped 
 blocks, the methods there duplicate those in INodeFile, and the current code 
 needs to check {{isStriped}} and then do different things. Also, if a file is 
 striped, the {{blocks}} field in INodeFile still occupies a reference's worth 
 of memory.
 These are not necessary, and we can use the same {{blocks}} to make the code 
 clearer.
 I keep {{FileWithStripedBlocksFeature}} empty for future use: I will file 
 a new JIRA to move {{dataBlockNum}} and {{parityBlockNum}} from 
 *BlockInfoStriped* to INodeFile, since ideally they are the same for all 
 striped blocks in a file, and storing them in each block wastes NN memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3233) Move IP to FQDN conversion from DatanodeJSPHelper to DatanodeID

2015-04-06 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482358#comment-14482358
 ] 

Gabor Liptak commented on HDFS-3233:


Eli, is this issue ready to be resolved? Thanks

 Move IP to FQDN conversion from DatanodeJSPHelper to DatanodeID
 ---

 Key: HDFS-3233
 URL: https://issues.apache.org/jira/browse/HDFS-3233
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Priority: Minor
  Labels: newbie

 In a handful of places DatanodeJSPHelper looks up the IP for a DN and then 
 determines an FQDN for the IP. We should move this code to a single place: a 
 new method on DatanodeID that returns the FQDN for a DatanodeID.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7792) Add links to FaultInjectFramework and SLGUserGuide to site index

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482411#comment-14482411
 ] 

Hadoop QA commented on HDFS-7792:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723495/HDFS-7792.001.patch
  against trunk revision 3fb5abf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10185//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10185//console

This message is automatically generated.

 Add links to FaultInjectFramework and SLGUserGuide to site index
 

 Key: HDFS-7792
 URL: https://issues.apache.org/jira/browse/HDFS-7792
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HDFS-7792.001.patch


 FaultInjectFramework.html and SLGUserGuide.html are not linked from anywhere. 
 Add links to them in site.xml if the contents are not outdated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8049) Annotation client implementation as private

2015-04-06 Thread Takuya Fukudome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takuya Fukudome updated HDFS-8049:
--
Attachment: HDFS-8049.2.patch

Hi Nicholas,

Thank you for the review. I attached a new patch which addresses your 
comment. Please review the patch. Thanks.

 Annotation client implementation as private
 ---

 Key: HDFS-8049
 URL: https://issues.apache.org/jira/browse/HDFS-8049
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Takuya Fukudome
 Attachments: HDFS-8049.1.patch, HDFS-8049.2.patch


 The @InterfaceAudience Annotation is missing for quite a few client classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8072) Reserved RBW space is not released if client terminates while writing block

2015-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482456#comment-14482456
 ] 

Arpit Agarwal commented on HDFS-8072:
-

Found by [~cnauroth] (thanks!).

 Reserved RBW space is not released if client terminates while writing block
 ---

 Key: HDFS-8072
 URL: https://issues.apache.org/jira/browse/HDFS-8072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal

 The DataNode reserves space for a full block when creating an RBW block 
 (introduced in HDFS-6898).
 The reserved space is released incrementally as data is written to disk and 
 fully when the block is finalized. However if the client process terminates 
 unexpectedly mid-write then the reserved space is not released until the DN 
 is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8072) Reserved RBW space is not released if client terminates while writing block

2015-04-06 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-8072:
---

 Summary: Reserved RBW space is not released if client terminates 
while writing block
 Key: HDFS-8072
 URL: https://issues.apache.org/jira/browse/HDFS-8072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The DataNode reserves space for a full block when creating an RBW block 
(introduced in HDFS-6898).

The reserved space is released incrementally as data is written to disk and 
fully when the block is finalized. However if the client process terminates 
unexpectedly mid-write then the reserved space is not released until the DN is 
restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8069) Tracing implementation on DFSInputStream seriously degrades performance

2015-04-06 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482274#comment-14482274
 ] 

Josh Elser commented on HDFS-8069:
--

With regards to your other points:

Comments on the solutions to point 1:
# As Billie said, we're not tracing the tracing code :). 
# A non-starter for me. We've had distributed tracing support built into 
Accumulo for years without issue. To suddenly inform users that they need to 
spin up a second cluster is a no-go.
# If htraced had support for Accumulo as a backing store, I'd jump for joy. 
But running one big-table application at a time is more than enough for me. 
Security isn't really relevant here -- there's more to Accumulo than just the 
security aspect. This kind of goes back to point 2: we have had this support 
internally in Accumulo for some time. We really want to see it transparently 
go down through HDFS for the added insight.

Point 2:
Again, I think Billie covered this already: this was caused by the tracing of a 
single operation. The traced operation in Accumulo read a file off of disk. 
Performance tanked due to excessive spans from one parent span.

bq. I wonder if we could simply have Accumulo use a shim API that we could later 
change over to call HTrace under the covers, once these issues have been worked 
out. I'm a little concerned that we may want to change the HTrace API in the 
future and we might find that Accumulo has done some stuff we weren't expecting 
with it. What do you think?

It would certainly be much nicer to get rid of our tracer sink code and push it 
up into HTrace. Catching API changes early (instead of after a new HTrace 
version is released and Accumulo tries to use it) is ideal. Perhaps this is 
something we can start considering. The other side of the coin is that we could 
(will) be a good consumer that will try to hold you to some semblance of a 
stable API. Either way, a good discussion we can have over in HTrace rather 
than here :)

 Tracing implementation on DFSInputStream seriously degrades performance
 ---

 Key: HDFS-8069
 URL: https://issues.apache.org/jira/browse/HDFS-8069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Josh Elser
Priority: Critical

 I've been doing some testing of Accumulo with HDFS 2.7.0 and have noticed a 
 serious performance impact when Accumulo registers itself as a SpanReceiver.
 The context of the test in which I noticed the impact is that an Accumulo 
 process reads a series of updates from a write-ahead log. This is just 
 reading a series of Writable objects from a file in HDFS. With tracing 
 enabled, I waited for at least 10 minutes and the server still hadn't read a 
 ~300MB file.
 Doing a poor-man's inspection via repeated thread dumps, I always see 
 something like the following:
 {noformat}
 replication task 2 daemon prio=10 tid=0x02842800 nid=0x794d 
 runnable [0x7f6c7b1ec000]
java.lang.Thread.State: RUNNABLE
 at 
 java.util.concurrent.CopyOnWriteArrayList.iterator(CopyOnWriteArrayList.java:959)
 at org.apache.htrace.Tracer.deliver(Tracer.java:80)
 at org.apache.htrace.impl.MilliSpan.stop(MilliSpan.java:177)
 - locked 0x00077a770730 (a org.apache.htrace.impl.MilliSpan)
 at org.apache.htrace.TraceScope.close(TraceScope.java:78)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:898)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:697)
 - locked 0x00079fa39a48 (a 
 org.apache.hadoop.hdfs.DFSInputStream)
 at java.io.DataInputStream.readByte(DataInputStream.java:265)
 at 
 org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
 at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
 at 
 org.apache.accumulo.core.data.Mutation.readFields(Mutation.java:951)
... more accumulo code omitted...
 {noformat}
 What I'm seeing here is that reading a single byte (in 
 WritableUtils.readVLong) causes a new span to be created and closed (which 
 includes a flush to the SpanReceiver). This results in an extreme number of 
 spans for {{DFSInputStream.byteArrayRead}} just for reading a file from HDFS 
 -- over 700k spans just to read a file of a few hundred MB.
 Perhaps there's something different we need to do for the SpanReceiver in 
 Accumulo? I'm not entirely sure, but this was rather unexpected.
 cc/ [~cmccabe]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8058) Erasure coding: use BlockInfo[] for both striped and contiguous blocks in INodeFile

2015-04-06 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482423#comment-14482423
 ] 

Yi Liu commented on HDFS-8058:
--

Zhe, Jing, thanks for your comments!

{quote}
The only concern I can think of is type safety: on the surface, a generic 
BlockInfo[] could contain mixed types. We need some extra logic to prevent a 
striped block from being added to a non-striped file, and vice versa.
{quote}
In my view this would never happen: although the declared type is 
{{BlockInfo[]}}, the actual instance type is {{BlockInfoContiguous[]}} or 
{{BlockInfoStriped[]}}, and we also do the {{assertBlock}} check before 
{{setBlock}}, {{addBlock}}, etc.  So I think it can never contain mixed types, 
since you can't add a {{BlockInfoContiguous}} instance to a 
{{BlockInfoStriped[]}}, and vice versa.  Am I missing anything?
{code}
private void assertBlock(BlockInfo blk) {
  if (isStriped()) {
assert blk.isStriped();
  } else {
assert !blk.isStriped();
  }
}
{code}
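
As a side note, Java's covariant arrays enforce this at runtime even without 
the assert: storing a contiguous element into an array whose concrete type is 
{{BlockInfoStriped[]}} fails with {{ArrayStoreException}}. A self-contained 
illustration with simplified stand-in classes:
{code}
class BlockInfo {}
class BlockInfoContiguous extends BlockInfo {}
class BlockInfoStriped extends BlockInfo {}

public class ArrayStoreDemo {
  public static void main(String[] args) {
    BlockInfo[] blocks = new BlockInfoStriped[1]; // static type is the base
    try {
      blocks[0] = new BlockInfoContiguous();      // wrong runtime element type
    } catch (ArrayStoreException e) {
      System.out.println("mixed type rejected: " + e);
    }
  }
}
{code}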

 Erasure coding: use BlockInfo[] for both striped and contiguous blocks in 
 INodeFile
 ---

 Key: HDFS-8058
 URL: https://issues.apache.org/jira/browse/HDFS-8058
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8058.001.patch, HDFS-8058.002.patch


 This JIRA is to use {{BlockInfo[] blocks}} for both striped and contiguous 
 blocks in INodeFile.
 Currently {{FileWithStripedBlocksFeature}} keeps a separate list for striped 
 blocks, the methods there duplicate those in INodeFile, and the current code 
 needs to check {{isStriped}} and then do different things. Also, if a file is 
 striped, the {{blocks}} field in INodeFile still occupies a reference's worth 
 of memory.
 These are not necessary, and we can use the same {{blocks}} to make the code 
 clearer.
 I keep {{FileWithStripedBlocksFeature}} empty for future use: I will file 
 a new JIRA to move {{dataBlockNum}} and {{parityBlockNum}} from 
 *BlockInfoStriped* to INodeFile, since ideally they are the same for all 
 striped blocks in a file, and storing them in each block wastes NN memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8072) Reserved RBW space is not released if client terminates while writing block

2015-04-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8072:

Attachment: HDFS-8072.02.patch

v02 patch with updated test case (explicitly set replica count to 3).

 Reserved RBW space is not released if client terminates while writing block
 ---

 Key: HDFS-8072
 URL: https://issues.apache.org/jira/browse/HDFS-8072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.6.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: HDFS-8072.01.patch, HDFS-8072.02.patch


 The DataNode reserves space for a full block when creating an RBW block 
 (introduced in HDFS-6898).
 The reserved space is released incrementally as data is written to disk and 
 fully when the block is finalized. However if the client process terminates 
 unexpectedly mid-write then the reserved space is not released until the DN 
 is restarted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-04-06 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482513#comment-14482513
 ] 

Walter Su commented on HDFS-7980:
-

I don't think we need to add a test case. It's obvious enough that 
{{processFirstBlockReport}} will be called only once.

 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
 Attachments: HDFS-7980.001.patch


 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method (an incremental block report) before 
 calling the bpNamenode.blockReport() method. So in a large (several thousands 
 of datanodes) and busy cluster it slows down the startup of the namenode by 
 more than one hour. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8058) Erasure coding: use BlockInfo[] for both striped and contiguous blocks in INodeFile

2015-04-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482568#comment-14482568
 ] 

Zhe Zhang commented on HDFS-8058:
-

Thanks Yi for clarifying. Yes {{assertBlock}} is a good way to guard against 
mixed types. I will upload my full review tomorrow.

 Erasure coding: use BlockInfo[] for both striped and contiguous blocks in 
 INodeFile
 ---

 Key: HDFS-8058
 URL: https://issues.apache.org/jira/browse/HDFS-8058
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8058.001.patch, HDFS-8058.002.patch


 This JIRA is to use {{BlockInfo[] blocks}} for both striped and contiguous 
 blocks in INodeFile.
 Currently {{FileWithStripedBlocksFeature}} keeps a separate list for striped 
 blocks, the methods there duplicate those in INodeFile, and the current code 
 needs to check {{isStriped}} and then do different things. Also, if a file is 
 striped, the {{blocks}} field in INodeFile still occupies a reference's worth 
 of memory.
 These are not necessary, and we can use the same {{blocks}} to make the code 
 clearer.
 I keep {{FileWithStripedBlocksFeature}} empty for future use: I will file 
 a new JIRA to move {{dataBlockNum}} and {{parityBlockNum}} from 
 *BlockInfoStriped* to INodeFile, since ideally they are the same for all 
 striped blocks in a file, and storing them in each block wastes NN memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8073) Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily overrided.

2015-04-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8073:

Description: We want to implement a placement policy (e.g. HDFS-7891) based 
on the default policy.  It'll be easier if we could split 
BlockPlacementPolicyDefault.chooseTarget(..).  (was: We want to implement a 
placement policy( eg. HDFS-7891) based on default policy. )
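
A small, self-contained sketch of the intended refactoring style (all names 
below are invented for illustration): keep the overall chooseTarget(..) flow 
in the base class and expose each placement step as a protected hook, so a 
policy such as an HDFS-7891-style one overrides a single step instead of 
copying the whole method.
{code}
abstract class PlacementPolicyBase {
  // Template method: the fixed overall flow, kept in the default policy.
  final String[] chooseTargets(int numReplicas) {
    String[] targets = new String[numReplicas];
    for (int i = 0; i < numReplicas; i++) {
      targets[i] = (i == 0) ? chooseFirstNode() : chooseAdditionalNode(i);
    }
    return targets;
  }

  // Overridable steps, split out of the monolithic method.
  protected String chooseFirstNode() { return "local-node"; }
  protected String chooseAdditionalNode(int i) { return "remote-rack-" + i; }
}

class NodeGroupAwarePolicy extends PlacementPolicyBase {
  @Override
  protected String chooseAdditionalNode(int i) {
    return "nodegroup-aware-" + i; // custom step; rest of the flow is reused
  }
}
{code}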

 Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily 
 overrided.
 -

 Key: HDFS-8073
 URL: https://issues.apache.org/jira/browse/HDFS-8073
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial

 We want to implement a placement policy (e.g. HDFS-7891) based on the default 
 policy.  It'll be easier if we could split 
 BlockPlacementPolicyDefault.chooseTarget(..).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8059) Erasure coding: move dataBlockNum and parityBlockNum from BlockInfoStriped to INodeFile

2015-04-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482620#comment-14482620
 ] 

Jing Zhao commented on HDFS-8059:
-

bq. Otherwise we should use other ways to get these two information, maybe 
again through the ECZone?

Yes. We should get this information when resolving the path. Therefore, if we 
really need to save memory in the future, we could even skip the two numbers 
at the INodeFile level.

 Erasure coding: move dataBlockNum and parityBlockNum from BlockInfoStriped to 
 INodeFile
 ---

 Key: HDFS-8059
 URL: https://issues.apache.org/jira/browse/HDFS-8059
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8059.001.patch


 Move {{dataBlockNum}} and {{parityBlockNum}} from BlockInfoStriped to 
 INodeFile, and store them in {{FileWithStripedBlocksFeature}}.
 Ideally these two numbers are the same for all striped blocks in a file, and 
 storing them in BlockInfoStriped wastes NN memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8073) Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily overrided.

2015-04-06 Thread Walter Su (JIRA)
Walter Su created HDFS-8073:
---

 Summary: Split BlockPlacementPolicyDefault.chooseTarget(..) so it 
can be easily overrided.
 Key: HDFS-8073
 URL: https://issues.apache.org/jira/browse/HDFS-8073
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7937) Erasure Coding: INodeFile quota computation unit tests

2015-04-06 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-7937:
-
Attachment: HDFS-7937.5.patch

 Erasure Coding: INodeFile quota computation unit tests
 --

 Key: HDFS-7937
 URL: https://issues.apache.org/jira/browse/HDFS-7937
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Minor
 Attachments: HDFS-7937.1.patch, HDFS-7937.2.patch, HDFS-7937.3.patch, 
 HDFS-7937.4.patch, HDFS-7937.5.patch


 Unit test for [HDFS-7826|https://issues.apache.org/jira/browse/HDFS-7826]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8071) Redundant checkFileProgress() in PART II of getAdditionalBlock()

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482607#comment-14482607
 ] 

Hadoop QA commented on HDFS-8071:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723508/HDFS-8071-02.patch
  against trunk revision 3fb5abf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10186//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HDFS-Build/10186//console

This message is automatically generated.

 Redundant checkFileProgress() in PART II of getAdditionalBlock()
 

 Key: HDFS-8071
 URL: https://issues.apache.org/jira/browse/HDFS-8071
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.3-alpha
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-8071-01.patch, HDFS-8071-02.patch


 {{FSN.getAdditionalBlock()}} consists of two parts, I and II. Each part calls 
 {{analyzeFileState()}}, which among other things checks replication of the 
 penultimate block via {{checkFileProgress()}}. See details in HDFS-4452.
 Checking file progress in Part II is not necessary, because Part I has already 
 ensured that the penultimate block is complete. It cannot change to incomplete 
 unless the file is truncated, which is not allowed for files under 
 construction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7980) Incremental BlockReport will dramatically slow down the startup of a namenode

2015-04-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482590#comment-14482590
 ] 

Yongjun Zhang commented on HDFS-7980:
-

Hi Guys,

Thanks for reporting and working on this jira. I have a question here.

Patch 001 does
{code}
  if (storageInfo.getBlockReportCount() == 0) {
// The first block report can be processed a lot more efficiently than
// ordinary block reports.  This shortens restart times.
processFirstBlockReport(storageInfo, newReport);
  } else {
invalidatedBlocks = processReport(storageInfo, newReport);
  }
  storageInfo.receivedBlockReport();
{code}

where  {{storageInfo.receivedBlockReport();}} increments the blockReportCount 
by 1, which means {{processFirstBlockReport(storageInfo, newReport);}} will be 
called only once (for the first block report, incremental or full).  

However, it's stated that "We can still use processFirstBlockReport() even 
when storageInfo.numBlocks() > 0":
{quote}
How about the 001 patch? I think it works too. The first arrived incremental 
report only adds a few blocks to the NN. There is no need to calculate a 
toRemove list. We can still use processFirstBlockReport() even when 
storageInfo.numBlocks() > 0
{quote}

The question is, with patch 001, how can {{processFirstBlockReport()}} be used 
even when storageInfo.numBlocks() > 0? I mean, after the first use of 
{{processFirstBlockReport()}}, blockReportCount is incremented by 1, thus 
preventing {{processFirstBlockReport()}} from being used again for later 
reports.

Thanks.




 Incremental BlockReport will dramatically slow down the startup of  a namenode
 --

 Key: HDFS-7980
 URL: https://issues.apache.org/jira/browse/HDFS-7980
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hui Zheng
Assignee: Walter Su
 Attachments: HDFS-7980.001.patch


 In the current implementation the datanode calls the 
 reportReceivedDeletedBlocks() method (an incremental block report) before 
 calling the bpNamenode.blockReport() method. So in a large (several thousands 
 of datanodes) and busy cluster it slows down the startup of the namenode by 
 more than one hour. 
 {code}
 List<DatanodeCommand> blockReport() throws IOException {
   // send block report if timer has expired.
   final long startTime = now();
   if (startTime - lastBlockReport <= dnConf.blockReportInterval) {
     return null;
   }
   final ArrayList<DatanodeCommand> cmds = new ArrayList<DatanodeCommand>();
   // Flush any block information that precedes the block report. Otherwise
   // we have a chance that we will miss the delHint information
   // or we will report an RBW replica after the BlockReport already reports
   // a FINALIZED one.
   reportReceivedDeletedBlocks();
   lastDeletedReport = startTime;
   ...
   // Send the reports to the NN.
   int numReportsSent = 0;
   int numRPCs = 0;
   boolean success = false;
   long brSendStartTime = now();
   try {
     if (totalBlockCount < dnConf.blockReportSplitThreshold) {
       // Below split threshold, send all reports in a single message.
       DatanodeCommand cmd = bpNamenode.blockReport(
           bpRegistration, bpos.getBlockPoolId(), reports);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7889) Subclass DFSOutputStream to support writing striping layout files

2015-04-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482606#comment-14482606
 ] 

Zhe Zhang commented on HDFS-7889:
-

Would also be nice to verify the parity data content in the unit tests.

 Subclass DFSOutputStream to support writing striping layout files
 -

 Key: HDFS-7889
 URL: https://issues.apache.org/jira/browse/HDFS-7889
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo
 Attachments: HDFS-7889-001.patch, HDFS-7889-002.patch, 
 HDFS-7889-003.patch, HDFS-7889-004.patch, HDFS-7889-005.patch, 
 HDFS-7889-006.patch, HDFS-7889-007.patch, HDFS-7889-008.patch, 
 HDFS-7889-009.patch


 After HDFS-7888, we can subclass  {{DFSOutputStream}} to support writing 
 striping layout files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7937) Erasure Coding: INodeFile quota computation unit tests

2015-04-06 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482633#comment-14482633
 ] 

Kai Sasaki commented on HDFS-7937:
--

Thank you. I updated the patch to create {{INodeFile}} under an EC zone. For 
efficient use of {{MiniDFSCluster}}, I separated the test cases. 

 Erasure Coding: INodeFile quota computation unit tests
 --

 Key: HDFS-7937
 URL: https://issues.apache.org/jira/browse/HDFS-7937
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Minor
 Attachments: HDFS-7937.1.patch, HDFS-7937.2.patch, HDFS-7937.3.patch, 
 HDFS-7937.4.patch, HDFS-7937.5.patch


 Unit test for [HDFS-7826|https://issues.apache.org/jira/browse/HDFS-7826]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8073) Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily overrided.

2015-04-06 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-8073:

Description: We want to implement a placement policy (e.g. HDFS-7891) based 
on the default policy. 

 Split BlockPlacementPolicyDefault.chooseTarget(..) so it can be easily 
 overrided.
 -

 Key: HDFS-8073
 URL: https://issues.apache.org/jira/browse/HDFS-8073
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Walter Su
Assignee: Walter Su
Priority: Trivial

 We want to implement a placement policy (e.g. HDFS-7891) based on the default 
 policy. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7969) Erasure coding: NameNode support for lease recovery of striped block groups

2015-04-06 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14480995#comment-14480995
 ] 

Yi Liu commented on HDFS-7969:
--

Hi Zhe, 
The logic in {{initializeBlockRecovery}} to choose the primary replica for 
block recovery should be the same for striped and contiguous blocks, so I 
think you are right to keep them the same.  Your patch looks very good 
overall; a few nits:
*1.* Add descriptions for all interfaces in {{BlockInfoUnderConstruction}}
*2.* Remove the *TODO* comment in 
{{BlockInfoStripedUnderConstruction#initializeBlockRecovery}}

+1 after addressing.

 Erasure coding: NameNode support for lease recovery of striped block groups
 ---

 Key: HDFS-7969
 URL: https://issues.apache.org/jira/browse/HDFS-7969
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HDFS-7969-000.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7617) Add unit tests for editlog transactions for EC

2015-04-06 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481024#comment-14481024
 ] 

Hui Zheng commented on HDFS-7617:
-

I am sorry for the late reply.

 Add unit tests for editlog transactions for EC
 --

 Key: HDFS-7617
 URL: https://issues.apache.org/jira/browse/HDFS-7617
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Hui Zheng
 Fix For: HDFS-7285

 Attachments: HDFS-7617.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7617) Add unit tests for editlog transactions for EC

2015-04-06 Thread Hui Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481023#comment-14481023
 ] 

Hui Zheng commented on HDFS-7617:
-

I am sorry for the late reply. Thanks [~zhz]. 

 Add unit tests for editlog transactions for EC
 --

 Key: HDFS-7617
 URL: https://issues.apache.org/jira/browse/HDFS-7617
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Hui Zheng
 Fix For: HDFS-7285

 Attachments: HDFS-7617.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8066) Erasure coding: Decommission handle for EC blocks.

2015-04-06 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu resolved HDFS-8066.
--
Resolution: Implemented

Looking at the latest branch, this has already been implemented there, so I'm 
closing this as implemented.

 Erasure coding: Decommission handle for EC blocks.
 --

 Key: HDFS-8066
 URL: https://issues.apache.org/jira/browse/HDFS-8066
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu

 This JIRA is to handle decommission for EC blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8065) Support truncate at striped group boundary.

2015-04-06 Thread Yi Liu (JIRA)
Yi Liu created HDFS-8065:


 Summary: Support truncate at striped group boundary.
 Key: HDFS-8065
 URL: https://issues.apache.org/jira/browse/HDFS-8065
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7285
Reporter: Yi Liu
Assignee: Yi Liu


We can first support truncate at the striped group boundary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8049) Annotation client implementation as private

2015-04-06 Thread Takuya Fukudome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takuya Fukudome updated HDFS-8049:
--
Attachment: HDFS-8049.1.patch

I have attached a patch which adds the InterfaceAudience annotation to some 
public classes in the hdfs directory. Please let me know if I haven't covered 
all the client classes. Thank you.

 Annotation client implementation as private
 ---

 Key: HDFS-8049
 URL: https://issues.apache.org/jira/browse/HDFS-8049
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Takuya Fukudome
 Attachments: HDFS-8049.1.patch


 The @InterfaceAudience Annotation is missing for quite a few client classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

