[jira] [Updated] (HDFS-8891) HDFS concat should keep srcs order

2016-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8891:

Attachment: HDFS-8891-test-only-branch-2.6.patch

The regression test passes in branch-2.6. Attaching a regression-test-only 
patch for branch-2.6.

I looked into the source code, and in branch-2.6 the srcs are not put into a 
{{HashMap}} and then converted to an array. In branch-2.6, the order of the 
srcs is preserved.
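For illustration only (this is not the actual {{FSDirConcatOp}} code): deduplicating srcs through a plain {{HashMap}} loses the caller's order when the key set is turned back into an array, while an insertion-ordered map keeps it.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class SrcOrder {
  // Dedup via HashMap keys: iterating the key set does NOT preserve
  // the order in which the caller supplied the srcs.
  static String[] dedupWithHashMap(String[] srcs) {
    Map<String, Boolean> seen = new HashMap<>();
    for (String s : srcs) {
      seen.put(s, Boolean.TRUE);
    }
    return seen.keySet().toArray(new String[0]);
  }

  // A LinkedHashMap (or LinkedHashSet) removes duplicates while keeping
  // the srcs in the order the caller supplied them.
  static String[] dedupKeepingOrder(String[] srcs) {
    Map<String, Boolean> seen = new LinkedHashMap<>();
    for (String s : srcs) {
      seen.put(s, Boolean.TRUE);
    }
    return seen.keySet().toArray(new String[0]);
  }
}
```

The order-preserving variant is the behavior the regression test above asserts: the concatenated blocks must follow the srcs as passed in.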

> HDFS concat should keep srcs order
> --
>
> Key: HDFS-8891
> URL: https://issues.apache.org/jira/browse/HDFS-8891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Blocker
> Fix For: 2.7.2
>
> Attachments: HDFS-8891-test-only-branch-2.6.patch, 
> HDFS-8891.001.patch, HDFS-8891.002.patch
>
>
> FSDirConcatOp.verifySrcFiles may change the src files' order, but it should 
> keep their order as input.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError

2016-01-08 Thread zuotingbing (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088904#comment-15088904
 ] 

zuotingbing commented on HDFS-9617:
---

Thank you for your reply. 

My client code:

[main class]
public class HadoopLoader {
  public HadoopLoader() {
  }

  public static void main(String[] args) {
    HadoopLoader hadoopLoader = new HadoopLoader();

    // upload the data
    hadoopLoader.upload();
  }

  private void upload() {
    new UploadProcess().upload();
  }
}

public class UploadProcess {
  private ExecutorService executorService;
  private Map processingFileMap = new ConcurrentHashMap();

  public void upload() {
    executorService =
        Executors.newFixedThreadPool(HadoopLoader.CONFIG_PROPERTIES.getHandleNum());

    for (int i = 0; i < 1000; i++) {
      processLoad("/home/ztb/testdata/43.bmp",
          "hdfs://10.43.156.157:9000/ztbtest");
    }
  }

  private void processLoad(String filePathName, String hdfsFilePathName) {
    LoadThread loadThread = new LoadThread(filePathName, hdfsFilePathName);
    executorService.execute(loadThread);
  }
}

===
public class LoadThread implements Runnable {
  private static final org.apache.commons.logging.Log LOG =
      LogFactory.getLog(LoadThread.class);

  String filePathName; // full name of the local data file (path + file name)
  String hdfsFilePathName; // full name of the HDFS target (path + file name)

  public LoadThread(String filePathName, String hdfsFilePathName) {
    this.filePathName = filePathName;
    this.hdfsFilePathName = hdfsFilePathName;
  }

  public void writeToHdfs(String filePathName, String hdfsFilePathName)
      throws IOException {
    LOG.info("Start to upload " + filePathName + " to " + hdfsFilePathName);
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(hdfsFilePathName), conf);
    InputStream in = null;
    OutputStream out = null;
    Path hdfsFilePath;
    try {
      in = new BufferedInputStream(new FileInputStream(filePathName));
      hdfsFilePath = new Path(hdfsFilePathName);
      out = fs.create(hdfsFilePath);
      IOUtils.copyBytes(in, out, conf);
    } finally {
      if (in != null) {
        in.close();
      }
      if (out != null) {
        out.close();
      }
    }
    LOG.info("Finish uploading " + filePathName + " to " + hdfsFilePathName);
  }

  @Override
  public void run() {
    try {
      writeToHdfs(filePathName, hdfsFilePathName);
    } catch (IOException e) {
      LOG.error(e.getMessage(), e);
    }
  }
}




I get java_pid8820.hprof when I set -XX:+HeapDumpOnOutOfMemoryError.
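As an observation on the client code above: {{processingFileMap}} is declared in {{UploadProcess}} but never consulted, so all 1000 tasks open their own output stream to the same HDFS path, and each open {{DFSOutputStream}} holds its own packet queue on the heap. A hypothetical guard (class and method names are mine, not Hadoop API) that allows only one in-flight upload per target path might look like this:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class UploadGuard {
  private final ConcurrentMap<String, Boolean> inFlight = new ConcurrentHashMap<>();

  // Returns true if the caller won the right to upload this target path.
  // Concurrent duplicates see a non-null previous value and are skipped,
  // instead of each opening its own output stream to the same file.
  boolean tryAcquire(String hdfsPath) {
    return inFlight.putIfAbsent(hdfsPath, Boolean.TRUE) == null;
  }

  // Called in a finally block once the upload finishes or fails.
  void release(String hdfsPath) {
    inFlight.remove(hdfsPath);
  }
}
```

This would not fix the lease error itself, but it avoids hundreds of writers racing on one path, which is the situation the heap dump below reflects.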


> my java client use muti-thread to put a same file to a same hdfs uri, after 
> no lease error,then client OutOfMemoryError
> ---
>
> Key: HDFS-9617
> URL: https://issues.apache.org/jira/browse/HDFS-9617
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zuotingbing
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease.  
> Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653)
>   at 

[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2016-01-08 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088915#comment-15088915
 ] 

Chris Douglas commented on HDFS-8891:
-

Akira is right; this isn't in 2.6.

> HDFS concat should keep srcs order
> --
>
> Key: HDFS-8891
> URL: https://issues.apache.org/jira/browse/HDFS-8891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Blocker
> Fix For: 2.7.2
>
> Attachments: HDFS-8891-test-only-branch-2.6.patch, 
> HDFS-8891.001.patch, HDFS-8891.002.patch
>
>
> FSDirConcatOp.verifySrcFiles may change the src files' order, but it should 
> keep their order as input.





[jira] [Updated] (HDFS-8891) HDFS concat should keep srcs order

2016-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8891:

Target Version/s:   (was: 2.6.4)

Thanks Chris. Removing target version 2.6.4.

> HDFS concat should keep srcs order
> --
>
> Key: HDFS-8891
> URL: https://issues.apache.org/jira/browse/HDFS-8891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Blocker
> Fix For: 2.7.2
>
> Attachments: HDFS-8891-test-only-branch-2.6.patch, 
> HDFS-8891.001.patch, HDFS-8891.002.patch
>
>
> FSDirConcatOp.verifySrcFiles may change the src files' order, but it should 
> keep their order as input.





[jira] [Updated] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2016-01-08 Thread Archana T (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Archana T updated HDFS-9455:

Assignee: Daisuke Kobayashi  (was: Archana T)

> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, security
>Reporter: Archana T
>Assignee: Daisuke Kobayashi
>Priority: Minor
>
> When a filesystem operation failure happens during distcp, the wrong 
> exception (Invalid Argument) is thrown along with the distcp command usage.
> {color:red} 
> hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp
> Invalid arguments: Unexpected end of file from server
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
>  -f   List of files that need to be copied
>  -filelimit   (Deprecated!) Limit number of files copied
>to <= n
>  -iIgnore failures during copy
> .
> {color} 
> Instead, a proper exception should be thrown.





[jira] [Commented] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2016-01-08 Thread Archana T (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088929#comment-15088929
 ] 

Archana T commented on HDFS-9455:
-

Hi [~daisuke.kobayashi],
I agree with the above proposal.

Assigning this Jira to you.

> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, security
>Reporter: Archana T
>Assignee: Archana T
>Priority: Minor
>
> When a filesystem operation failure happens during distcp, the wrong 
> exception (Invalid Argument) is thrown along with the distcp command usage.
> {color:red} 
> hadoop distcp webhdfs://IP:25003/test/testfile webhdfs://IP:25003/myp
> Invalid arguments: Unexpected end of file from server
> usage: distcp OPTIONS [source_path...] 
>   OPTIONS
>  -append   Reuse existing data in target files and
>append new data to them if possible
>  -asyncShould distcp execution be blocking
>  -atomic   Commit all changes or none
>  -bandwidth   Specify bandwidth per map in MB
>  -delete   Delete from target, files missing in source
>  -diffUse snapshot diff report to identify the
>difference between source and target
>  -f   List of files that need to be copied
>  -filelimit   (Deprecated!) Limit number of files copied
>to <= n
>  -iIgnore failures during copy
> .
> {color} 
> Instead, a proper exception should be thrown.





[jira] [Updated] (HDFS-8891) HDFS concat should keep srcs order

2016-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8891:

Affects Version/s: 2.7.1

> HDFS concat should keep srcs order
> --
>
> Key: HDFS-8891
> URL: https://issues.apache.org/jira/browse/HDFS-8891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Blocker
> Fix For: 2.7.2
>
> Attachments: HDFS-8891-test-only-branch-2.6.patch, 
> HDFS-8891.001.patch, HDFS-8891.002.patch
>
>
> FSDirConcatOp.verifySrcFiles may change the src files' order, but it should 
> keep their order as input.





[jira] [Commented] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError

2016-01-08 Thread zuotingbing (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088936#comment-15088936
 ] 

zuotingbing commented on HDFS-9617:
---

Looking forward to your reply; I am very grateful for your help.

> my java client use muti-thread to put a same file to a same hdfs uri, after 
> no lease error,then client OutOfMemoryError
> ---
>
> Key: HDFS-9617
> URL: https://issues.apache.org/jira/browse/HDFS-9617
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zuotingbing
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease.  
> Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391)
>   at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
> my java client(JVM -Xmx=2G) :
> jmap TOP15:
> num #instances #bytes  class name
> --
>1: 48072 2053976792  [B
>2: 458525987568  
>3: 458525878944  
>4:  33634193112  
>5:  33632548168  
>6:  27332299008  
>7:   5332191696  [Ljava.nio.ByteBuffer;
>8: 247332026600  [C
>9: 312872002368  
> org.apache.hadoop.hdfs.DFSOutputStream$Packet
>   10: 31972 767328  java.util.LinkedList$Node
>   11: 22845 548280  java.lang.String
>   12: 20372 488928  java.util.concurrent.atomic.AtomicLong
>   13:  3700 452984  java.lang.Class
>   14:   981 439576  
>   15:  5583 376344  [S





[jira] [Commented] (HDFS-8562) HDFS Performance is impacted by FileInputStream Finalizer

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088870#comment-15088870
 ] 

Hadoop QA commented on HDFS-8562:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 19m 57s 
{color} | {color:red} root-jdk1.8.0_66 with JDK v1.8.0_66 generated 2 new 
issues (was 731, now 733). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 29m 21s 
{color} | {color:red} root-jdk1.7.0_91 with JDK v1.7.0_91 generated 2 new 
issues (was 724, now 726). {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 7s 
{color} | {color:red} Patch generated 4 new checkstyle issues in root (total 
was 240, now 237). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 55s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 2s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 54s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | 

[jira] [Updated] (HDFS-8767) RawLocalFileSystem.listStatus() returns null for UNIX pipefile

2016-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8767:

Attachment: HDFS-8767-branch-2.6.patch

Cherry-picked this to branch-2.6. I ran the regression test in the patch and it 
passed. Attaching the diff.
Thanks [~djp] and [~cnauroth]!
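As an aside, the special case here is that a UNIX pipe (FIFO) is neither a regular file nor a directory, which is the classification {{listStatus()}} mishandled. A small stand-alone sketch (POSIX-only, shelling out to {{mkfifo}}; this is illustrative, not the Hadoop fix itself) shows how the JDK classifies such a path:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class PipeCheck {
  // Creates a named pipe via the POSIX mkfifo command, then reports whether
  // NIO sees it as a regular file. A FIFO exists on disk but is not a
  // regular file, so code that only handles "regular file or directory"
  // can silently drop it.
  static boolean isRegular(Path fifo) throws IOException, InterruptedException {
    new ProcessBuilder("mkfifo", fifo.toString()).inheritIO().start().waitFor();
    return Files.isRegularFile(fifo);
  }
}
```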

> RawLocalFileSystem.listStatus() returns null for UNIX pipefile
> --
>
> Key: HDFS-8767
> URL: https://issues.apache.org/jira/browse/HDFS-8767
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Kanaka Kumar Avvaru
>Priority: Critical
> Fix For: 2.7.2
>
> Attachments: HDFS-8767-00.patch, HDFS-8767-01.patch, 
> HDFS-8767-02.patch, HDFS-8767-branch-2.6.patch, HDFS-8767.003.patch, 
> HDFS-8767.004.patch
>
>
> Calling FileSystem.listStatus() on a UNIX pipe file returns null instead of 
> the file. The bug breaks Hive when Hive loads data from UNIX pipe file.





[jira] [Updated] (HDFS-8767) RawLocalFileSystem.listStatus() returns null for UNIX pipefile

2016-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-8767:

Fix Version/s: 2.6.4

> RawLocalFileSystem.listStatus() returns null for UNIX pipefile
> --
>
> Key: HDFS-8767
> URL: https://issues.apache.org/jira/browse/HDFS-8767
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Kanaka Kumar Avvaru
>Priority: Critical
> Fix For: 2.7.2, 2.6.4
>
> Attachments: HDFS-8767-00.patch, HDFS-8767-01.patch, 
> HDFS-8767-02.patch, HDFS-8767-branch-2.6.patch, HDFS-8767.003.patch, 
> HDFS-8767.004.patch
>
>
> Calling FileSystem.listStatus() on a UNIX pipe file returns null instead of 
> the file. The bug breaks Hive when Hive loads data from UNIX pipe file.





[jira] [Commented] (HDFS-9624) DataNode start slowly due to the initial DU command operations

2016-01-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088877#comment-15088877
 ] 

Kai Zheng commented on HDFS-9624:
-

Thanks Yiqun for the update! Is it possible to refactor the test so that the 
two test methods can share most of the code?
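A sketch of the kind of refactoring being suggested (all names, bodies, and timings below are placeholders of mine, not the actual HDFS-9624 test): both test methods delegate to one shared helper and differ only in its parameter.

```java
public class DuStartupTestSketch {
  // Hypothetical shared helper: the two tests differ only in whether the
  // cached du value is used, so the common setup and timing live here.
  private static long runStartupScan(boolean useCachedDfsUsed) {
    // ... start a cluster, restart the DataNode, time the block-pool scan ...
    return useCachedDfsUsed ? 1 : 100; // placeholder timings, not real data
  }

  static void testStartupWithCachedDu() {
    assert runStartupScan(true) < runStartupScan(false);
  }

  static void testStartupWithoutCachedDu() {
    assert runStartupScan(false) >= runStartupScan(true);
  }
}
```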

> DataNode start slowly due to the initial DU command operations
> --
>
> Key: HDFS-9624
> URL: https://issues.apache.org/jira/browse/HDFS-9624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, 
> HDFS-9624.003.patch
>
>
> It seems the datanode starts very slowly when I finish the migration of 
> datanodes and restart them. Looking at the DN logs:
> {code}
> 2016-01-06 16:05:08,118 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4
> 2016-01-06 16:05:08,118 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK
> 2016-01-06 16:05:08,176 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> Registered FSDatasetState MBean
> 2016-01-06 16:05:08,177 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544
> 2016-01-06 16:05:08,178 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data2/current...
> 2016-01-06 16:05:08,179 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data3/current...
> 2016-01-06 16:05:08,179 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data4/current...
> 2016-01-06 16:05:08,179 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data5/current...
> 2016-01-06 16:05:08,180 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data6/current...
> 2016-01-06 16:05:08,180 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data7/current...
> 2016-01-06 16:05:08,180 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data8/current...
> 2016-01-06 16:05:08,180 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data9/current...
> 2016-01-06 16:05:08,181 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data10/current...
> 2016-01-06 16:05:08,181 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data11/current...
> 2016-01-06 16:05:08,181 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data12/current...
> 2016-01-06 16:09:49,646 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on 
> /home/data/data/hadoop/dfs/data/data7/current: 281466ms
> 2016-01-06 16:09:54,235 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on 
> /home/data/data/hadoop/dfs/data/data9/current: 286054ms
> 2016-01-06 16:09:57,859 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on 
> /home/data/data/hadoop/dfs/data/data2/current: 289680ms
> 2016-01-06 16:10:00,333 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool 

[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2016-01-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088891#comment-15088891
 ] 

Akira AJISAKA commented on HDFS-8891:
-

bq. shall we cherry-pick this fix to 2.6.4 as well?
No. I don't think this fix should be cherry-picked to 2.6.4, because this bug 
is not in branch-2.6.

> HDFS concat should keep srcs order
> --
>
> Key: HDFS-8891
> URL: https://issues.apache.org/jira/browse/HDFS-8891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Blocker
> Fix For: 2.7.2
>
> Attachments: HDFS-8891.001.patch, HDFS-8891.002.patch
>
>
> FSDirConcatOp.verifySrcFiles may change src files order, but it should their 
> order as input.





[jira] [Commented] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2016-01-08 Thread Daisuke Kobayashi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088906#comment-15088906
 ] 

Daisuke Kobayashi commented on HDFS-9455:
-

Thanks Yongjun for correcting me! Indeed, distcp runs within a single cluster 
per the description, and I could reproduce the issue as follows. [~archanat], 
is this what you're hitting as well?

{noformat}
[daisuke@test1 jars]$ hadoop distcp webhdfs://test1:50470/user/daisuke/hosts 
webhdfs://test1:50470/user/daisuke/dir/
16/01/08 17:40:22 WARN security.UserGroupInformation: 
PriviledgedActionException as:daisuke@HADOOP (auth:KERBEROS) 
cause:java.net.SocketException: Unexpected end of file from server
16/01/08 17:40:22 WARN security.UserGroupInformation: 
PriviledgedActionException as:daisuke@HADOOP (auth:KERBEROS) 
cause:java.net.SocketException: Unexpected end of file from server
16/01/08 17:40:22 ERROR tools.DistCp: Invalid arguments:
java.net.SocketException: Unexpected end of file from server
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:772)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:336)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:91)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:614)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:464)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:493)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:489)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1307)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:239)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:429)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:450)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:697)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:609)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:464)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:493)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:489)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:844)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:859)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1409)
at org.apache.hadoop.tools.DistCp.setTargetPathExists(DistCp.java:200)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
Invalid arguments: Unexpected end of file from server
usage: distcp OPTIONS [source_path...] 
  OPTIONS
 -append   Reuse existing data in target files and
   append new data to them if possible
<...snip...>
{noformat}

So my proposal is to catch SocketException and print a more meaningful message 
such as {{An error occurred while getting the target path}}.

If this is agreeable, may I create a patch to fix this, [~archanat]?
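A minimal sketch of the proposed handling. The class, method, and message below are illustrative stand-ins, not the actual DistCp code; only the idea (catch SocketException around the target-path probe and report a clearer message) comes from the proposal above:

```java
import java.net.SocketException;

public class TargetPathCheckSketch {
    // Stand-in for the filesystem probe that can fail with a SocketException.
    static boolean targetExists(boolean failNetwork) throws SocketException {
        if (failNetwork) {
            throw new SocketException("Unexpected end of file from server");
        }
        return true;
    }

    // Proposed behavior: catch SocketException and report a clearer message
    // instead of letting it surface as "Invalid arguments".
    public static String checkTarget(boolean failNetwork) {
        try {
            targetExists(failNetwork);
            return "target path exists";
        } catch (SocketException e) {
            return "An error occurred while getting the target path: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(checkTarget(true));
    }
}
```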

> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, security
>   

[jira] [Commented] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError

2016-01-08 Thread zuotingbing (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088923#comment-15088923
 ] 

zuotingbing commented on HDFS-9617:
---

I do not use "FileSystem get(final URI uri, final Configuration conf, String 
user)" to get the file system.

By the way, a related question: when I call "FileSystem get(final URI uri, 
final Configuration conf, String user)" repeatedly with the same user string, 
why is the file system I get different each time (a new DFSClient instance is 
created on every call)? Does that make the FS cache useless? Must I close the 
fs every time if I obtain it with "FileSystem get(final URI uri, final 
Configuration conf, String user)", even with the same user?



Thanks.
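For context, the cache-miss behavior asked about above can be reproduced in a minimal, Hadoop-free sketch. The likely cause (an assumption based on how {{FileSystem.Cache.Key}} incorporates the UGI and on UGI equality being identity-based) is that each call to {{FileSystem.get(uri, conf, user)}} builds a fresh UGI, so the cache key never matches a previous one. All class names below are stand-ins:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class FsCacheSketch {
    // Stand-in for UserGroupInformation: no equals()/hashCode() override,
    // so equality is identity-based, like UGI comparing Subjects by reference.
    static class Ugi {
        final String name;
        Ugi(String name) { this.name = name; }
    }

    // Stand-in for FileSystem.Cache.Key: URI plus the UGI.
    static class Key {
        final String uri;
        final Ugi ugi;
        Key(String uri, Ugi ugi) { this.uri = uri; this.ugi = ugi; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return uri.equals(k.uri) && ugi.equals(k.ugi); // identity compare on ugi
        }
        @Override public int hashCode() {
            return Objects.hash(uri, System.identityHashCode(ugi));
        }
    }

    static final Map<Key, Object> CACHE = new HashMap<>();

    // Mimics FileSystem.get(uri, conf, user): a fresh Ugi on every call.
    public static Object get(String uri, String user) {
        Ugi ugi = new Ugi(user); // new instance each time, never equal to the last
        return CACHE.computeIfAbsent(new Key(uri, ugi), k -> new Object());
    }

    public static void main(String[] args) {
        Object fs1 = get("hdfs://nn:8020", "alice");
        Object fs2 = get("hdfs://nn:8020", "alice");
        // The keys never match, so each call yields a distinct "filesystem".
        System.out.println(fs1 == fs2); // prints false
    }
}
```

Under this assumption, yes: each FileSystem obtained via the three-argument {{get}} is a distinct instance and should be closed by the caller.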

> my java client use muti-thread to put a same file to a same hdfs uri, after 
> no lease error,then client OutOfMemoryError
> ---
>
> Key: HDFS-9617
> URL: https://issues.apache.org/jira/browse/HDFS-9617
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zuotingbing
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease.  
> Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391)
>   at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
> my java client(JVM -Xmx=2G) :
> jmap TOP15:
> num #instances #bytes  class name
> --
>1: 48072 2053976792  [B
>2: 458525987568  
>3: 458525878944  
>4:  33634193112  
>5:  33632548168  
>6:  27332299008  
>7:   5332191696  [Ljava.nio.ByteBuffer;
>8: 247332026600  [C
>9: 312872002368  
> org.apache.hadoop.hdfs.DFSOutputStream$Packet
>   10: 31972 767328  java.util.LinkedList$Node
>   11: 22845 548280  java.lang.String
>   12: 20372 488928  

[jira] [Commented] (HDFS-9612) DistCp worker threads are not terminated after jobs are done.

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089638#comment-15089638
 ] 

Hadoop QA commented on HDFS-9612:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 32s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 40s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781254/HDFS-9612.006.patch |
| JIRA Issue | HDFS-9612 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f5fe05bfa954 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38c4c14 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 

[jira] [Resolved] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-9574.
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.6.4
   2.7.3
   3.0.0

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 3.0.0, 2.7.3, 2.6.4
>
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, 
> HDFS-9574.v3.br26.patch, HDFS-9574.v3.br27.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings

2016-01-08 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9628:
-
Attachment: HDFS-9628.HDFS-8707.001.patch

New patch: I double-checked that this one passes valgrind on my machine; 
perhaps there is a difference between local and the Apache build machine.

> libhdfs++: Implement builder apis from C bindings
> -
>
> Key: HDFS-9628
> URL: https://issues.apache.org/jira/browse/HDFS-9628
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9628.HDFS-8707.000.patch, 
> HDFS-9628.HDFS-8707.001.patch
>
>






[jira] [Updated] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9574:
-
Attachment: HDFS-9574.v3.br26.patch

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 3.0.0, 2.7.3, 2.6.4
>
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, 
> HDFS-9574.v3.br26.patch, HDFS-9574.v3.br27.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Updated] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9574:
-
Attachment: (was: HDFS-9574.v3.br26.patch)

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 3.0.0, 2.7.3, 2.6.4
>
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, 
> HDFS-9574.v3.br26.patch, HDFS-9574.v3.br27.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089592#comment-15089592
 ] 

Hadoop QA commented on HDFS-9628:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
25s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 56s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 55s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 50s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 49s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781244/HDFS-9628.HDFS-8707.000.patch
 |
| JIRA Issue | HDFS-9628 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux a5e503554b78 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 2f20790 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14071/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14071/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14071/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_91.txt
 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14071/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client 

[jira] [Updated] (HDFS-9612) DistCp worker threads are not terminated after jobs are done.

2016-01-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9612:
--
Attachment: HDFS-9612.006.patch

Rev06: make javadocs happy.

> DistCp worker threads are not terminated after jobs are done.
> -
>
> Key: HDFS-9612
> URL: https://issues.apache.org/jira/browse/HDFS-9612
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-9612.001.patch, HDFS-9612.002.patch, 
> HDFS-9612.003.patch, HDFS-9612.004.patch, HDFS-9612.005.patch, 
> HDFS-9612.006.patch
>
>
> In HADOOP-11827, a producer-consumer style thread pool was introduced to 
> parallelize the task of listing files/directories.
> We have a use case where a distcp job is run during the commit phase of a MR2 
> job. However, it was found distcp does not terminate ProducerConsumer thread 
> pools properly. Because threads are not terminated, those MR2 jobs never 
> finish.
> In a more typical use case where distcp is run as a standalone job, those 
> threads are terminated forcefully when the java process is terminated. So 
> these leaked threads did not become a problem.
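The leak pattern described in this issue can be illustrated with a minimal, DistCp-free sketch (class and method names are illustrative): non-daemon pool threads outlive the submitted work until the pool is shut down explicitly, which is what keeps an enclosing job from finishing.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolShutdownSketch {
    public static boolean runAndShutdown() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> { /* producer-consumer listing work would go here */ });
        }
        // Without an explicit shutdown, the non-daemon worker threads linger
        // after the work is done; the enclosing process only exits when they
        // are forcefully killed, mirroring the MR2 commit-phase hang above.
        pool.shutdown();
        return pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndShutdown()); // prints true: pool terminated cleanly
    }
}
```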





[jira] [Commented] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089645#comment-15089645
 ] 

Kihwal Lee commented on HDFS-9574:
--

This is an important improvement for rolling upgrades. Committed to branch-2.7 
and branch-2.6.

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: 3.0.0, 2.7.3, 2.6.4
>
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, 
> HDFS-9574.v3.br26.patch, HDFS-9574.v3.br27.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Updated] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9574:
-
Attachment: HDFS-9574.v3.br26.patch

The 2.6 patch is identical to the 2.7 patch except for these two minor 
differences:
- {{StopWatch}} is not available, so {{Time.monotonicNow()}} was used instead.
- {{DataNodeFaultInjector}} does not have a {{set()}} method, so {{instance}} 
is set directly in the test case.

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, 
> HDFS-9574.v3.br26.patch, HDFS-9574.v3.br27.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089862#comment-15089862
 ] 

Hadoop QA commented on HDFS-9628:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 4s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 27s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 43s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 42s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 19s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781268/HDFS-9628.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-9628 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux d27ae293ff11 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 2f20790 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14074/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14074/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14074/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_91.txt
 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14074/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max 

[jira] [Commented] (HDFS-9631) Restarting namenode after deleting a directory with snapshot will fail

2016-01-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089880#comment-15089880
 ] 

Kihwal Lee commented on HDFS-9631:
--

Do you have a test log with the failure?

> Restarting namenode after deleting a directory with snapshot will fail
> --
>
> Key: HDFS-9631
> URL: https://issues.apache.org/jira/browse/HDFS-9631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> I found a number of {{TestOpenFilesWithSnapshot}} tests failed quite 
> frequently. 
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot
> Error Message:
> Timed out waiting for Mini HDFS Cluster to start
> Stack Trace:
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1985)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot(TestOpenFilesWithSnapshot.java:82)
> {noformat}
> These tests ({{testParentDirWithUCFileDeleteWithSnapshot}}, 
> {{testOpenFilesWithRename}}, {{testWithCheckpoint}}) are unable to reconnect 
> to the namenode after restart. It looks like the reconnection failed due to 
> an EOFException when BPServiceActor sends a heartbeat.
> {noformat}
> 2016-01-07 23:25:43,678 [main] WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(1338)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2016-01-07 23:25:44,679 [main] WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(1338)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2016-01-07 23:25:44,720 [DataNode: 
> [[[DISK]file:/home/weichiu/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/,
>  [DISK]file:
> /home/weichiu/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]
>   heartbeating to localhost/127.0.0.1:60472] WARN  datanode
> .DataNode (BPServiceActor.java:offerService(752)) - IOException in 
> offerService
> java.io.EOFException: End of File Exception between local host is: 
> "weichiu.vpc.cloudera.com/172.28.211.219"; destination host is: 
> "localhost":6047
> 2; :; For more details see:  http://wiki.apache.org/hadoop/EOFException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:793)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:766)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1385)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy18.sendHeartbeat(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:154)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:557)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:660)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:851)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1110)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1005)
> {noformat}
> It appears that these three tests all call {{doWriteAndAbort()}}, which 
> creates files and then abort, and then set the parent directory with a 
> snapshot, and then delete the parent directory. 
> Interestingly, if the parent directory does not have a snapshot, the tests 
> will not fail. Additionally, if the parent directory is not deleted, the 
> tests will not fail.
> The following test will fail intermittently:
> {code:java}
> public void testDeleteParentDirWithSnapShot() throws Exception {
> Path path = new Path("/test");
> fs.mkdirs(path);
> 

[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-01-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089877#comment-15089877
 ] 

Andrew Wang commented on HDFS-1312:
---

Hi Anu, some replies:

bq. Generally administrators are wary of enabling a feature like HDFS-1804 in a 
production cluster. For new clusters it is easier, but for existing 
production clusters assuming the existence of HDFS-1804 is not realistic.

I don't follow this line of reasoning; don't concerns about using a new feature 
apply to a hypothetical HDFS-1312 implementation too?

HDFS-1804 also was fixed in 2.1.0, so almost everyone should have it available. 
It's also been in use for years, so it's pretty stable.

bq. we do lose one of the critical features of the tool, that is the ability to 
report what we did to the machine

Why do we lose this? Can't the DN dump this somewhere?

bq. We wanted to merge mover into this engine later...

This is an interesting point I was not aware of. Is the goal here to do 
inter-DN moving? If so, we have a long-standing issue with inter-DN balancing, 
which is that the balancer as an external process is not aware of the NN's 
block placement policies, leading to placement violations. This is something 
[~mingma] and [~ctrezzo] brought up; if we're doing a rewrite of this 
functionality, it should probably be in the NN.

If it's only for intra-DN moving, then it could still live in the DN.

bq. Two issues with that, one there are lots of customers without HDFS-1804, 
and HDFS-1804 is just an option that user can choose.

Almost everyone is running a version of HDFS with HDFS-1804 these days. As I 
said in my previous comment, if a cluster is commonly hitting imbalance, 
enabling HDFS-1804 should be the first step since a) it's already available and 
b) it avoids the imbalance in the first place, which better conserves IO 
bandwidth.

This is also why I brought up HDFS-8538. If HDFS-1804 is the default volume 
choosing policy, we won't see imbalance outside of hotswap.

bq. Getting an alert due to low space on disk from a datanode is very 
reactive; it is a common enough problem that I think it should be solved at 
the HDFS level.

The point I was trying to make is that HDFS-1804 addresses the imbalance issues 
besides hotswap, so we eliminate the alerts in the first place. Hotswap is an 
operation explicitly undertaken by the admin, so the admin will know to also run 
the intra-DN balancer. There's no monitoring system in the loop.

bq. I prefer to debug by looking at my local directory instead of ssh-ing into 
a datanode...

This is an aspirational goal, but when debugging a prod cluster we almost 
certainly also want to see the DN log too, which is local to the DN. Cluster 
management systems also make log collection pretty easy, so this seems minor.

Would it help to have a phone call about this? We have a lot of points flying 
around; it might be easier to settle this via a higher-bandwidth medium.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_testplan.pdf, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as disks are 
> added/replaced in older cluster nodes.
> Datanodes should actively manage their local disk so operator intervention is 
> not needed.
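The first proposal in the description (weighting less-used disks more heavily when placing new blocks) can be sketched as a toy weighted chooser. This is an illustrative sketch only; the class and method names are hypothetical and this is not HDFS's actual volume-choosing policy:

```java
import java.util.Random;

// Toy available-space-weighted volume chooser: volumes with more free
// space are proportionally more likely to receive the next block.
// Names here are illustrative, not real HDFS policy classes.
public class WeightedVolumeChooser {
    private final Random random;

    public WeightedVolumeChooser(long seed) {
        this.random = new Random(seed);
    }

    /** Returns the index of the chosen volume, weighted by free bytes. */
    public int choose(long[] freeBytes) {
        long total = 0;
        for (long f : freeBytes) {
            total += f;
        }
        if (total <= 0) {
            throw new IllegalStateException("no free space on any volume");
        }
        // Pick a point in [0, total); the volume whose free-space share
        // covers that point is chosen.
        long pick = (long) (random.nextDouble() * total);
        for (int i = 0; i < freeBytes.length; i++) {
            if (pick < freeBytes[i]) {
                return i;
            }
            pick -= freeBytes[i];
        }
        return freeBytes.length - 1;  // guard against rounding
    }
}
```

Over many placements this equalizes fill rates while still using every spindle, which is the trade-off the description calls out.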



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9626) TestBlockReplacement#testBlockReplacement fails occasionally

2016-01-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9626:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks Xiao for confirming this. I just committed the patch to trunk, branch-2, 
and branch-2.8.

> TestBlockReplacement#testBlockReplacement fails occasionally
> 
>
> Key: HDFS-9626
> URL: https://issues.apache.org/jira/browse/HDFS-9626
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9626.01.patch
>
>
> testBlockPlacement sometimes fails in test case 4 in {{checkBlocks}}. I'll 
> post the detailed error in a comment.
> Thanks [~jojochuang] for helping identify the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9627) libhdfs++: Add a mechanism to retrieve human readable error messages through the C API

2016-01-08 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089783#comment-15089783
 ] 

Bob Hansen commented on HDFS-9627:
--

+1

> libhdfs++: Add a mechanism to retrieve human readable error messages through 
> the C API
> --
>
> Key: HDFS-9627
> URL: https://issues.apache.org/jira/browse/HDFS-9627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9627.HDFS-8707.000.patch, 
> HDFS-9627.HDFS-8707.000.patch, HDFS-9627.HDFS-8707.001.patch
>
>
> Libhdfs doesn't have this but libhdfs3 has a "hdfsGetLastErrorString" 
> function.  The C API needs to be able to pass out error messages that are 
> more specific than what errno can provide.
> This functionality should be exposed via a new public header in order to keep 
> hdfs.h consistent with the libhdfs header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9627) libhdfs++: Add a mechanism to retrieve human readable error messages through the C API

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089883#comment-15089883
 ] 

Hadoop QA commented on HDFS-9627:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
4s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 7s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 2s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 44s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 55s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 43s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781269/HDFS-9627.HDFS-8707.001.patch
 |
| JIRA Issue | HDFS-9627 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 0e606c6bf50b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 2f20790 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14073/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max memory used | 75MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14073/console |


This message was automatically generated.



> libhdfs++: Add a mechanism to retrieve human readable error messages through 
> the C API
> --
>
> Key: HDFS-9627
> URL: 

[jira] [Updated] (HDFS-9522) Cleanup o.a.h.hdfs.protocol.SnapshotDiffReport

2016-01-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9522:
-
Attachment: HDFS-9522-005.patch

Patch 005:
* Use subclass hierarchy instead of C-style tagged union
  for different types of diff report entries.
* Fix DiffReportEntry.hashCode to include field 'type' when
  generating the hash code. (HDFS-9573)
* Add unit test testDiffReportWithDeleteCreateSameName and
  testDiffReportWithCircularRenames to TestSnapshotDiffReport.
* Rename field 'fullpath' to 'sourcePath' in SnapshotDiffReportEntryProto.
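The tagged-union-to-subclass change in patch 005 can be illustrated with a minimal sketch. The names below are hypothetical, not the actual HDFS-9522 classes:

```java
// Before: one class carries a 'type' tag plus fields only some kinds use.
// After: a small hierarchy where each entry kind carries exactly its data.
// Illustrative names only, not the real SnapshotDiffReport classes.
public abstract class DiffEntry {
    private final byte[] sourcePath;

    protected DiffEntry(byte[] sourcePath) {
        this.sourcePath = sourcePath;
    }

    public byte[] getSourcePath() {
        return sourcePath;
    }

    /** Create/delete/modify entries need only a source path. */
    public static final class Modify extends DiffEntry {
        public Modify(byte[] sourcePath) {
            super(sourcePath);
        }
    }

    /** Rename entries additionally carry a target path. */
    public static final class Rename extends DiffEntry {
        private final byte[] targetPath;

        public Rename(byte[] sourcePath, byte[] targetPath) {
            super(sourcePath);
            this.targetPath = targetPath;
        }

        public byte[] getTargetPath() {
            return targetPath;
        }
    }
}
```

Callers then dispatch with instanceof instead of switching on a type tag, and the compiler prevents reading a target path off a non-rename entry.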

> Cleanup o.a.h.hdfs.protocol.SnapshotDiffReport
> --
>
> Key: HDFS-9522
> URL: https://issues.apache.org/jira/browse/HDFS-9522
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-9522-001.patch, HDFS-9522-002.patch, 
> HDFS-9522-003.patch, HDFS-9522-004.patch, HDFS-9522-005.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current DiffReportEntry is a C-style tagged-union-like data structure.  
> Recommend a subclass hierarchy, as is idiomatic in Java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9631) Restarting namenode after deleting a directory with snapshot will fail

2016-01-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089940#comment-15089940
 ] 

Wei-Chiu Chuang commented on HDFS-9631:
---

Yes. Like this recent one: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2704/testReport/org.apache.hadoop.hdfs.server.namenode.snapshot/TestOpenFilesWithSnapshot/testParentDirWithUCFileDeleteWithSnapShot/

> Restarting namenode after deleting a directory with snapshot will fail
> --
>
> Key: HDFS-9631
> URL: https://issues.apache.org/jira/browse/HDFS-9631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> I found a number of {{TestOpenFilesWithSnapshot}} tests failed quite 
> frequently. 
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot
> Error Message:
> Timed out waiting for Mini HDFS Cluster to start
> Stack Trace:
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1985)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot(TestOpenFilesWithSnapshot.java:82)
> {noformat}
> These tests ({{testParentDirWithUCFileDeleteWithSnapshot}}, 
> {{testOpenFilesWithRename}}, {{testWithCheckpoint}}) are unable to reconnect 
> to the namenode after restart. It looks like the reconnection failed due to 
> an EOFException when BPServiceActor sends a heartbeat.
> {noformat}
> 2016-01-07 23:25:43,678 [main] WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(1338)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2016-01-07 23:25:44,679 [main] WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(1338)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2016-01-07 23:25:44,720 [DataNode: 
> [[[DISK]file:/home/weichiu/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/,
>  [DISK]file:
> /home/weichiu/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]
>   heartbeating to localhost/127.0.0.1:60472] WARN  datanode
> .DataNode (BPServiceActor.java:offerService(752)) - IOException in 
> offerService
> java.io.EOFException: End of File Exception between local host is: 
> "weichiu.vpc.cloudera.com/172.28.211.219"; destination host is: 
> "localhost":60472; For more details see:  http://wiki.apache.org/hadoop/EOFException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:793)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:766)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1385)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy18.sendHeartbeat(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:154)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:557)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:660)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:851)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1110)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1005)
> {noformat}
> It appears that these three tests all call {{doWriteAndAbort()}}, which 
> creates files and then abort, and then set the parent directory with a 
> snapshot, and then delete the parent directory. 
> Interestingly, if the parent directory does not have a snapshot, the tests 
> will not fail. Additionally, if the parent directory is not deleted, the 
> tests will not fail.
> The following test will fail 

[jira] [Updated] (HDFS-9626) TestBlockReplacement#testBlockReplacement fails occasionally

2016-01-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9626:

Target Version/s: 2.8.0
Priority: Minor  (was: Major)
 Component/s: test

> TestBlockReplacement#testBlockReplacement fails occasionally
> 
>
> Key: HDFS-9626
> URL: https://issues.apache.org/jira/browse/HDFS-9626
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-9626.01.patch
>
>
> testBlockPlacement sometimes fails in test case 4 in {{checkBlocks}}. I'll 
> post the detailed error in a comment.
> Thanks [~jojochuang] for helping identify the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9626) TestBlockReplacement#testBlockReplacement fails occasionally

2016-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089990#comment-15089990
 ] 

Hudson commented on HDFS-9626:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9074 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9074/])
HDFS-9626. TestBlockReplacement#testBlockReplacement fails occasionally. (zhz: 
rev 0af2022e6d431e746301086980134730d4287cc7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReplacement.java


> TestBlockReplacement#testBlockReplacement fails occasionally
> 
>
> Key: HDFS-9626
> URL: https://issues.apache.org/jira/browse/HDFS-9626
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9626.01.patch
>
>
> testBlockPlacement sometimes fails in test case 4 in {{checkBlocks}}. I'll 
> post the detailed error in a comment.
> Thanks [~jojochuang] for helping identify the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9630) DistCp minor refactoring and clean up

2016-01-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9630:

Attachment: HDFS-9630-v2.patch

Thanks Kai! Patch LGTM except for the very minor new checkstyle issue. 
Attaching v2 to address it. +1 pending a new Jenkins run.

> DistCp minor refactoring and clean up
> -
>
> Key: HDFS-9630
> URL: https://issues.apache.org/jira/browse/HDFS-9630
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9630-v1.patch, HDFS-9630-v2.patch
>
>
> While working on HDFS-9613, it was found that there are various checkstyle 
> issues and minor things to clean up in {{DistCp}}. Better to handle them 
> separately so the fixes can land earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9534) Add CLI command to clear storage policy from a path.

2016-01-08 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9534:

Attachment: HDFS-9534.001.patch

Posted patch V001, kindly review, thanks.

> Add CLI command to clear storage policy from a path.
> 
>
> Key: HDFS-9534
> URL: https://issues.apache.org/jira/browse/HDFS-9534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Chris Nauroth
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9534.001.patch
>
>
> The {{hdfs storagepolicies}} command has sub-commands for 
> {{-setStoragePolicy}} and {{-getStoragePolicy}} on a path.  However, there is 
> no {{-removeStoragePolicy}} to remove a previously set storage policy on a 
> path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9534) Add CLI command to clear storage policy from a path.

2016-01-08 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090255#comment-15090255
 ] 

Xiaobing Zhou commented on HDFS-9534:
-

At a high level, the implementation adds a removeStoragePolicy RPC call that 
sets the storage policy to the newly added UNSPECIFIED_STORAGE_POLICY.
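A rough model of those semantics, assuming "remove" stores an UNSPECIFIED id so the effective policy falls back to the nearest ancestor with an explicit policy (a toy sketch; all names and ids below are illustrative, not the actual patch):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: removing a storage policy stores a special UNSPECIFIED id,
// so the effective policy is inherited from the nearest explicit ancestor.
// Illustrative only, not the real HDFS-9534 implementation.
public class PolicyModel {
    public static final byte UNSPECIFIED = 0;
    private final Map<String, Byte> policies = new HashMap<>();

    public void setPolicy(String path, byte id) {
        policies.put(path, id);
    }

    /** "Remove" sets UNSPECIFIED rather than deleting the entry. */
    public void removePolicy(String path) {
        policies.put(path, UNSPECIFIED);
    }

    /** Walk up toward the root until an explicit policy is found. */
    public byte effectivePolicy(String path) {
        for (String p = path; !p.isEmpty(); p = parent(p)) {
            byte id = policies.getOrDefault(p, UNSPECIFIED);
            if (id != UNSPECIFIED) {
                return id;
            }
        }
        return UNSPECIFIED;  // cluster default applies
    }

    private static String parent(String path) {
        int slash = path.lastIndexOf('/');
        return slash <= 0 ? "" : path.substring(0, slash);
    }
}
```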

> Add CLI command to clear storage policy from a path.
> 
>
> Key: HDFS-9534
> URL: https://issues.apache.org/jira/browse/HDFS-9534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Chris Nauroth
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9534.001.patch
>
>
> The {{hdfs storagepolicies}} command has sub-commands for 
> {{-setStoragePolicy}} and {{-getStoragePolicy}} on a path.  However, there is 
> no {{-removeStoragePolicy}} to remove a previously set storage policy on a 
> path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9522) Cleanup o.a.h.hdfs.protocol.SnapshotDiffReport

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090265#comment-15090265
 ] 

Hadoop QA commented on HDFS-9522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 1s 
{color} | {color:red} Patch generated 1 new checkstyle issues in root (total 
was 69, now 66). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client introduced 1 new 
FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 32s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 50s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 30s 
{color} | {color:green} hadoop-distcp in 

[jira] [Commented] (HDFS-9630) DistCp minor refactoring and clean up

2016-01-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090313#comment-15090313
 ] 

Kai Zheng commented on HDFS-9630:
-

Thanks Zhe for the review! Your update looks all green!

> DistCp minor refactoring and clean up
> -
>
> Key: HDFS-9630
> URL: https://issues.apache.org/jira/browse/HDFS-9630
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-9630-v1.patch, HDFS-9630-v2.patch
>
>
> While working on HDFS-9613, it was found that there are various checkstyle 
> issues and minor things to clean up in {{DistCp}}. Better to handle them 
> separately so the fixes can land earlier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9493) Test o.a.h.hdfs.server.namenode.TestMetaSave fails in trunk

2016-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090232#comment-15090232
 ] 

Hudson commented on HDFS-9493:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9075 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9075/])
HDFS-9493. Test o.a.h.hdfs.server.namenode.TestMetaSave fails in trunk.  (lei: 
rev fd8065a763ff68db265ef23a7d4f97558e213ef9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestMetaSave.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Test o.a.h.hdfs.server.namenode.TestMetaSave fails in trunk
> ---
>
> Key: HDFS-9493
> URL: https://issues.apache.org/jira/browse/HDFS-9493
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Tony Wu
> Fix For: 2.8.0
>
> Attachments: HDFS-9493.001.patch, HDFS-9493.002.patch, 
> HDFS-9493.003.patch
>
>
> Tested in both Gentoo Linux and Mac.
> {quote}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hdfs.server.namenode.TestMetaSave
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.159 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestMetaSave
> testMetasaveAfterDelete(org.apache.hadoop.hdfs.server.namenode.TestMetaSave)  
> Time elapsed: 15.318 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestMetaSave.testMetasaveAfterDelete(TestMetaSave.java:154)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9630) DistCp minor refactoring and clean up

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090238#comment-15090238
 ] 

Hadoop QA commented on HDFS-9630:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 58s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 29s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781324/HDFS-9630-v2.patch |
| JIRA Issue | HDFS-9630 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 49824be19cae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / fd8065a |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 

[jira] [Updated] (HDFS-9534) Add CLI command to clear storage policy from a path.

2016-01-08 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9534:

Status: Patch Available  (was: Open)

> Add CLI command to clear storage policy from a path.
> 
>
> Key: HDFS-9534
> URL: https://issues.apache.org/jira/browse/HDFS-9534
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Chris Nauroth
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9534.001.patch
>
>
> The {{hdfs storagepolicies}} command has sub-commands for 
> {{-setStoragePolicy}} and {{-getStoragePolicy}} on a path.  However, there is 
> no {{-removeStoragePolicy}} to remove a previously set storage policy on a 
> path.





[jira] [Commented] (HDFS-9395) getContentSummary is audit logged as success even if failed

2016-01-08 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090132#comment-15090132
 ] 

Kuhu Shukla commented on HDFS-9395:
---

[~cmccabe], could you share any inputs you might have on this? Thanks a lot.

> getContentSummary is audit logged as success even if failed
> ---
>
> Key: HDFS-9395
> URL: https://issues.apache.org/jira/browse/HDFS-9395
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kuhu Shukla
>
> Audit logging is in the finally block along with the lock unlocking, so the 
> operation is always logged as success, even when an exception such as 
> FileNotFoundException is thrown.





[jira] [Commented] (HDFS-9576) HTrace: collect position/length information on read operations

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090163#comment-15090163
 ] 

Hadoop QA commented on HDFS-9576:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs-client (total was 135, now 135). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781310/HDFS-9576.05.patch |
| JIRA Issue | HDFS-9576 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7e7d85f5a625 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Updated] (HDFS-9576) HTrace: collect position/length information on read operations

2016-01-08 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9576:

Attachment: HDFS-9576.05.patch

Thanks Xiao! Updating patch to fix the checkstyle issue.

> HTrace: collect position/length information on read operations
> --
>
> Key: HDFS-9576
> URL: https://issues.apache.org/jira/browse/HDFS-9576
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, tracing
>Affects Versions: 2.7.1
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-9576.00.patch, HDFS-9576.01.patch, 
> HDFS-9576.02.patch, HDFS-9576.03.patch, HDFS-9576.04.patch, HDFS-9576.05.patch
>
>
> HTrace currently collects the path of each read operation (both stateful and 
> position reads). To better understand applications' I/O behavior, it is also 
> useful to track the position and length of read operations.





[jira] [Commented] (HDFS-9626) TestBlockReplacement#testBlockReplacement fails occasionally

2016-01-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090111#comment-15090111
 ] 

Xiao Chen commented on HDFS-9626:
-

Thank you Zhe for the review and commit!

> TestBlockReplacement#testBlockReplacement fails occasionally
> 
>
> Key: HDFS-9626
> URL: https://issues.apache.org/jira/browse/HDFS-9626
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9626.01.patch
>
>
> testBlockReplacement sometimes fails in test case 4 in {{checkBlocks}}. I'll 
> post the detailed error in a comment.
> Thanks [~jojochuang] for helping identify the issue.





[jira] [Commented] (HDFS-9631) Restarting namenode after deleting a directory with snapshot will fail

2016-01-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090131#comment-15090131
 ] 

Kihwal Lee commented on HDFS-9631:
--

bq. IOException in offerService  java.io.EOFException: End of File Exception 
between...
This exception is fine; the DN's RPC to the NN didn't finish because of the 
restart. According to the log, the DN re-registered successfully and sent a full 
block report well before the mini DFS cluster was shut down. It looks like the 
namenode was up but may have been stuck in safe mode, or taking a long time to 
get out of it. It's not the first time snapshots have caused this kind of issue.

> Restarting namenode after deleting a directory with snapshot will fail
> --
>
> Key: HDFS-9631
> URL: https://issues.apache.org/jira/browse/HDFS-9631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> I found that a number of {{TestOpenFilesWithSnapshot}} tests fail quite 
> frequently.
> {noformat}
> FAILED:  
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot
> Error Message:
> Timed out waiting for Mini HDFS Cluster to start
> Stack Trace:
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1985)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testParentDirWithUCFileDeleteWithSnapShot(TestOpenFilesWithSnapshot.java:82)
> {noformat}
> These tests ({{testParentDirWithUCFileDeleteWithSnapshot}}, 
> {{testOpenFilesWithRename}}, {{testWithCheckpoint}}) are unable to reconnect 
> to the namenode after restart. It looks like the reconnection failed due to 
> an EOFException when BPServiceActor sends a heartbeat.
> {noformat}
> 2016-01-07 23:25:43,678 [main] WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(1338)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2016-01-07 23:25:44,679 [main] WARN  hdfs.MiniDFSCluster 
> (MiniDFSCluster.java:waitClusterUp(1338)) - Waiting for the Mini HDFS Cluster 
> to start...
> 2016-01-07 23:25:44,720 [DataNode: 
> [[[DISK]file:/home/weichiu/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/,
>  [DISK]file:
> /home/weichiu/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data2/]]
>   heartbeating to localhost/127.0.0.1:60472] WARN  datanode
> .DataNode (BPServiceActor.java:offerService(752)) - IOException in 
> offerService
> java.io.EOFException: End of File Exception between local host is: 
> "weichiu.vpc.cloudera.com/172.28.211.219"; destination host is: 
> "localhost":6047
> 2; :; For more details see:  http://wiki.apache.org/hadoop/EOFException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:793)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:766)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1385)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
> at com.sun.proxy.$Proxy18.sendHeartbeat(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:154)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:557)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:660)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:851)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1110)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1005)
> {noformat}
> It appears that these three tests all call {{doWriteAndAbort()}}, which 
> creates files and then abort, and then set the parent directory with a 

[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings

2016-01-08 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090130#comment-15090130
 ] 

James Clampffer commented on HDFS-9628:
---

A couple of comments; otherwise it looks good to me.

-In hdfsConfStrFree you don't need a null check before calling free; free does 
that anyway.

-Valgrind failed for me, but it looks like a simple fix:
{code}
==23607== Command: ./hdfs_builder_test
==23607== 
Running main() from gmock_main.cc
[==] Running 3 tests from 1 test case.
[--] Global test environment set-up.
[--] 3 tests from HdfsBuilderTest
[ RUN  ] HdfsBuilderTest.TestStubBuilder
[   OK ] HdfsBuilderTest.TestStubBuilder (35 ms)
[ RUN  ] HdfsBuilderTest.TestRead
==23607== Warning: invalid file descriptor -1 in syscall close()
==23607== Warning: invalid file descriptor -1 in syscall close()
[   OK ] HdfsBuilderTest.TestRead (48 ms)
[ RUN  ] HdfsBuilderTest.TestSet
[   OK ] HdfsBuilderTest.TestSet (10 ms)
[--] 3 tests from HdfsBuilderTest (102 ms total)

[--] Global test environment tear-down
[==] 3 tests from 1 test case ran. (128 ms total)
[  PASSED  ] 3 tests.
{code}

-Unlikely to happen but 
{code}
TempFile(const std::string & fn) : filename(fn), tempFileHandle(-1) {
  strncpy(fn_buffer, fn.c_str(), sizeof(fn_buffer));
}
{code}
If the length of fn is greater than or equal to sizeof(fn_buffer), strncpy won't 
null-terminate the copied string.
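Both review points can be demonstrated in a small standalone C++ fragment. This is illustrative only, not the libhdfs++ patch code: `bounded_copy` and the buffer size are invented names/values standing in for the TempFile constructor's copy into `fn_buffer`.

```cpp
#include <cstdlib>
#include <cstring>

// Point 1: free() accepts a null pointer and does nothing, so no null
// check is needed before calling it.
void demo_free_null() {
  char *p = nullptr;
  std::free(p);  // well-defined no-op per the C standard
}

// Point 2: strncpy does not null-terminate when the source is at least
// as long as the destination buffer. Forcing a terminator after the
// copy fixes that; returns the resulting (possibly truncated) length.
std::size_t bounded_copy(char *buf, std::size_t n, const char *fn) {
  std::strncpy(buf, fn, n);
  buf[n - 1] = '\0';  // the fix: guarantee termination
  return std::strlen(buf);
}
```

Copying a 20-character name into an 8-byte buffer with plain strncpy would leave it unterminated; with the forced terminator the result is a safe 7-character truncation.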


> libhdfs++: Implement builder apis from C bindings
> -
>
> Key: HDFS-9628
> URL: https://issues.apache.org/jira/browse/HDFS-9628
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9628.HDFS-8707.000.patch, 
> HDFS-9628.HDFS-8707.001.patch
>
>






[jira] [Updated] (HDFS-9493) Test o.a.h.hdfs.server.namenode.TestMetaSave fails in trunk

2016-01-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-9493:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1. Thanks Tony.

Committed to trunk and branch-2

> Test o.a.h.hdfs.server.namenode.TestMetaSave fails in trunk
> ---
>
> Key: HDFS-9493
> URL: https://issues.apache.org/jira/browse/HDFS-9493
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Tony Wu
> Fix For: 2.8.0
>
> Attachments: HDFS-9493.001.patch, HDFS-9493.002.patch, 
> HDFS-9493.003.patch
>
>
> Tested in both Gentoo Linux and Mac.
> {quote}
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hdfs.server.namenode.TestMetaSave
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.159 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.TestMetaSave
> testMetasaveAfterDelete(org.apache.hadoop.hdfs.server.namenode.TestMetaSave)  
> Time elapsed: 15.318 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestMetaSave.testMetasaveAfterDelete(TestMetaSave.java:154)
> {quote}





[jira] [Updated] (HDFS-9627) libhdfs++: Add a mechanism to retrieve human readable error messages through the C API

2016-01-08 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9627:
--
Attachment: HDFS-9627.HDFS-8707.001.patch

-renamed headers to avoid conflict, left hdfs_ext in include/libhdfspp for now. 
[~bobhansen] please let me know what you think about this.

-test now uses EXPECT_EQ rather than EXPECT_TRUE(foo == bar)

-added LIBHDFS_EXTERNAL modifier to hdfsGetLastError.  hdfs.h explicitly undefs 
that macro at the end which is a bit of a bummer.  Copied the definition into 
hdfs_ext.h and added a comment explaining.  Still including hdfs.h in case 
other bits want to use the same typedefs like hdfsFS, tSize etc.  If the 
copy/paste of the macro definition looks like a maintenance issue it might be 
easier just to strip it out.  It isn't changing anything on unix systems; not 
sure if it makes a difference on windows.

-declared hdfsGetLastError as extern "C"

> libhdfs++: Add a mechanism to retrieve human readable error messages through 
> the C API
> --
>
> Key: HDFS-9627
> URL: https://issues.apache.org/jira/browse/HDFS-9627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9627.HDFS-8707.000.patch, 
> HDFS-9627.HDFS-8707.000.patch, HDFS-9627.HDFS-8707.001.patch
>
>
> Libhdfs doesn't have this but libhdfs3 has a "hdfsGetLastErrorString" 
> function.  The C API needs to be able to pass out error messages that are 
> more specific than what errno can provide.
> This functionality should be exposed via a new public header in order to keep 
> hdfs.h consistent with the libhdfs header.





[jira] [Updated] (HDFS-9624) DataNode start slowly due to the initial DU command operations

2016-01-08 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9624:

Attachment: HDFS-9624.004.patch

[~drankye], thanks for the comments. I refactored the test cases and updated the 
latest patch.

> DataNode start slowly due to the initial DU command operations
> --
>
> Key: HDFS-9624
> URL: https://issues.apache.org/jira/browse/HDFS-9624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9624.001.patch, HDFS-9624.002.patch, 
> HDFS-9624.003.patch, HDFS-9624.004.patch
>
>
> The DataNode starts very slowly after I finished migrating the datanodes and 
> restarted them. Looking at the DN logs:
> {code}
> 2016-01-06 16:05:08,118 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> new volume: DS-70097061-42f8-4c33-ac27-2a6ca21e60d4
> 2016-01-06 16:05:08,118 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
> volume - /home/data/data/hadoop/dfs/data/data12/current, StorageType: DISK
> 2016-01-06 16:05:08,176 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: 
> Registered FSDatasetState MBean
> 2016-01-06 16:05:08,177 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544
> 2016-01-06 16:05:08,178 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data2/current...
> 2016-01-06 16:05:08,179 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data3/current...
> 2016-01-06 16:05:08,179 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data4/current...
> 2016-01-06 16:05:08,179 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data5/current...
> 2016-01-06 16:05:08,180 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data6/current...
> 2016-01-06 16:05:08,180 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data7/current...
> 2016-01-06 16:05:08,180 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data8/current...
> 2016-01-06 16:05:08,180 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data9/current...
> 2016-01-06 16:05:08,181 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data10/current...
> 2016-01-06 16:05:08,181 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data11/current...
> 2016-01-06 16:05:08,181 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning 
> block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on volume 
> /home/data/data/hadoop/dfs/data/data12/current...
> 2016-01-06 16:09:49,646 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on 
> /home/data/data/hadoop/dfs/data/data7/current: 281466ms
> 2016-01-06 16:09:54,235 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on 
> /home/data/data/hadoop/dfs/data/data9/current: 286054ms
> 2016-01-06 16:09:57,859 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool BP-1942012336-xx.xx.xx.xx-1406726500544 on 
> /home/data/data/hadoop/dfs/data/data2/current: 289680ms
> 2016-01-06 16:10:00,333 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time 
> taken to scan block pool 

[jira] [Updated] (HDFS-9627) libhdfs++: Add a mechanism to retrieve human readable error messages through the C API

2016-01-08 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9627:
-
Attachment: HDFS-9627.HDFS-8707.000.patch

Resubmitting for yetus

> libhdfs++: Add a mechanism to retrieve human readable error messages through 
> the C API
> --
>
> Key: HDFS-9627
> URL: https://issues.apache.org/jira/browse/HDFS-9627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9627.HDFS-8707.000.patch, 
> HDFS-9627.HDFS-8707.000.patch
>
>
> Libhdfs doesn't have this but libhdfs3 has a "hdfsGetLastErrorString" 
> function.  The C API needs to be able to pass out error messages that are 
> more specific than what errno can provide.
> This functionality should be exposed via a new public header in order to keep 
> hdfs.h consistent with the libhdfs header.





[jira] [Commented] (HDFS-9621) getListing wrongly associates Erasure Coding policy to pre-existing replicated files under an EC directory

2016-01-08 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090071#comment-15090071
 ] 

Zhe Zhang commented on HDFS-9621:
-

Thanks Jing for the fix. The patch LGTM. A nit on the Javadoc:
{code}
-   * @param src The string representation of the path to the file
+   * @param iip The path to the file, the file is included
{code}

> getListing wrongly associates Erasure Coding policy to pre-existing 
> replicated files under an EC directory  
> 
>
> Key: HDFS-9621
> URL: https://issues.apache.org/jira/browse/HDFS-9621
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Sushmitha Sreenivasan
>Assignee: Jing Zhao
>Priority: Critical
> Attachments: HDFS-9621.000.patch, HDFS-9621.001.patch, 
> HDFS-9621.002.patch
>
>
> This is reported by [~ssreenivasan]:
> If we set Erasure Coding policy to a directory which contains some files with 
> replicated blocks, later when listing files under the directory these files 
> will be reported as EC files. 





[jira] [Commented] (HDFS-1312) Re-balance disks within a Datanode

2016-01-08 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090085#comment-15090085
 ] 

Anu Engineer commented on HDFS-1312:


Hi [~andrew.wang],

Thanks for your comments. Here are my thoughts on these issues.

bq. I don't follow this line of reasoning; don't concerns about using a new 
feature apply to a hypothetical HDFS-1312 implementation too?

I think it comes down to risk. Consider the worst-case scenarios possible with 
HDFS-1804 and HDFS-1312. HDFS-1804 is a cluster-wide change: it is always on and 
every write goes through it, so it can have a cluster-wide impact, including an 
impact on the various workloads running in the cluster.

With HDFS-1312, however, the worst case is that we take a node offline, since it 
is an external tool that operates offline on a single node. Another important 
difference is that it is not always on: it runs and goes away. So the amount of 
risk to the cluster, especially from an administrator's point of view, differs 
between the two approaches.

bq. Why do we lose this? Can't the DN dump this somewhere?

We can, but then we need to add RPCs to the datanode to pull out that data and 
display the change on the node, whereas in the current approach it is something 
that we write to the local disk and then diff later against the sources. We 
don't need a datanode operation.

bq. This is an interesting point I was not aware of. Is the goal here to do 
inter-DN moving? 
No, the goal is *intra-DN*; I was referring to {noformat}hdfs mover{noformat}, 
not to {noformat}hdfs balancer{noformat}.

bq. If it's only for intra-DN moving, then it could still live in the DN.

Completely agree; all block-moving code will be in the DN.

bq. This is also why I brought up HDFS-8538. If HDFS-1804 is the default volume 
choosing policy, we won't see imbalance outside of hotswap.

Agree, and it is a goal that we should work towards. From the comments in 
HDFS-8538, it looks like we might have to make some minor tweaks to that before 
we can commit it. I can look at it after HDFS-1312.

bq. The point I was trying to make is that HDFS-1804 addresses the imbalance 
issues besides hotswap, so we eliminate the alerts in the first place. Hotswap 
is an operation explicitly undertaken by the admin, so the admin will know to 
also run the intra-DN balancer.

Since we both have made this point many times, I am going to agree with what 
you are saying. Even if we assume that hotswap or normal swap is the only use 
case for disk balancing, in a large cluster many disks will have failed. So 
when a cluster gets a number of disks replaced, the current interface makes 
admins' lives easier: they can replace a bunch of disks on various machines and 
ask the system to find and fix those nodes. I just think the interface we are 
building makes the life of admins easier, and takes nothing away from the use 
cases you described.

bq. This is an aspirational goal, but when debugging a prod cluster we almost 
certainly also want to see the DN log too

Right now we have actually met that aspirational goal: we capture a snapshot 
of the node, and that allows us to both debug and simulate what is happening 
with disk-balancer off-line.

bq. Would it help to have a phone call about this? We have a lot of points 
flying around, might be easier to settle this via a higher-bandwidth medium.

I think that is an excellent idea; I would love to chat with you in person. I 
will set up a meeting and post the meeting info in this JIRA.

I really appreciate your input and the thoughtful discussion we are having; I 
hope to speak to you in person soon.

> Re-balance disks within a Datanode
> --
>
> Key: HDFS-1312
> URL: https://issues.apache.org/jira/browse/HDFS-1312
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Reporter: Travis Crawford
>Assignee: Anu Engineer
> Attachments: Architecture_and_testplan.pdf, disk-balancer-proposal.pdf
>
>
> Filing this issue in response to ``full disk woes`` on hdfs-user.
> Datanodes fill their storage directories unevenly, leading to situations 
> where certain disks are full while others are significantly less used. Users 
> at many different sites have experienced this issue, and HDFS administrators 
> are taking steps like:
> - Manually rebalancing blocks in storage directories
> - Decommissioning nodes & later re-adding them
> There's a tradeoff between making use of all available spindles and filling 
> disks at roughly the same rate. Possible solutions include:
> - Weighting less-used disks heavier when placing new blocks on the datanode. 
> In write-heavy environments this will still make use of all spindles, 
> equalizing disk use over time.
> - Rebalancing blocks locally. This would help equalize disk use as 

[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings

2016-01-08 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9628:
-
Attachment: HDFS-9628.HDFS-8707.002.patch

New patch: 
* made keys case-insensitive
* fixed valgrind close handle messages
* fixed TempFile string termination

> libhdfs++: Implement builder apis from C bindings
> -
>
> Key: HDFS-9628
> URL: https://issues.apache.org/jira/browse/HDFS-9628
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9628.HDFS-8707.000.patch, 
> HDFS-9628.HDFS-8707.001.patch, HDFS-9628.HDFS-8707.002.patch
>
>






[jira] [Commented] (HDFS-9628) libhdfs++: Implement builder apis from C bindings

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090426#comment-15090426
 ] 

Hadoop QA commented on HDFS-9628:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
58s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 4s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 3s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 48s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 51s {color} 
| {color:red} hadoop-hdfs-native-client in the patch failed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781369/HDFS-9628.HDFS-8707.002.patch
 |
| JIRA Issue | HDFS-9628 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 1952ca878854 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 2f20790 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14080/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14080/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.8.0_66.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14080/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client-jdk1.7.0_91.txt
 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14080/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| 

[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2016-01-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090348#comment-15090348
 ] 

Ted Yu commented on HDFS-7101:
--

Looks like the patch needs to be updated.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws an exception, lastLine may be null, leading to an NPE.
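A null-safe restructuring of the quoted loop might look like the following sketch. Only the loop and the {{lastLine}}/{{errCode}} names come from the snippet above; the wrapper class and method here are hypothetical, not the actual DFSck code.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class DoWorkSketch {
  // Guard lastLine before dereferencing it: if the stream was empty or
  // readLine() returned nothing, lastLine stays null and errCode stays -1.
  static int errCode(BufferedReader input, String healthyMarker) throws IOException {
    String line;
    String lastLine = null;
    int errCode = -1;
    while ((line = input.readLine()) != null) {
      lastLine = line;
    }
    if (lastLine != null && lastLine.endsWith(healthyMarker)) {
      errCode = 0;
    }
    return errCode;
  }

  public static void main(String[] args) throws IOException {
    // Last line ends with the marker: healthy, code 0.
    System.out.println(errCode(new BufferedReader(
        new StringReader("fsck start\nThe filesystem is HEALTHY")), "HEALTHY"));  // 0
    // Empty stream: lastLine is never assigned, no NPE, errCode stays -1.
    System.out.println(errCode(new BufferedReader(new StringReader("")), "HEALTHY"));  // -1
  }
}
```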





[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090359#comment-15090359
 ] 

Hadoop QA commented on HDFS-7101:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HDFS-7101 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12671192/HDFS-7101_001.patch |
| JIRA Issue | HDFS-7101 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14079/console |


This message was automatically generated.



> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws an exception, lastLine may be null, leading to an NPE.





[jira] [Commented] (HDFS-9534) Add CLI command to clear storage policy from a path.

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090396#comment-15090396
 ] 

Hadoop QA commented on HDFS-9534:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 8m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s 
{color} | {color:red} Patch generated 7 new checkstyle issues in root (total 
was 577, now 579). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 6s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 17s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 21s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 15s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| 

[jira] [Updated] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError

2016-01-08 Thread zuotingbing (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zuotingbing updated HDFS-9617:
--
Attachment: UploadProcess.java
LoadThread.java
HadoopLoader.java

> my java client use muti-thread to put a same file to a same hdfs uri, after 
> no lease error,then client OutOfMemoryError
> ---
>
> Key: HDFS-9617
> URL: https://issues.apache.org/jira/browse/HDFS-9617
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zuotingbing
> Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java
>
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease.  
> Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391)
>   at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
> my java client(JVM -Xmx=2G) :
> jmap TOP15:
> num #instances #bytes  class name
> --
>1: 48072 2053976792  [B
>2: 458525987568  
>3: 458525878944  
>4:  33634193112  
>5:  33632548168  
>6:  27332299008  
>7:   5332191696  [Ljava.nio.ByteBuffer;
>8: 247332026600  [C
>9: 312872002368  
> org.apache.hadoop.hdfs.DFSOutputStream$Packet
>   10: 31972 767328  java.util.LinkedList$Node
>   11: 22845 548280  java.lang.String
>   12: 20372 488928  java.util.concurrent.atomic.AtomicLong
>   13:  3700 452984  java.lang.Class
>   14:   981 439576  
>   15:  5583 376344  [S
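The failure mode above (many threads writing the same path, each queueing {{DFSOutputStream$Packet}} objects until the client runs out of heap) can be avoided on the client side by letting only one thread write a given destination path at a time. A minimal sketch, assuming a guard map in the spirit of the reporter's {{processingFileMap}}; the class and method names here are illustrative, not the actual client code:

```java
import java.util.concurrent.ConcurrentHashMap;

public class UploadGuardSketch {
  // Destination paths currently being uploaded by some thread.
  private final ConcurrentHashMap<String, Boolean> processing = new ConcurrentHashMap<>();

  /** Returns true iff the caller atomically acquired the right to upload dst. */
  boolean tryBegin(String dst) {
    // putIfAbsent returns null only for the first thread to claim this path.
    return processing.putIfAbsent(dst, Boolean.TRUE) == null;
  }

  /** Releases the path once the upload finishes (or fails). */
  void end(String dst) {
    processing.remove(dst);
  }

  public static void main(String[] args) {
    UploadGuardSketch g = new UploadGuardSketch();
    System.out.println(g.tryBegin("/Tmp2/43.bmp"));  // true: first writer wins
    System.out.println(g.tryBegin("/Tmp2/43.bmp"));  // false: concurrent duplicate rejected
    g.end("/Tmp2/43.bmp");
    System.out.println(g.tryBegin("/Tmp2/43.bmp"));  // true: free again
  }
}
```

A thread whose {{tryBegin}} returns false should skip or retry later instead of opening a second stream to the same file, which is what triggers the lease conflict.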





[jira] [Updated] (HDFS-9455) In distcp, Invalid Argument Error thrown in case of filesystem operation failure

2016-01-08 Thread Daisuke Kobayashi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daisuke Kobayashi updated HDFS-9455:

Attachment: HDFS-9455.01.patch

Thanks! Uploaded a patch as HDFS-9455.01.patch. It catches SocketException 
explicitly and logs the following message when it fails to determine the target 
path. I've confirmed this works with upstream too.

{noformat}
[daisuke@test2 ~]$ hadoop distcp webhdfs://test2:50470/user/daisuke/hosts 
webhdfs://test2:50470/user/daisuke/dir
16/01/08 18:42:09 WARN security.UserGroupInformation: 
PriviledgedActionException as:daisuke@HADOOP (auth:KERBEROS) 
cause:java.net.SocketException: Unexpected end of file from server
16/01/08 18:42:09 WARN security.UserGroupInformation: 
PriviledgedActionException as:daisuke@HADOOP (auth:KERBEROS) 
cause:java.net.SocketException: Unexpected end of file from server
16/01/08 18:42:09 ERROR tools.DistCp: An error occurs while getting target path:
java.net.SocketException: Unexpected end of file from server
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:772)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:336)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:91)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:614)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:464)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:493)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:489)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1307)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:239)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:429)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:450)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:697)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:609)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:464)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:493)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:489)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:844)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:859)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1409)
at org.apache.hadoop.tools.DistCp.setTargetPathExists(DistCp.java:229)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:118)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:458)
[daisuke@test2 ~]$
{noformat}

I am still unsure whether returning {{DistCpConstants.UNKNOWN_ERROR}} is 
appropriate, though. Can you advise me, [~yzhangal]?
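The approach described can be sketched roughly as follows. The {{Fs}} interface, the method names, and the {{UNKNOWN_ERROR}} value are stand-ins for the real DistCp internals, not the actual patch:

```java
import java.net.SocketException;

public class TargetPathCheckSketch {
  // Stand-in for DistCpConstants.UNKNOWN_ERROR; the real value may differ.
  static final int UNKNOWN_ERROR = -999;

  // Minimal stand-in for the FileSystem.exists() call that fails in the trace.
  interface Fs {
    boolean exists(String path) throws SocketException;
  }

  // Catch SocketException while resolving the target path and return a generic
  // error code, instead of surfacing "Invalid Argument" plus the usage text.
  static int checkTarget(Fs fs, String target) {
    try {
      fs.exists(target);
      return 0;
    } catch (SocketException e) {
      System.err.println("An error occurs while getting target path: " + e.getMessage());
      return UNKNOWN_ERROR;
    }
  }

  public static void main(String[] args) {
    int rc = checkTarget(p -> {
      throw new SocketException("Unexpected end of file from server");
    }, "webhdfs://test2:50470/user/daisuke/dir");
    System.out.println(rc);  // -999
  }
}
```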


> In distcp, Invalid Argument Error thrown in case of filesystem operation 
> failure
> 
>
> Key: HDFS-9455
> URL: https://issues.apache.org/jira/browse/HDFS-9455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp, security
>Reporter: Archana T
>Assignee: Daisuke Kobayashi
>Priority: Minor
> Attachments: HDFS-9455.01.patch
>
>
> When a filesystem operation failure happens during distcp, 
> the wrong exception (Invalid Argument) is thrown along with the distcp command usage.
> {color:red} 
> hadoop distcp 

[jira] [Commented] (HDFS-9522) Cleanup o.a.h.hdfs.protocol.SnapshotDiffReport

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088994#comment-15088994
 ] 

Hadoop QA commented on HDFS-9522:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
21s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 9m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
19s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 4s 
{color} | {color:red} Patch generated 2 new checkstyle issues in root (total 
was 69, now 67). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 25s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client introduced 2 new 
FindBugs issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 10s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 56s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 42s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s 
{color} | {color:green} hadoop-distcp in 

[jira] [Commented] (HDFS-8767) RawLocalFileSystem.listStatus() returns null for UNIX pipefile

2016-01-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089145#comment-15089145
 ] 

Junping Du commented on HDFS-8767:
--

Thank you, [~ajisakaa]!

> RawLocalFileSystem.listStatus() returns null for UNIX pipefile
> --
>
> Key: HDFS-8767
> URL: https://issues.apache.org/jira/browse/HDFS-8767
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Kanaka Kumar Avvaru
>Priority: Critical
> Fix For: 2.7.2, 2.6.4
>
> Attachments: HDFS-8767-00.patch, HDFS-8767-01.patch, 
> HDFS-8767-02.patch, HDFS-8767-branch-2.6.patch, HDFS-8767.003.patch, 
> HDFS-8767.004.patch
>
>
> Calling FileSystem.listStatus() on a UNIX pipe file returns null instead of 
> the file. The bug breaks Hive when Hive loads data from UNIX pipe file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9627) libhdfs++: Add a mechanism to retrieve human readable error messages through the C API

2016-01-08 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089283#comment-15089283
 ] 

Bob Hansen commented on HDFS-9627:
--

Additional minor comment: the new method should be declared in the "extern C" 
namespace.

> libhdfs++: Add a mechanism to retrieve human readable error messages through 
> the C API
> --
>
> Key: HDFS-9627
> URL: https://issues.apache.org/jira/browse/HDFS-9627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9627.HDFS-8707.000.patch, 
> HDFS-9627.HDFS-8707.000.patch
>
>
> Libhdfs doesn't have this but libhdfs3 has a "hdfsGetLastErrorString" 
> function.  The C API needs to be able to pass out error messages that are 
> more specific than what errno can provide.
> This functionality should be exposed via a new public header in order to keep 
> hdfs.h consistent with the libhdfs header.
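The errno-style mechanism under discussion — a per-thread "last error string" that carries more detail than an error code — can be sketched in Java (chosen because the client code elsewhere in this thread is Java; the real libhdfs++ C API would use thread-local storage the same way, and the names below are illustrative, not from the patch):

```java
public class LastError {
    // Analog of hdfsGetLastErrorString: each thread sees only its own
    // most recent error message, like a thread-local errno with detail.
    private static final ThreadLocal<String> LAST =
        ThreadLocal.withInitial(() -> "");

    static void set(String msg) { LAST.set(msg); }
    static String get() { return LAST.get(); }

    public static void main(String[] args) throws Exception {
        set("main: lease expired");
        Thread t = new Thread(() -> set("worker: connection reset"));
        t.start();
        t.join();
        // The worker's error does not overwrite main's thread-local copy.
        System.out.println(get()); // prints "main: lease expired"
    }
}
```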



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9556) libhdfs++: pull Options from default configs by default

2016-01-08 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9556:
-
Description: Include method to connect to defaultFS from configuration
Summary: libhdfs++: pull Options from default configs by default  (was: 
libhdfs++: allow connection to defaultFS from configuration)

> libhdfs++: pull Options from default configs by default
> ---
>
> Key: HDFS-9556
> URL: https://issues.apache.org/jira/browse/HDFS-9556
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
>
> Include method to connect to defaultFS from configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9627) libhdfs++: Add a mechanism to retrieve human readable error messages through the C API

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089101#comment-15089101
 ] 

Hadoop QA commented on HDFS-9627:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
40s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 5s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 59s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 44s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 42s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 18s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781193/HDFS-9627.HDFS-8707.000.patch
 |
| JIRA Issue | HDFS-9627 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 8046ab81836e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 2f20790 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| JDK v1.7.0_91  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14069/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Max memory used | 76MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/14069/console |


This message was automatically generated.



> libhdfs++: Add a mechanism to retrieve human readable error messages through 
> the C API
> --
>
> Key: HDFS-9627
> URL: 

[jira] [Commented] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError

2016-01-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089226#comment-15089226
 ] 

Kai Zheng commented on HDFS-9617:
-

Looking at your attached code, you're trying to use *1* threads to write to 
the same HDFS file, which is surely not going to work. What behavior and 
output would you expect? As Kihwal said, this can cause all sorts of problems. 
You need to be clear about what you want to achieve, then ask in the user 
mailing list about how to do it, as Mingliang suggested.
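The failure mode described here — many writers racing on one path — goes away if each writer gets its own target path. A minimal sketch using local files (the file names and pool size are illustrative, not taken from the attached client; on HDFS the same pattern avoids lease contention):

```java
import java.nio.file.*;
import java.util.concurrent.*;

public class UniquePathUpload {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("upload-demo");
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            final int id = i;
            // Each worker writes to its own distinct path, so no two
            // writers ever contend for the same file (or, in HDFS, lease).
            pool.submit(() -> Files.write(dir.resolve("43.bmp." + id + ".tmp"),
                                          "data".getBytes()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(Files.list(dir).count()); // 4 distinct files
    }
}
```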

> my java client use muti-thread to put a same file to a same hdfs uri, after 
> no lease error,then client OutOfMemoryError
> ---
>
> Key: HDFS-9617
> URL: https://issues.apache.org/jira/browse/HDFS-9617
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zuotingbing
> Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java
>
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease.  
> Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391)
>   at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
> my java client(JVM -Xmx=2G) :
> jmap TOP15:
> num #instances #bytes  class name
> --
>1: 48072 2053976792  [B
>2: 458525987568  
>3: 458525878944  
>4:  33634193112  
>5:  33632548168  
>6:  27332299008  
>7:   5332191696  [Ljava.nio.ByteBuffer;
>8: 247332026600  [C
>9: 312872002368  
> org.apache.hadoop.hdfs.DFSOutputStream$Packet
>   10: 31972 767328  java.util.LinkedList$Node
>   11: 22845 548280  java.lang.String
>   12: 20372 488928  java.util.concurrent.atomic.AtomicLong
>   13:  3700 452984  java.lang.Class
>   

[jira] [Updated] (HDFS-9632) libhdfs++: Add additional type-safe getters to the Configuration class

2016-01-08 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9632:
-
Attachment: HDFS-9632.HDFS-8707.000.patch

Patch: Initial cut of URI parsing.  Still needs work, but wanted to keep it 
here as a start.

> libhdfs++: Add additional type-safe getters to the Configuration class
> --
>
> Key: HDFS-9632
> URL: https://issues.apache.org/jira/browse/HDFS-9632
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
> Attachments: HDFS-9632.HDFS-8707.000.patch
>
>
> Notably, URIs and byte sizes are missing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9617) my java client use muti-thread to put a same file to a same hdfs uri, after no lease error,then client OutOfMemoryError

2016-01-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089245#comment-15089245
 ] 

Kai Zheng commented on HDFS-9617:
-

Looks like you want to implement a file-loading tool that uploads files to an 
HDFS cluster. If so, you may take a look at the work in HDFS-8968, where a 
benchmark tool does similar things to measure write throughput, using multiple 
concurrent writers in threads, but writing to different HDFS files.

> my java client use muti-thread to put a same file to a same hdfs uri, after 
> no lease error,then client OutOfMemoryError
> ---
>
> Key: HDFS-9617
> URL: https://issues.apache.org/jira/browse/HDFS-9617
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zuotingbing
> Attachments: HadoopLoader.java, LoadThread.java, UploadProcess.java
>
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
>  No lease on /Tmp2/43.bmp.tmp (inode 2913263): File does not exist. [Lease.  
> Holder: DFSClient_NONMAPREDUCE_2084151715_1, pendingcreates: 250]
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3358)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3160)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3042)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:615)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1653)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>   at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391)
>   at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
> my java client(JVM -Xmx=2G) :
> jmap TOP15:
> num #instances #bytes  class name
> --
>1: 48072 2053976792  [B
>2: 458525987568  
>3: 458525878944  
>4:  33634193112  
>5:  33632548168  
>6:  27332299008  
>7:   5332191696  [Ljava.nio.ByteBuffer;
>8: 247332026600  [C
>9: 312872002368  
> org.apache.hadoop.hdfs.DFSOutputStream$Packet
>   10: 31972 767328  java.util.LinkedList$Node
>   11: 22845 548280  java.lang.String
>   12: 20372 488928  java.util.concurrent.atomic.AtomicLong
>   13:  3700 452984  java.lang.Class
>   14:   981 439576  
>   15:  5583 376344  [S



--

[jira] [Commented] (HDFS-9627) libhdfs++: Add a mechanism to retrieve human readable error messages through the C API

2016-01-08 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089282#comment-15089282
 ] 

Bob Hansen commented on HDFS-9627:
--

Additional minor comment: for consistency, we should mark the methods as 
{{LIBHDFS_EXTERNAL}}, which means including {{hdfs.h}}, which means we need to 
disambiguate the C {{hdfs.h}} from the libhdfspp {{hdfs.h}} (which declares 
FileSystem and FileHandle).  Perhaps renaming this patch's {{hdfspp.h}} to 
{{hdfs_ext.h}} and renaming {{libhdfs/include/libhdfspp/hdfs.h}} to 
{{hdfspp.h}} is a good set of names?

Nothing is ever simple.

> libhdfs++: Add a mechanism to retrieve human readable error messages through 
> the C API
> --
>
> Key: HDFS-9627
> URL: https://issues.apache.org/jira/browse/HDFS-9627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9627.HDFS-8707.000.patch, 
> HDFS-9627.HDFS-8707.000.patch
>
>
> Libhdfs doesn't have this but libhdfs3 has a "hdfsGetLastErrorString" 
> function.  The C API needs to be able to pass out error messages that are 
> more specific than what errno can provide.
> This functionality should be exposed via a new public header in order to keep 
> hdfs.h consistent with the libhdfs header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9632) libhdfs++: Add additional type-safe getters to the Configuration class

2016-01-08 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-9632:


 Summary: libhdfs++: Add additional type-safe getters to the 
Configuration class
 Key: HDFS-9632
 URL: https://issues.apache.org/jira/browse/HDFS-9632
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


Notably, URIs and byte sizes are missing



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8891) HDFS concat should keep srcs order

2016-01-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089150#comment-15089150
 ] 

Junping Du commented on HDFS-8891:
--

Thanks [~ajisakaa] and [~chris.douglas] for confirmation.

> HDFS concat should keep srcs order
> --
>
> Key: HDFS-8891
> URL: https://issues.apache.org/jira/browse/HDFS-8891
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Blocker
> Fix For: 2.7.2
>
> Attachments: HDFS-8891-test-only-branch-2.6.patch, 
> HDFS-8891.001.patch, HDFS-8891.002.patch
>
>
> FSDirConcatOp.verifySrcFiles may change src files order, but it should keep 
> their order as input.
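The fix comes down to deduplicating the srcs without losing the caller's order — e.g. an insertion-ordered set instead of a hash-ordered one. A small illustration (the paths are made up; the actual change lives in FSDirConcatOp):

```java
import java.util.*;

public class SrcOrder {
    public static void main(String[] args) {
        String[] srcs = {"/a/part-2", "/a/part-0", "/a/part-1", "/a/part-0"};
        // A HashSet may iterate in any order; LinkedHashSet keeps the
        // caller's order while still rejecting duplicate src paths.
        Set<String> ordered = new LinkedHashSet<>(Arrays.asList(srcs));
        System.out.println(ordered); // prints [/a/part-2, /a/part-0, /a/part-1]
    }
}
```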



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9521) TransferFsImage.receiveFile should account and log separate times for image download and fsync to disk

2016-01-08 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-9521:
---
Attachment: HDFS-9521.patch.1

New patch version including logging for combined time for download + fsyncs to 
all disks.

Also fixed minor checkstyle issues.

> TransferFsImage.receiveFile should account and log separate times for image 
> download and fsync to disk 
> ---
>
> Key: HDFS-9521
> URL: https://issues.apache.org/jira/browse/HDFS-9521
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-9521.patch, HDFS-9521.patch.1
>
>
> Currently, TransferFsImage.receiveFile is logging total transfer time as 
> below:
> {noformat}
> double xferSec = Math.max(
>((float)(Time.monotonicNow() - startTime)) / 1000.0, 0.001);
> long xferKb = received / 1024;
> LOG.info(String.format("Transfer took %.2fs at %.2f KB/s",xferSec, xferKb / 
> xferSec))
> {noformat}
> This is really useful, but it just measures the total method execution time, 
> which includes time taken to download the image and do an fsync to all the 
> namenode metadata directories.
> Sometimes when troubleshooting these image transfer problems, it's 
> interesting to know which part of the process is the bottleneck 
> (whether network or disk write).
> This patch accounts time for image download and fsync to each disk 
> separately, logging how much time each operation took.
>  
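Splitting the single elapsed-time measurement into per-phase timings might look like the sketch below (monotonic clock, as in the existing code; the sleeps are placeholders for the real download and fsync work):

```java
public class PhaseTiming {
    public static void main(String[] args) throws Exception {
        long t0 = System.nanoTime();
        Thread.sleep(20);                 // stand-in for the image download
        long t1 = System.nanoTime();
        Thread.sleep(10);                 // stand-in for fsync to one metadata dir
        long t2 = System.nanoTime();
        // Log each phase separately so the bottleneck (network vs. disk)
        // is visible, instead of one combined transfer time.
        System.out.printf("download %.3fs, fsync %.3fs, total %.3fs%n",
            (t1 - t0) / 1e9, (t2 - t1) / 1e9, (t2 - t0) / 1e9);
    }
}
```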



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089277#comment-15089277
 ] 

Kihwal Lee commented on HDFS-9574:
--

The failed test cases all pass when run locally.
{noformat}
---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.947 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFsck
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.155 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestFsck
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.697 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.252 sec
 - in org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSafeMode
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.149 sec - in 
org.apache.hadoop.hdfs.TestSafeMode

Results :

Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
{noformat}

The whitespace warning is from the context, not my change.  Nothing to be done 
for the checkstyle warnings.

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9574:
-
Target Version/s: 2.7.3, 2.6.4  (was: 2.7.3)

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9624) DataNode start slowly due to the initial DU command operations

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089186#comment-15089186
 ] 

Hadoop QA commented on HDFS-9624:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-hdfs-project/hadoop-hdfs (total was 408, now 410). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 29s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 6s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 145m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestReadStripedFileWithDecoding |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion |
|   | 

[jira] [Updated] (HDFS-9521) TransferFsImage.receiveFile should account and log separate times for image download and fsync to disk

2016-01-08 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-9521:
---
Status: Patch Available  (was: In Progress)

> TransferFsImage.receiveFile should account and log separate times for image 
> download and fsync to disk 
> ---
>
> Key: HDFS-9521
> URL: https://issues.apache.org/jira/browse/HDFS-9521
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-9521.patch, HDFS-9521.patch.1
>
>
> Currently, TransferFsImage.receiveFile is logging total transfer time as 
> below:
> {noformat}
> double xferSec = Math.max(
>((float)(Time.monotonicNow() - startTime)) / 1000.0, 0.001);
> long xferKb = received / 1024;
> LOG.info(String.format("Transfer took %.2fs at %.2f KB/s",xferSec, xferKb / 
> xferSec))
> {noformat}
> This is really useful, but it just measures the total method execution time, 
> which includes time taken to download the image and do an fsync to all the 
> namenode metadata directories.
> Sometimes when troubleshooting these image transfer problems, it's 
> useful to know which part of the process is the bottleneck 
> (network or disk write).
> This patch accounts for the image download and the fsync to each disk 
> separately, logging how much time each operation took.
>  
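For illustration, the split described above can be sketched as follows. This is a hedged sketch, not the actual HDFS-9521 patch: the class name, method name, and log wording are invented here; only the Math.max zero-duration guard and the %.2f format mirror the snippet quoted in the description.

```java
import java.util.Locale;

// Illustrative sketch (not the HDFS-9521 patch itself): time the image
// download and the fsync phases separately and log each one, rather than
// logging a single total as the existing code does.
public class TransferTimingSketch {

    /** Formats both phase timings; mirrors the existing %.2f log style. */
    static String summarize(long downloadMillis, long fsyncMillis) {
        // Same zero-duration guard the original total-time code uses.
        double downloadSec = Math.max(downloadMillis / 1000.0, 0.001);
        double fsyncSec = Math.max(fsyncMillis / 1000.0, 0.001);
        return String.format(Locale.ROOT,
                "Image download took %.2fs, fsync took %.2fs",
                downloadSec, fsyncSec);
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(20);             // stand-in for streaming the image
        long downloadMillis = (System.nanoTime() - start) / 1_000_000;

        start = System.nanoTime();
        Thread.sleep(10);             // stand-in for fsync per metadata dir
        long fsyncMillis = (System.nanoTime() - start) / 1_000_000;

        System.out.println(summarize(downloadMillis, fsyncMillis));
    }
}
```

Logging the two figures separately makes it immediately clear whether a slow image transfer was network-bound or disk-bound.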



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9521) TransferFsImage.receiveFile should account and log separate times for image download and fsync to disk

2016-01-08 Thread Wellington Chevreuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HDFS-9521:
---
Status: In Progress  (was: Patch Available)

New patch version available.

> TransferFsImage.receiveFile should account and log separate times for image 
> download and fsync to disk 
> ---
>
> Key: HDFS-9521
> URL: https://issues.apache.org/jira/browse/HDFS-9521
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-9521.patch, HDFS-9521.patch.1
>
>
> Currently, TransferFsImage.receiveFile is logging total transfer time as 
> below:
> {noformat}
> double xferSec = Math.max(
>((float)(Time.monotonicNow() - startTime)) / 1000.0, 0.001);
> long xferKb = received / 1024;
> LOG.info(String.format("Transfer took %.2fs at %.2f KB/s",xferSec, xferKb / 
> xferSec))
> {noformat}
> This is really useful, but it just measures the total method execution time, 
> which includes time taken to download the image and do an fsync to all the 
> namenode metadata directories.
> Sometimes when troubleshooting these image transfer problems, it's 
> useful to know which part of the process is the bottleneck 
> (network or disk write).
> This patch accounts for the image download and the fsync to each disk 
> separately, logging how much time each operation took.
>  





[jira] [Commented] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089509#comment-15089509
 ] 

Daryn Sharp commented on HDFS-9574:
---

+1 Looks good. A few suggestions, if you think they would add value; up to you.

{{DFSInputStream}}: Instead of tracking the {{retryList}} separately, would it 
be easier to just add it back to the {{nodeList}} and set the {{isRetry}} 
boolean?

{{DataXceiver}}: Might consider changing {{checkAccess}} to not require the 
stream and just have it call {{getBufferedOutputStream}}.

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Commented] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089549#comment-15089549
 ] 

Kihwal Lee commented on HDFS-9574:
--

Committed to trunk, branch-2 and branch-2.8. branch-2.7 and 2.6 need a separate 
patch since files have been moved and the hdfs client has been separated out.

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Updated] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9574:
-
Status: Open  (was: Patch Available)

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Commented] (HDFS-9521) TransferFsImage.receiveFile should account and log separate times for image download and fsync to disk

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089557#comment-15089557
 ] 

Hadoop QA commented on HDFS-9521:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 10s 
{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 12s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 3s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 165m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestDFSClientRetries |
| JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.web.TestWebHDFS |
\\
\\
|| Subsystem || 

[jira] [Updated] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-9574:
-
Attachment: HDFS-9574.v3.br27.patch

The patch for branch-2.7 is not very different from the trunk/branch-2 one.
- Simple context differences.
- Differences due to the hdfs client split and file moves.
- {{AccessMode}} moved from {{BlockTokenSecretManager}} to 
{{BlockTokenIdentifier}}.

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, 
> HDFS-9574.v3.br27.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Commented] (HDFS-9627) libhdfs++: Add a mechanism to retrieve human readable error messages through the C API

2016-01-08 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089463#comment-15089463
 ] 

James Clampffer commented on HDFS-9627:
---

Thanks [~bobhansen] for resubmitting the patch for CI and feedback.

Regarding your comments:

"Should libhdfspp.h live in bindings/c?"
I moved this around a couple of times while I was working on it.  The reasons 
for putting it in include/libhdfspp were to make packaging easier and to keep 
all libhdfspp-specific includes in one place.  I don't have a problem with 
moving it if you'd prefer that.

"Tests should probably be using EXPECT_EQ(expected value, test value) rather 
than ASSERT(ex == test)"
Good point, I can change that.

"Additional minor comment: for consistency, we should mark the methods as 
LIBHDFS_EXTERNAL, which means including hdfs.h, which means we need to 
disambiguate the C hdfs.h from the libdhfdspp hdfs.h (which declares FileSystem 
and FileHandle). Perhaps renaming this patch's hdfspp.h to hdfs_ext.h and 
renaming libhdfs/include/libhdfspp/hdfs.h to hdfspp.h is a good set of names?"
Good point, I think having two header files with the same name in the same 
project is asking for trouble.  I've already hit situations writing toy 
applications on top of libhdfs++ where that got annoying.  I think moving 
hdfspp.h->hdfs_extensions and (the libhdfs++) hdfs.h->hdfspp.h would be a good 
route.  I'll change that and add LIBHDFS_EXTERNAL.

"Additional minor comment: the new method should be declared in the "extern C" 
namespace."
Good catch.  If more functions start getting added to the extensions header it 
may be worth building a test in C99 mode and linking it against the functions 
in the header as a way of enforcing this.


> libhdfs++: Add a mechanism to retrieve human readable error messages through 
> the C API
> --
>
> Key: HDFS-9627
> URL: https://issues.apache.org/jira/browse/HDFS-9627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-9627.HDFS-8707.000.patch, 
> HDFS-9627.HDFS-8707.000.patch
>
>
> Libhdfs doesn't have this but libhdfs3 has a "hdfsGetLastErrorString" 
> function.  The C API needs to be able to pass out error messages that are 
> more specific than what errno can provide.
> This functionality should be exposed via a new public header in order to keep 
> hdfs.h consistent with the libhdfs header.





[jira] [Commented] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089536#comment-15089536
 ] 

Kihwal Lee commented on HDFS-9574:
--

bq. DFSInputStream: Instead of tracking the retryList separately, would it be 
easier to just add it back to the nodeList and set the isRetry boolean?
I thought about doing that. But with one list, it is hard to tell whether any 
good candidates are left to try or only retriable nodes remain. The code ended 
up more complicated than necessary, so I settled on two separate lists.

bq. DataXceiver: Might consider changing checkAccess to not require the stream 
and just have it call getBufferedOutputStream.
It looks like that might be okay for the current usages. I didn't try to 
improve all the inconsistencies in there; that might be better done in a 
separate clean-up jira.
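A minimal sketch of the two-list shape discussed here (all names are invented for illustration; this is not the actual DFSInputStream code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of keeping restarting datanodes in a separate retry
// list: "is any fresh candidate left?" becomes a trivial check, which a
// single list with a per-node isRetry flag would obscure.
public class DatanodeChooserSketch {
    private final Deque<String> nodeList = new ArrayDeque<>();  // untried nodes
    private final Deque<String> retryList = new ArrayDeque<>(); // restarting nodes

    void offer(String datanode) { nodeList.add(datanode); }

    void markRetriable(String datanode) { retryList.add(datanode); }

    boolean hasFreshCandidate() { return !nodeList.isEmpty(); }

    /** Prefer an untried node; fall back to retriable ones only when none remain. */
    String next() {
        if (!nodeList.isEmpty()) {
            return nodeList.poll();
        }
        return retryList.poll(); // null: nothing left, caller fails or waits
    }
}
```

With one flagged list, finding out whether a non-retry node remains would require a scan on every pick; the second list keeps that decision O(1).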

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Commented] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089560#comment-15089560
 ] 

Hudson commented on HDFS-9574:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9073 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9073/])
HDFS-9574. Reduce client failures during datanode restart. Contributed (kihwal: 
rev 38c4c14472996562eb3d610649246770c2888c6b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestDatanodeRestart.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataXceiverLazyPersistHint.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNodeFaultInjector.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java


> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.





[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings

2016-01-08 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9628:
-
Status: Patch Available  (was: Open)

> libhdfs++: Implement builder apis from C bindings
> -
>
> Key: HDFS-9628
> URL: https://issues.apache.org/jira/browse/HDFS-9628
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9628.HDFS-8707.000.patch
>
>






[jira] [Updated] (HDFS-9628) libhdfs++: Implement builder apis from C bindings

2016-01-08 Thread Bob Hansen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bob Hansen updated HDFS-9628:
-
Attachment: HDFS-9628.HDFS-8707.000.patch

Implemented C builder interface; added hdfspp.h for some extensions that seem 
relevant.  This file will conflict with the hdfspp.h that's introduced with the 
error checking bug, but I'll resolve it once that bug lands.

> libhdfs++: Implement builder apis from C bindings
> -
>
> Key: HDFS-9628
> URL: https://issues.apache.org/jira/browse/HDFS-9628
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: Bob Hansen
> Attachments: HDFS-9628.HDFS-8707.000.patch
>
>






[jira] [Comment Edited] (HDFS-9574) Reduce client failures during datanode restart

2016-01-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089536#comment-15089536
 ] 

Kihwal Lee edited comment on HDFS-9574 at 1/8/16 5:11 PM:
--

bq. DFSInputStream: Instead of tracking the retryList separately, would it be 
easier to just add it back to the nodeList and set the isRetry boolean?
I thought about doing that. But with one list, it is hard to tell whether it 
has any good candidate left to try or it has only retriable nodes. The code 
ended up being more complicated than necessary so I settled with two separate 
lists.

bq. DataXceiver: Might consider changing checkAccess to not require the stream 
and just have it call getBufferedOutputStream.
It looks like that might be okay for the current usages. I didn't try to 
improve all existing inconsistencies in there. It might be better to be done in 
a separate clean-up jira.


was (Author: kihwal):
bq. DFSInputStream: Instead of tracking the retryList separately, would it be 
easier to just add it back to the nodeList and set the isRetry boolean?
I thought about doing that. But with one list, it is hard to tell whether it 
has any good candidate left to try or it has only retriable nodes. The code 
ended up being more complicated than necessary so I settled with two separate 
lists.

bq. DataXceiver: Might consider changing checkAccess to not require the stream 
and just have it call getBufferedOutputStream.
It looks like that might be okay for the current usages. I didn't try to 
improve all inconsistencies in there. It might be better to be done in a 
separate clean-up jira.

> Reduce client failures during datanode restart
> --
>
> Key: HDFS-9574
> URL: https://issues.apache.org/jira/browse/HDFS-9574
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Attachments: HDFS-9574.patch, HDFS-9574.v2.patch, HDFS-9574.v3.patch
>
>
> Since DataXceiverServer is initialized before BP is fully up, client requests 
> will fail until the datanode registers.


