[jira] [Resolved] (HDFS-8332) DFS client API calls should check filesystem closed

2015-05-08 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-8332.
---
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed

 DFS client API calls should check filesystem closed
 ---

 Key: HDFS-8332
 URL: https://issues.apache.org/jira/browse/HDFS-8332
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R
 Fix For: 2.8.0

 Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch, 
 HDFS-8332-002-Branch-2.patch, HDFS-8332-002.patch, 
 HDFS-8332.001.branch-2.patch


 I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs can be 
 called even after the filesystem is closed. Instead, these calls should do 
 {{checkOpen}} and throw:
 {code}
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
 {code}
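The guard being requested is a small, uniform pattern; a minimal, self-contained sketch of it (hypothetical class and method names, not the actual DFSClient code):

```java
import java.io.IOException;

// Minimal sketch of the checkOpen guard pattern described above.
// SketchClient and listCacheDirectives() are illustrative names only.
public class SketchClient {
    private volatile boolean clientRunning = true;

    // Every public API call runs this guard before doing any work.
    private void checkOpen() throws IOException {
        if (!clientRunning) {
            throw new IOException("Filesystem closed");
        }
    }

    public void close() {
        clientRunning = false;
    }

    public String listCacheDirectives() throws IOException {
        checkOpen(); // fail fast once the client is closed
        return "ok";
    }

    public static void main(String[] args) throws IOException {
        SketchClient c = new SketchClient();
        System.out.println(c.listCacheDirectives()); // prints "ok"
        c.close();
        try {
            c.listCacheDirectives();
        } catch (IOException e) {
            System.out.println(e.getMessage()); // prints "Filesystem closed"
        }
    }
}
```

The fix in the patch is essentially to add the existing `checkOpen` call at the top of the affected client methods.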



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8350) Remove old webhdfs.xml

2015-05-08 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-8350:
---

 Summary: Remove old webhdfs.xml
 Key: HDFS-8350
 URL: https://issues.apache.org/jira/browse/HDFS-8350
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Priority: Minor


The old-style document 
hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documenation/content/xdocs/webhdfs.xml
 is no longer maintained, and WebHDFS.md is used instead. We can remove 
webhdfs.xml.





[jira] [Created] (HDFS-8351) Remove namenode -finalize option from document

2015-05-08 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-8351:
---

 Summary: Remove namenode -finalize option from document
 Key: HDFS-8351
 URL: https://issues.apache.org/jira/browse/HDFS-8351
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA


The hdfs namenode -finalize option was removed by HDFS-5138; however, the document 
was not updated.
http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#namenode





[jira] [Created] (HDFS-8352) Erasure Coding: test webhdfs read write stripe file

2015-05-08 Thread Walter Su (JIRA)
Walter Su created HDFS-8352:
---

 Summary: Erasure Coding: test webhdfs read write stripe file
 Key: HDFS-8352
 URL: https://issues.apache.org/jira/browse/HDFS-8352
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su








[jira] [Resolved] (HDFS-348) When a HDFS client fails to read a block (due to server failure) the namenode should log this

2015-05-08 Thread nijel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nijel resolved HDFS-348.

  Resolution: Not A Problem
Target Version/s: 2.5.2

Agree with [~qwertymaniac].
Closing as not a problem.
Feel free to reopen.

 When a HDFS client fails to read a block (due to server failure) the namenode 
 should log this
 -

 Key: HDFS-348
 URL: https://issues.apache.org/jira/browse/HDFS-348
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: eric baldeschwieler
Assignee: Sameer Paranjpye

 Right now only client debugging info is available.  The fact that the client 
 node needed to execute a failure mitigation strategy should be logged 
 centrally so we can do analysis.





Build failed in Jenkins: Hadoop-Hdfs-trunk #2119

2015-05-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2119/changes

Changes:

[junping_du] YARN-3523. Cleanup ResourceManagerAdministrationProtocol interface 
audience. Contributed by Naganarasimha G R

[zjshen] YARN-3448. Added a rolling time-to-live LevelDB timeline store 
implementation. Contributed by Jonathan Eagles.

[szetszwo] HDFS-7980. Incremental BlockReport will dramatically slow down 
namenode startup.  Contributed by Walter Su

[aw] HADOOP-11936. Dockerfile references a removed image (aw)

[jianhe] YARN-3584. Fixed attempt diagnostics format shown on the UI. 
Contributed by nijel

[jlowe] MAPREDUCE-6279. AM should explicitly exit JVM after all services have 
stopped. Contributed by Eric Payne

[wheat9] HDFS-8321. CacheDirectives and CachePool operations should throw 
RetriableException in safemode. Contributed by Haohui Mai.

[wheat9] HDFS-8037. CheckAccess in WebHDFS silently accepts malformed FsActions 
parameters. Contributed by Walter Su.

[jianhe] YARN-2918. RM should not fail on startup if queue's configured labels 
do not exist in cluster-node-labels. Contributed by Wangda Tan

[aajisaka] YARN-1832. Fix wrong MockLocalizerStatus#equals implementation. 
Contributed by Hong Zhiguo.

[aajisaka] YARN-3572. Correct typos in WritingYarnApplications.md. Contributed 
by Gabor Liptak.

[vinayakumarb] HADOOP-11922. Misspelling of threshold in log4j.properties for 
tests in hadoop-tools (Contributed by Gabor Liptak)

[vinayakumarb] HDFS-8257. Namenode rollingUpgrade option is incorrect in 
document (Contributed by J.Andreina)

[vinayakumarb] HDFS-8067. haadmin prints out stale help messages (Contributed 
by Ajith S)

[devaraj] YARN-3592. Fix typos in RMNodeLabelsManager. Contributed by Sunil G.

[umamahesh] HDFS-8174. Update replication count to live rep count in fsck 
report. Contributed by  J.Andreina

[vinayakumarb] HDFS-6291. FSImage may be left unclosed in 
BootstrapStandby#doRun() ( Contributed by Sanghyun Yun)

[devaraj] YARN-3358. Audit log not present while refreshing Service ACLs.

[umamahesh] HDFS-8332. DFS client API calls should check filesystem closed. 
Contributed by Rakesh R.

[aajisaka] HDFS-8349. Remove .xml and documentation references to 
dfs.webhdfs.enabled. Contributed by Ray Chiang.

[ozawa] MAPREDUCE-6284. Add Task Attempt State API to MapReduce Application 
Master REST API. Contributed by Ryu Kobayashi.

[vinayakumarb] HDFS-7998. HDFS Federation : Command mentioned to add a NN to 
existing federated cluster is wrong (Contributed by Ajith S)

[aajisaka] HDFS-8222. Remove usage of dfsadmin -upgradeProgress from document 
which is no longer supported. Contributed by J.Andreina.

[ozawa] YARN-3589. RM and AH web UI display DOCTYPE wrongly. Contributed by 
Rohith.

[umamahesh] HDFS-8108. Fsck should provide the info on mandatory option to be 
used along with -blocks ,-locations and -racks. Contributed by J.Andreina.

[vinayakumarb] HDFS-8187. Remove usage of '-setStoragePolicy' and 
'-getStoragePolicy' using dfsadmin cmd (as it is not been supported) 
(Contributed by J.Andreina)

[vinayakumarb] HDFS-8175. Provide information on snapshotDiff for supporting 
the comparison between snapshot and current status (Contributed by J.Andreina)

[ozawa] HDFS-8207. Improper log message when blockreport interval compared with 
initial delay. Contributed by Brahma Reddy Battula and Ashish Singhi.

[aajisaka] MAPREDUCE-6079. Rename JobImpl#username to reporterUserName. 
Contributed by Tsuyoshi Ozawa.

[vinayakumarb] HDFS-8209. Support different number of datanode directories in 
MiniDFSCluster. (Contributed by surendra singh lilhore)

[devaraj] MAPREDUCE-6342. Make POM project names consistent. Contributed by 
Rohith.

[vinayakumarb] HDFS-8226. Non-HA rollback compatibility broken (Contributed by 
J.Andreina)

[ozawa] YARN-3169. Drop YARN's overview document. Contributed by Brahma Reddy 
Battula.

[vinayakumarb] HDFS-6576. Datanode log is generating at root directory in 
security mode (Contributed by surendra singh lilhore)

[vinayakumarb] HADOOP-11877. SnappyDecompressor's Logger class name is wrong ( 
Contributed by surendra singh lilhore)

[vinayakumarb] HDFS-3384. DataStreamer thread should be closed immediately when 
failed to setup a PipelineForAppendOrRecovery (Contributed by Uma Maheswara Rao 
G)

[umamahesh] HDFS-6285. tidy an error log inside BlockReceiver. Contributed by 
Liang Xie.

--
[...truncated 6730 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.542 sec - in 
org.apache.hadoop.fs.TestUnbuffer
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.565 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.393 sec - 
in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests 

Hadoop-Hdfs-trunk - Build # 2119 - Still Failing

2015-05-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2119/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6923 lines...]
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2116
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362738 bytes
Compression is 0.0%
Took 10 sec
Recording test results
Updating HDFS-8222
Updating YARN-3592
Updating HDFS-8037
Updating HADOOP-11936
Updating MAPREDUCE-6279
Updating YARN-3572
Updating HDFS-6576
Updating HDFS-8174
Updating HADOOP-11877
Updating HDFS-8175
Updating HDFS-8257
Updating HDFS-8321
Updating MAPREDUCE-6342
Updating YARN-2918
Updating YARN-1832
Updating HDFS-6285
Updating HDFS-3384
Updating HDFS-7998
Updating YARN-3584
Updating HDFS-8108
Updating YARN-3448
Updating YARN-3523
Updating HADOOP-11922
Updating HDFS-8332
Updating YARN-3589
Updating HDFS-8067
Updating MAPREDUCE-6284
Updating HDFS-8187
Updating HDFS-8349
Updating HDFS-8207
Updating YARN-3358
Updating HDFS-8209
Updating HDFS-8226
Updating HDFS-7980
Updating MAPREDUCE-6079
Updating YARN-3169
Updating HDFS-6291
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
4 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.testSnapshot

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1206)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
at org.apache.hadoop.hdfs.MiniDFSCluster.init(MiniDFSCluster.java:471)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:430)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.checkFSImage(TestSnapshot.java:201)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.runTestSnapshot(TestSnapshot.java:298)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.testSnapshot(TestSnapshot.java:237)


REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshot.testSnapshottableDirectory

Error Message:
org.xml.sax.SAXParseException; systemId: 
jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/3.0.0-SNAPSHOT/hadoop-common-3.0.0-SNAPSHOT.jar!/core-default.xml;
 lineNumber: 1; columnNumber: 1; Invalid byte 1 of 1-byte UTF-8 sequence.

Stack Trace:
java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: 
jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/3.0.0-SNAPSHOT/hadoop-common-3.0.0-SNAPSHOT.jar!/core-default.xml;
 lineNumber: 1; columnNumber: 1; Invalid byte 1 of 1-byte UTF-8 sequence.
at org.apache.xerces.impl.io.UTF8Reader.invalidByte(Unknown Source)
at org.apache.xerces.impl.io.UTF8Reader.read(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.skipString(Unknown Source)
at 
org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
at 

[jira] [Created] (HDFS-8353) FileSystem.resolvePath() fails for Windows UNC paths

2015-05-08 Thread john lilley (JIRA)
john lilley created HDFS-8353:
-

 Summary: FileSystem.resolvePath() fails for Windows UNC paths
 Key: HDFS-8353
 URL: https://issues.apache.org/jira/browse/HDFS-8353
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.6.0, 2.4.0
 Environment: Windows 8, x64
Java 1.7
Reporter: john lilley


FileSystem.resolvePath() fails with Windows UNC path. This has a knock-on 
effect with Parquet file access in local filesystem, because Parquet has no 
open-using-stream API that we've been able to find. Take this simple test: 

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Scratch {
  public static void main(String[] args) {
    // Note that this path must exist
    URI uriWithAuth = URI.create("file://host/share/file");
    try {
      FileSystem fs = FileSystem.get(uriWithAuth, new Configuration());
      fs.resolvePath(new Path(uriWithAuth));
    } catch (Exception ex) {
      ex.printStackTrace();
    }
  }
}

The resolvePath() call will fail in FileSystem.checkPath():
  protected void checkPath(Path path) {
    URI uri = path.toUri();
    String thatScheme = uri.getScheme();
    if (thatScheme == null)                // fs is relative
      return;
    URI thisUri = getCanonicalUri();
    String thisScheme = thisUri.getScheme();
    //authority and scheme are not case sensitive
    if (thisScheme.equalsIgnoreCase(thatScheme)) {// schemes match
      String thisAuthority = thisUri.getAuthority();
      String thatAuthority = uri.getAuthority();
      if (thatAuthority == null &&        // path's authority is null
          thisAuthority != null) {        // fs has an authority
        URI defaultUri = getDefaultUri(getConf());
        if (thisScheme.equalsIgnoreCase(defaultUri.getScheme())) {
          uri = defaultUri; // schemes match, so use this uri instead
        } else {
          uri = null; // can't determine auth of the path
        }
      }
      if (uri != null) {
        // canonicalize uri before comparing with this fs
        uri = canonicalizeUri(uri);
        thatAuthority = uri.getAuthority();
        if (thisAuthority == thatAuthority ||       // authorities match
            (thisAuthority != null &&
             thisAuthority.equalsIgnoreCase(thatAuthority)))
          return;
      }
    }
    throw new IllegalArgumentException("Wrong FS: " + path +
                                       ", expected: " + this.getUri());
  }


The problem is that thisAuthority is null while thatAuthority gets "host", and 
there is no logic for dealing with that case. In fact, this method seems broken 
in several ways. There are at least these problems: 
-- For UNC paths like file://host/share/..., the authority does not need to 
match, at least for Windows UNC paths. All of these paths refer to the same 
file system: file:///F:/folder/file, file://host1/share/file, 
file://host2/share/file. 
-- The test thisAuthority == thatAuthority violates Java 101. It should be 
thisAuthority.equals(thatAuthority). 
-- Hostnames are case-independent, so I think the authority comparison 
should also be case-insensitive, at least for UNC paths. 
-- I don't see any attempt to resolve hostnames to IP addresses, but that may 
simply be beyond the scope of this method.
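The null-safe, case-insensitive comparison the reporter is asking for can be sketched in isolation (a hypothetical helper, not a patch to FileSystem.checkPath()):

```java
// Illustrative null-safe, case-insensitive authority comparison along the
// lines the reporter suggests; not part of Hadoop's FileSystem class.
public class AuthorityCompare {
    static boolean sameAuthority(String a, String b) {
        if (a == null || b == null) {
            return a == b; // equal only when both sides are null
        }
        return a.equalsIgnoreCase(b); // hostnames are case-independent
    }

    public static void main(String[] args) {
        System.out.println(sameAuthority(null, null));     // true
        System.out.println(sameAuthority(null, "host"));   // false
        System.out.println(sameAuthority("HOST", "host")); // true
    }
}
```

This avoids both the reference-equality pitfall and the case-sensitivity issue in one place; it does not address the separate question of whether UNC authorities should be compared at all.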






Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #179

2015-05-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/179/changes

Changes:

[jlowe] MAPREDUCE-6279. AM should explicitly exit JVM after all services have 
stopped. Contributed by Eric Payne

[wheat9] HDFS-8321. CacheDirectives and CachePool operations should throw 
RetriableException in safemode. Contributed by Haohui Mai.

[wheat9] HDFS-8037. CheckAccess in WebHDFS silently accepts malformed FsActions 
parameters. Contributed by Walter Su.

[jianhe] YARN-2918. RM should not fail on startup if queue's configured labels 
do not exist in cluster-node-labels. Contributed by Wangda Tan

[aajisaka] YARN-1832. Fix wrong MockLocalizerStatus#equals implementation. 
Contributed by Hong Zhiguo.

[aajisaka] YARN-3572. Correct typos in WritingYarnApplications.md. Contributed 
by Gabor Liptak.

[vinayakumarb] HADOOP-11922. Misspelling of threshold in log4j.properties for 
tests in hadoop-tools (Contributed by Gabor Liptak)

[vinayakumarb] HDFS-8257. Namenode rollingUpgrade option is incorrect in 
document (Contributed by J.Andreina)

[vinayakumarb] HDFS-8067. haadmin prints out stale help messages (Contributed 
by Ajith S)

[devaraj] YARN-3592. Fix typos in RMNodeLabelsManager. Contributed by Sunil G.

[umamahesh] HDFS-8174. Update replication count to live rep count in fsck 
report. Contributed by  J.Andreina

[vinayakumarb] HDFS-6291. FSImage may be left unclosed in 
BootstrapStandby#doRun() ( Contributed by Sanghyun Yun)

[devaraj] YARN-3358. Audit log not present while refreshing Service ACLs.

[umamahesh] HDFS-8332. DFS client API calls should check filesystem closed. 
Contributed by Rakesh R.

[aajisaka] HDFS-8349. Remove .xml and documentation references to 
dfs.webhdfs.enabled. Contributed by Ray Chiang.

[ozawa] MAPREDUCE-6284. Add Task Attempt State API to MapReduce Application 
Master REST API. Contributed by Ryu Kobayashi.

[vinayakumarb] HDFS-7998. HDFS Federation : Command mentioned to add a NN to 
existing federated cluster is wrong (Contributed by Ajith S)

[aajisaka] HDFS-8222. Remove usage of dfsadmin -upgradeProgress from document 
which is no longer supported. Contributed by J.Andreina.

[ozawa] YARN-3589. RM and AH web UI display DOCTYPE wrongly. Contributed by 
Rohith.

[umamahesh] HDFS-8108. Fsck should provide the info on mandatory option to be 
used along with -blocks ,-locations and -racks. Contributed by J.Andreina.

[vinayakumarb] HDFS-8187. Remove usage of '-setStoragePolicy' and 
'-getStoragePolicy' using dfsadmin cmd (as it is not been supported) 
(Contributed by J.Andreina)

[vinayakumarb] HDFS-8175. Provide information on snapshotDiff for supporting 
the comparison between snapshot and current status (Contributed by J.Andreina)

[ozawa] HDFS-8207. Improper log message when blockreport interval compared with 
initial delay. Contributed by Brahma Reddy Battula and Ashish Singhi.

[aajisaka] MAPREDUCE-6079. Rename JobImpl#username to reporterUserName. 
Contributed by Tsuyoshi Ozawa.

[vinayakumarb] HDFS-8209. Support different number of datanode directories in 
MiniDFSCluster. (Contributed by surendra singh lilhore)

[devaraj] MAPREDUCE-6342. Make POM project names consistent. Contributed by 
Rohith.

[vinayakumarb] HDFS-8226. Non-HA rollback compatibility broken (Contributed by 
J.Andreina)

[ozawa] YARN-3169. Drop YARN's overview document. Contributed by Brahma Reddy 
Battula.

[vinayakumarb] HDFS-6576. Datanode log is generating at root directory in 
security mode (Contributed by surendra singh lilhore)

[vinayakumarb] HADOOP-11877. SnappyDecompressor's Logger class name is wrong ( 
Contributed by surendra singh lilhore)

[vinayakumarb] HDFS-3384. DataStreamer thread should be closed immediately when 
failed to setup a PipelineForAppendOrRecovery (Contributed by Uma Maheswara Rao 
G)

[umamahesh] HDFS-6285. tidy an error log inside BlockReceiver. Contributed by 
Liang Xie.

--
[...truncated 7184 lines...]
Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.465 sec - in 
org.apache.hadoop.cli.TestXAttrCLI
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.992 sec - in 
org.apache.hadoop.tools.TestJMXGet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.086 sec - in 
org.apache.hadoop.tools.TestTools
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.433 sec - in 
org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed 

Hadoop-Hdfs-trunk-Java8 - Build # 179 - Still Failing

2015-05-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/179/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7377 lines...]
[INFO] Final Memory: 52M/260M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 806107 bytes
Compression is 0.0%
Took 17 sec
Recording test results
Updating HDFS-8222
Updating YARN-3592
Updating HDFS-8037
Updating MAPREDUCE-6279
Updating YARN-3572
Updating HDFS-6576
Updating HDFS-8174
Updating HADOOP-11877
Updating HDFS-8175
Updating HDFS-8257
Updating HDFS-8321
Updating MAPREDUCE-6342
Updating YARN-2918
Updating YARN-1832
Updating HDFS-6285
Updating HDFS-3384
Updating HDFS-7998
Updating HDFS-8108
Updating HADOOP-11922
Updating HDFS-8332
Updating YARN-3589
Updating HDFS-8067
Updating MAPREDUCE-6284
Updating HDFS-8187
Updating HDFS-8349
Updating YARN-3358
Updating HDFS-8207
Updating HDFS-8209
Updating HDFS-8226
Updating MAPREDUCE-6079
Updating YARN-3169
Updating HDFS-6291
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.tracing.TestTraceAdmin.testCreateAndDestroySpanReceiver

Error Message:
Failed to load SpanReceiver org.apache.htrace.impl.LocalFileSpanReceiver
 at 
org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
 at 
org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
 at 
org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
 at 
org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2170)


Stack Trace:
org.apache.hadoop.ipc.RemoteException: Failed to load SpanReceiver 
org.apache.htrace.impl.LocalFileSpanReceiver
at 
org.apache.hadoop.tracing.SpanReceiverHost.loadInstance(SpanReceiverHost.java:171)
at 
org.apache.hadoop.tracing.SpanReceiverHost.addSpanReceiver(SpanReceiverHost.java:216)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addSpanReceiver(NameNodeRpcServer.java:2029)
at 
org.apache.hadoop.tracing.TraceAdminProtocolServerSideTranslatorPB.addSpanReceiver(TraceAdminProtocolServerSideTranslatorPB.java:81)
at 
org.apache.hadoop.tracing.TraceAdminPB$TraceAdminService$2.callBlockingMethod(TraceAdminPB.java:4580)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:636)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2174)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at 

[jira] [Created] (HDFS-8354) Update jets3t jar to latest in order to fix leading '/' problem with S3 calls.

2015-05-08 Thread Matthew Yee (JIRA)
Matthew Yee created HDFS-8354:
-

 Summary: Update jets3t jar to latest in order to fix leading '/' 
problem with S3 calls.
 Key: HDFS-8354
 URL: https://issues.apache.org/jira/browse/HDFS-8354
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Matthew Yee


In Hadoop 2.6.0, the included jets3t jar (0.9.0) contains a bug where a leading 
'/' is not included in the PUT object-copy request.  See 
https://bitbucket.org/jmurty/jets3t/issue/210/put-object-copy-x-amz-copy-source-header#comment-17910319
 

This bug has been resolved and I'm requesting that the Hadoop package be 
updated to include this fix.  Thanks!





Re: [DISCUSS] branch-1

2015-05-08 Thread Chris Nauroth
I think it would be fine to auto-close most remaining branch-1 issues
even if the branch is still formally considered alive.  I don't expect us
to create a new 1.x release unless a security vulnerability or critical
bug forces it.  Closing all non-critical issues would match with the
reality that no one is actively developing for the branch, but there would
still be the option of filing new critical bugs if someone decides that
they want a new 1.x release.

--Chris Nauroth




On 5/8/15, 10:50 AM, Karthik Kambatla ka...@cloudera.com wrote:

I would be -1 to declaring the branch dead just yet. There have been 7
commits to that branch this year. I know this isn't comparable to trunk or
branch-2, but it is not negligible either.

I propose we come up with a policy for deprecating past major release
branches. Maybe something along the lines of: deprecate branch-x when
release x+3.0.0 goes GA?



On Fri, May 8, 2015 at 10:41 AM, Allen Wittenauer a...@altiscale.com
wrote:


 May we declare this branch dead and just close bugs (but not
 necessarily concepts, ideas, etc) with won't fix?  I don't think anyone
has
 any intention of working on the 1.3 release, especially given that 1.2.1
 was Aug 2013 ….

 I guess we need a PMC member to declare a vote or whatever….





-- 
Karthik Kambatla
Software Engineer, Cloudera Inc.

http://five.sentenc.es



[jira] [Created] (HDFS-8355) Erasure Code: Refactor BlockInfo and BlockInfoUnderConstruction

2015-05-08 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-8355:
-

 Summary: Erasure Code: Refactor BlockInfo and 
BlockInfoUnderConstruction
 Key: HDFS-8355
 URL: https://issues.apache.org/jira/browse/HDFS-8355
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


A few static methods in BlockInfo can be declared in BlockInfoUnderConstruction 
so that the subclasses could implement them.
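The shape of that refactoring can be sketched abstractly; the names below are hypothetical and only illustrate moving per-type behavior from static helpers into subclass overrides, not the actual BlockInfo code:

```java
// Illustrative sketch: behavior that was a static helper becomes an
// abstract method each subclass implements its own way.
// Block, ContiguousBlock, StripedBlock are hypothetical names.
abstract class Block {
    abstract int expectedReplicas(); // each subclass supplies its own rule
}

class ContiguousBlock extends Block {
    int expectedReplicas() { return 3; } // plain replication
}

class StripedBlock extends Block {
    int expectedReplicas() { return 9; } // e.g. 6 data + 3 parity cells
}

public class RefactorSketch {
    public static void main(String[] args) {
        Block b = new StripedBlock();
        System.out.println(b.expectedReplicas()); // 9
    }
}
```

Callers then dispatch through the base type instead of branching on the concrete block class.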





Re: [DISCUSS] branch-1

2015-05-08 Thread Karthik Kambatla
Closing out the JIRAs as Auto Closed or Closed due to Inactivity seems
reasonable to me. For branch-1, we can be more aggressive. We should
probably do the same less aggressively for other branches too.

On Fri, May 8, 2015 at 11:01 AM, Arun C Murthy acmur...@apache.org wrote:

 +1

 Arun

 On May 8, 2015, at 10:41 AM, Allen Wittenauer a...@altiscale.com wrote:

 
May we declare this branch dead and just close bugs (but not
 necessarily concepts, ideas, etc) with won’t fix?  I don’t think anyone has
 any intention of working on the 1.3 release, especially given that 1.2.1
 was Aug 2013 ….
 
I guess we need a PMC member to declare a vote or whatever….
 
 




-- 
Karthik Kambatla
Software Engineer, Cloudera Inc.

http://five.sentenc.es


Re: [DISCUSS] branch-1

2015-05-08 Thread Arun C Murthy
+1

Arun

On May 8, 2015, at 10:41 AM, Allen Wittenauer a...@altiscale.com wrote:

   
   May we declare this branch dead and just close bugs (but not 
 necessarily concepts, ideas, etc) with won’t fix?  I don’t think anyone has 
 any intention of working on the 1.3 release, especially given that 1.2.1 was 
 Aug 2013 ….
 
   I guess we need a PMC member to declare a vote or whatever….
 
 



Re: [DISCUSS] branch-1

2015-05-08 Thread Karthik Kambatla
I would be -1 to declaring the branch dead just yet. There have been 7
commits to that branch this year. I know this isn't comparable to trunk or
branch-2, but it is not negligible either.

I propose we come up with a policy for deprecating past major release
branches. Maybe something along the lines of: deprecate branch-x when
release x+3.0.0 goes GA?



On Fri, May 8, 2015 at 10:41 AM, Allen Wittenauer a...@altiscale.com wrote:


 May we declare this branch dead and just close bugs (but not
 necessarily concepts, ideas, etc) with won’t fix?  I don’t think anyone has
 any intention of working on the 1.3 release, especially given that 1.2.1
 was Aug 2013 ….

 I guess we need a PMC member to declare a vote or whatever….





-- 
Karthik Kambatla
Software Engineer, Cloudera Inc.

http://five.sentenc.es


[DISCUSS] branch-1

2015-05-08 Thread Allen Wittenauer

May we declare this branch dead and just close bugs (but not 
necessarily concepts, ideas, etc) with won’t fix?  I don’t think anyone has any 
intention of working on the 1.3 release, especially given that 1.2.1 was Aug 
2013 ….

I guess we need a PMC member to declare a vote or whatever….




[jira] [Created] (HDFS-8357) Consolidate parameters of INode.CleanSubtree() into a parameter object.

2015-05-08 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-8357:


 Summary: Consolidate parameters of INode.CleanSubtree() into a 
parameter object.
 Key: HDFS-8357
 URL: https://issues.apache.org/jira/browse/HDFS-8357
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Li Lu


{{INode.CleanSubtree()}} takes multiple parameters, including 
BlockStoragePolicySuite, removedBlocks and removedINodes. These parameters are 
passed through multiple layers of the call chain.

This jira proposes to refactor them into a parameter object so that it is 
easier to make changes like HDFS-6757.
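The proposal is the classic "Introduce Parameter Object" refactoring. A minimal sketch, assuming illustrative names only (the `ReclaimContext` class and its fields are stand-ins, not the actual HDFS API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical parameter object bundling the values that were previously
// threaded individually (bsps, removedBlocks, removedINodes) through every
// level of the cleanSubtree() call chain.
public class ReclaimContext {
    private final String storagePolicySuite; // stand-in for BlockStoragePolicySuite
    private final List<String> removedBlocks = new ArrayList<>();
    private final List<String> removedINodes = new ArrayList<>();

    public ReclaimContext(String storagePolicySuite) {
        this.storagePolicySuite = storagePolicySuite;
    }

    public String storagePolicySuite() { return storagePolicySuite; }
    public List<String> removedBlocks() { return removedBlocks; }
    public List<String> removedINodes() { return removedINodes; }

    // Before: cleanSubtree(bsps, removedBlocks, removedINodes) at every layer.
    // After: one context argument; adding a new field later (as HDFS-6757
    // would need) no longer changes every signature in the chain.
    static void cleanSubtree(ReclaimContext ctx) {
        ctx.removedBlocks().add("blk_1");
        ctx.removedINodes().add("/tmp/f");
    }

    public static void main(String[] args) {
        ReclaimContext ctx = new ReclaimContext("default");
        cleanSubtree(ctx);
        // prints "1 1": one collected block and one collected inode
        System.out.println(ctx.removedBlocks().size() + " " + ctx.removedINodes().size());
    }
}
```

The trade-off is one extra allocation per call against signatures that stay stable as the set of collected results grows.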



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8359) Normalization of timeouts in InputStream and OutputStream

2015-05-08 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HDFS-8359:
---

 Summary: Normalization of timeouts in InputStream and OutputStream
 Key: HDFS-8359
 URL: https://issues.apache.org/jira/browse/HDFS-8359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs-client
Reporter: Esteban Gutierrez


This is a follow-up from HDFS-8311. As noticed by [~yzhangal] there are many 
other places where we need to provide a timeout in the InputStream and 
OutputStream (perhaps to a lesser extent in OutputStream). 
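The underlying mechanism for bounding a blocking read on a socket-backed stream is SO_TIMEOUT: once set, `read()` throws `SocketTimeoutException` instead of blocking forever. A minimal sketch of that mechanism (plain `java.net.Socket`, not HDFS client code; "normalizing" would mean applying one consistent policy like this on every stream path):

```java
import java.net.Socket;

// Illustrative only: shows the JDK primitive a stream read timeout rests on.
public class TimeoutSketch {
    // Sets a read timeout on the socket and returns the effective value.
    static int boundedReadTimeoutMillis(Socket s, int millis) throws Exception {
        s.setSoTimeout(millis); // blocking reads now fail with SocketTimeoutException after `millis`
        return s.getSoTimeout();
    }

    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            System.out.println(boundedReadTimeoutMillis(s, 60_000)); // prints 60000
        }
    }
}
```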





Re: Jenkins precommit-*-build

2015-05-08 Thread Konstantin Shvachko
Thank you Allen!

--Konst

On Tue, May 5, 2015 at 5:14 PM, Allen Wittenauer a...@altiscale.com wrote:


 HDFS, MAPREDUCE, and YARN have been migrated.

 Let me know of any issues and I’ll try to get to them as I can.  This
 should be the end of the Jenkins race conditions for our pre commits!
 *crosses fingers*






[jira] [Created] (HDFS-8356) Document missing properties in hdfs-default.xml

2015-05-08 Thread Ray Chiang (JIRA)
Ray Chiang created HDFS-8356:


 Summary: Document missing properties in hdfs-default.xml
 Key: HDFS-8356
 URL: https://issues.apache.org/jira/browse/HDFS-8356
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang


The following properties are currently not defined in hdfs-default.xml. These 
properties should either be
A) documented in hdfs-default.xml OR
B) listed as an exception (with comments, e.g. for internal use) in the 
TestHdfsConfigFields unit test





Re: REMINDER! REGISTRATIONS CLOSING 5/6!

2015-05-08 Thread Yongjun Zhang
Thanks a lot, Allen, for organizing the Hadoop Bug Bash; very happy to see
many folks face to face!

Wish you all a nice weekend!


--Yongjun


On Tue, May 5, 2015 at 9:31 PM, Allen Wittenauer a...@altiscale.com wrote:


 On May 5, 2015, at 8:10 PM, Allen Wittenauer a...@altiscale.com wrote:

* We’ll be closing registrations to the Bug Bash on May
 6th at 3PM Pacific time.  So make sure you do it soon:
 https://www.eventbrite.com/e/apache-hadoop-global-bug-bash-tickets-16507188445

 That should be *noon* Pacific time.  So just do it already, ok?

 [I can’t tell time.  Someone should buy me an Apple Watch Edition
 or something.]





[jira] [Created] (HDFS-8360) Fix FindBugs issues introduced by erasure coding

2015-05-08 Thread Kai Zheng (JIRA)
Kai Zheng created HDFS-8360:
---

 Summary: Fix FindBugs issues introduced by erasure coding
 Key: HDFS-8360
 URL: https://issues.apache.org/jira/browse/HDFS-8360
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng


As reported by 
https://issues.apache.org/jira/browse/HADOOP-11938?focusedCommentId=14534949page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14534949,
 there are quite a few FindBugs issues related to erasure coding. It would be 
good to get them resolved before the merge. Note the issues are not relevant to 
HADOOP-11938; I'm just quoting it for easy reference.





[jira] [Resolved] (HDFS-274) Add the current time to all the pages in the user interface

2015-05-08 Thread Arshad Mohammad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arshad Mohammad resolved HDFS-274.
--
Resolution: Won't Fix

The changes are very stale and no longer apply to the current Hadoop code.

 Add the current time to all the pages in the user interface
 ---

 Key: HDFS-274
 URL: https://issues.apache.org/jira/browse/HDFS-274
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Arshad Mohammad
Priority: Minor
  Labels: newbie
 Attachments: HDFS-274.patch


 Adding the current time to all of the pages, so that the current time of the 
 machine serving the page is displayed in the UI.
 As discussed on the hadoop-dev mailing list by Arkady Borkovsky:
 it would be so nice to add the CURRENT TIME to all the pages.
 For naive users like myself, understanding universal time is very difficult.  
 So knowing what the cluster thinks the current time is makes it so much easier 
 to understand when a job has actually started or ended.





[jira] [Created] (HDFS-8361) Choose SSD over DISK in block placement

2015-05-08 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-8361:
-

 Summary: Choose SSD over DISK in block placement
 Key: HDFS-8361
 URL: https://issues.apache.org/jira/browse/HDFS-8361
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


BlockPlacementPolicyDefault chooses the StorageType by iterating the given 
StorageType EnumMap in its natural order (the order in which the enum constants 
are declared).  So DISK will be chosen over SSD in One-SSD policy since DISK is 
declared before SSD as shown below.  We should choose SSD first.

{code}
public enum StorageType {
  DISK(false),
  SSD(false),
  ARCHIVE(false),
  RAM_DISK(true);

  ...
}
{code}
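The declaration-order behavior is easy to demonstrate outside HDFS: `java.util.EnumMap` iterates its keys in the natural order of the enum, i.e. declaration order, regardless of insertion order. A small standalone sketch (the enum here mirrors the declaration order above but is not the real StorageType):

```java
import java.util.EnumMap;

// Demonstrates why a placement loop that takes the first key from an
// EnumMap prefers DISK: EnumMap iteration follows enum declaration order.
public class StorageOrderDemo {
    // Same declaration order as HDFS's StorageType.
    enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

    public static void main(String[] args) {
        EnumMap<StorageType, Integer> counts = new EnumMap<>(StorageType.class);
        counts.put(StorageType.SSD, 1);   // One-SSD policy: one SSD replica...
        counts.put(StorageType.DISK, 2);  // ...and the remaining replicas on DISK.

        // First key returned is DISK even though SSD was inserted first,
        // so "pick the first available type" chooses DISK over SSD.
        StorageType first = counts.keySet().iterator().next();
        System.out.println(first); // prints DISK
    }
}
```

Fixing the preference therefore means choosing by an explicit priority rather than relying on the map's iteration order.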






[jira] [Resolved] (HDFS-8360) Fix FindBugs issues introduced by erasure coding

2015-05-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HDFS-8360.

Resolution: Duplicate

 Fix FindBugs issues introduced by erasure coding
 

 Key: HDFS-8360
 URL: https://issues.apache.org/jira/browse/HDFS-8360
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng

 As reported by 
 https://issues.apache.org/jira/browse/HADOOP-11938?focusedCommentId=14534949page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14534949,
  there are quite a few FindBugs issues related to erasure coding. It would be 
 good to get them resolved before the merge. Note the issues are not relevant 
 to HADOOP-11938; I'm just quoting it for easy reference.


