[jira] [Created] (HADOOP-11732) Make the KMS related log file name consistent with other hadoop processes

2015-03-19 Thread nijel (JIRA)
nijel created HADOOP-11732:
--

 Summary: Make the KMS related log file name consistent with other 
hadoop processes
 Key: HADOOP-11732
 URL: https://issues.apache.org/jira/browse/HADOOP-11732
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Reporter: nijel
Assignee: nijel
Priority: Minor


Now the kms log file names are "kms.log" and "kms-audit.log".
Preferably, KMS can also use the same log file name pattern as other processes:
"hadoop-<user>-kms-<hostname>.log"
"hadoop-<user>-kms-<hostname>-audit.log"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: Save the Date: Apache Hadoop Bug Bash / May 8th!

2015-03-19 Thread Brahma Reddy Battula
+1
I feel we can include:
a) Road map
b) Guidelines (if possible, with concrete examples)
c) Open Discussion (even virtual participants can express their ideas/doubts)



Thanks & Regards
Brahma Reddy Battula

From: Tsuyoshi Ozawa [oz...@apache.org]
Sent: Thursday, March 19, 2015 11:36 PM
To: hdfs-...@hadoop.apache.org
Cc: common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org
Subject: Re: Save the Date: Apache Hadoop Bug Bash / May 8th!

Hi Allen,

Thank you for the great suggestion. Let's do this!

Thanks,
- Tsuyoshi

On Fri, Mar 20, 2015 at 2:34 AM, Allen Wittenauer  wrote:
>
> Hi folks,
>
> There are ~6,000 Hadoop JIRA issues that have gone unaddressed, 
> including ~900 with patches waiting to be reviewed.  Among other things, this 
> lack of attention to our backlog is making the Hadoop project very unfriendly 
> to contributors--which is ultimately very unhealthy for the project.
>
> In hopes of resetting our community habits a bit, a bunch of us are 
> organizing a "bug bash" of sorts on Friday May 8th, the day after HBase Con.  
> We're thinking about holding an in-person event someplace in the San 
> Francisco Bay Area... but we definitely want this to be a virtual event for 
> participants around the world!
>
> As details form, we'll let you know.  But we wanted to send an 
> initial notice so you can save the date, and also share your ideas on how we 
> can make this event successful.
>
>
> Thanks!


[jira] [Created] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-19 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11731:
-

 Summary: Rework the changelog and releasenotes
 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


The current way we generate these build artifacts is awful.  Plus they are ugly 
and, in the case of the release notes, it is very hard to pick out what is 
important.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11647) Reed-Solomon ErasureCoder

2015-03-19 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HADOOP-11647.

   Resolution: Fixed
Fix Version/s: HDFS-7285
 Hadoop Flags: Reviewed

> Reed-Solomon ErasureCoder
> -
>
> Key: HADOOP-11647
> URL: https://issues.apache.org/jira/browse/HADOOP-11647
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11647-v2.patch, HADOOP-11647-v4.patch, 
> HADOOP-11647-v5.patch, HADOOP-11647-v6.patch, HDFS-7664-v1.patch
>
>
> This is to implement a Reed-Solomon ErasureCoder using the API defined in 
> HADOOP-11646. It supports plugging in a concrete RawErasureCoder via 
> configuration, using either the JRSErasureCoder added in HDFS-7418 or the 
> IsaRSErasureCoder added in HDFS-7338.
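
As a rough sketch of what configuration-driven selection looks like (the key
name below is illustrative, not the actual HDFS-7285 configuration surface):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Sketch: pick the raw coder implementation from configuration so either
// JRSErasureCoder or IsaRSErasureCoder can be plugged in without code changes.
Configuration conf = new Configuration();
conf.setClass("io.erasurecode.codec.rs.rawcoder",   // hypothetical key
    JRSErasureCoder.class, RawErasureCoder.class);
RawErasureCoder coder = ReflectionUtils.newInstance(
    conf.getClass("io.erasurecode.codec.rs.rawcoder",
        JRSErasureCoder.class, RawErasureCoder.class), conf);
{code}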



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7638) visibility of the security utils and things like getCanonicalService.

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7638.
--
Resolution: Later

> visibility of the security utils and things like getCanonicalService.
> -
>
> Key: HADOOP-7638
> URL: https://issues.apache.org/jira/browse/HADOOP-7638
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.24.0
>Reporter: John George
>Priority: Minor
>
> It would be a good idea to file an additional jira to take another look at 
> the visibility of the security utils and things like getCanonicalService. 
> It doesn't seem like these should be fully public. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7822) Hadoop startup script has a race condition : this causes failures in datanodes status and stop commands

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7822.
--
Resolution: Fixed

This has probably been fixed as part of the shell script rewrite.

> Hadoop startup script has a race condition : this causes failures in 
> datanodes status and stop commands
> ---
>
> Key: HADOOP-7822
> URL: https://issues.apache.org/jira/browse/HADOOP-7822
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.1, 0.20.2, 0.20.205.0
>Reporter: Rahul Jain
>
> The symptoms are the following:
> a) start-all.sh is able to start both hadoop dfs and map-reduce processes, 
> assuming same grid nodes are used for dfs and map-reduce
> b) stop-all.sh stops map-reduce but fails to stop dfs processes (datanode 
> tasks on grid nodes). Instead, the warning message 'no datanode to stop' is 
> seen for all data nodes.
> c) The 'pid' files for datanode processes do not exist; therefore, the only 
> way to stop datanode processes is to manually execute kill commands.
> The root cause of the issue appears to be in hadoop startup scripts. 
> start-all.sh is really two parts:
> 1. start-dfs.sh : Start namenode and datanodes
> 2. start-mapred.sh: Jobtracker and task trackers.
> In this case, running start-dfs.sh behaved as expected and created the pid 
> files for the different datanodes. However, the start-mapred.sh script ended 
> up forcing another rsync from master to slaves, effectively wiping out the 
> pid files stored under the "pid" directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7692) hadoop single node setup script to create mapred dir

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7692.
--
Resolution: Won't Fix

> hadoop single node setup script to create mapred dir
> 
>
> Key: HADOOP-7692
> URL: https://issues.apache.org/jira/browse/HADOOP-7692
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 1.1.0
>Reporter: Giridharan Kesavan
>
> The hadoop single node setup script should create the /mapred directory and 
> chown it to mapred:mapred; the JobTracker requires this directory for startup.
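
A minimal sketch of the missing step, shown here through the FileSystem API
rather than the setup script itself (path and owner as described above):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateMapredDir {
  public static void main(String[] args) throws Exception {
    // Sketch: create /mapred and hand it to mapred:mapred so the
    // JobTracker can start up.
    FileSystem fs = FileSystem.get(new Configuration());
    Path mapredDir = new Path("/mapred");
    if (!fs.exists(mapredDir)) {
      fs.mkdirs(mapredDir);
    }
    fs.setOwner(mapredDir, "mapred", "mapred");
  }
}
{code}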



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7643) Bump up the version of aspectj

2015-03-19 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-7643.
--
Resolution: Won't Fix

> Bump up the version of aspectj
> --
>
> Key: HADOOP-7643
> URL: https://issues.apache.org/jira/browse/HADOOP-7643
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 0.20.205.0, 1.1.0
>Reporter: Kihwal Lee
>Priority: Minor
>
> When the fault injection target is enabled, aspectj fails with the following 
> message:
> "Can't parameterize a member of non-generic type:"
> This is fixed by upgrading aspectj. I tested with 1.6.11 and it worked.
> It will also apply to trunk, but I believe trunk has other problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11730) The broken s3n read retry logic causes a wrong output being committed

2015-03-19 Thread Takenori Sato (JIRA)
Takenori Sato created HADOOP-11730:
--

 Summary: The broken s3n read retry logic causes a wrong output 
being committed
 Key: HADOOP-11730
 URL: https://issues.apache.org/jira/browse/HADOOP-11730
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.6.0
 Environment: HDP 2.2
Reporter: Takenori Sato
Assignee: Takenori Sato


s3n attempts to read again when it encounters an IOException during a read. 
But the current logic does not reopen the connection; thus, the retry ends up 
as a no-op, and the wrong (truncated) output is committed.
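
A sketch of the intended behavior (readWithRetry and reopen are illustrative
names, not the actual NativeS3FsInputStream code): on failure, reopen the
connection at the current offset before retrying, instead of retrying the
dead stream.

{code:java}
// Sketch only: retry by reopening at the current position. 'in', 'key',
// and 'pos' stand for the wrapped stream, the object key, and the byte
// offset already consumed.
private int readWithRetry(byte[] b, int off, int len) throws IOException {
  try {
    return in.read(b, off, len);
  } catch (IOException e) {
    LOG.info("Received IOException while reading '" + key + "', reopening", e);
    in = reopen(key, pos);  // hypothetical helper: new connection at offset pos
    return in.read(b, off, len);
  }
}
{code}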

Here's a stack trace as an example.

{quote}
2015-03-13 20:17:24,835 [TezChild] INFO  
org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor - 
Starting output org.apache.tez.mapreduce.output.MROutput@52008dbd to vertex 
scope-12
2015-03-13 20:17:24,866 [TezChild] DEBUG 
org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - Released 
HttpMethod as its response data stream threw an exception
org.apache.http.ConnectionClosedException: Premature end of Content-Length 
delimited message body (expected: 296587138; received: 155648
at 
org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:184)
at 
org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
at 
org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78)
at 
org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146)
at 
org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:145)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
at 
org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at 
org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:259)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
at 
org.apache.tez.mapreduce.lib.MRReaderMapReduce.next(MRReaderMapReduce.java:116)
at 
org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POSimpleTezLoad.getNextTuple(POSimpleTezLoad.java:106)
at 
org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
at 
org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:246)
at 
org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
at 
org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POFilter.getNextTuple(POFilter.java:91)
at 
org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
at 
org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:117)
at 
org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:313)
at 
org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:192)
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
at 
org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-03-13 20:17:24,867 [TezChild] INFO  
org.apache.hadoop.fs.s3native.NativeS3FileSystem - Received IOException while 
reading 'user/hadoop/tsato/readlarge/input/clou

[jira] [Resolved] (HADOOP-11707) Add factory to create raw erasure coder

2015-03-19 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HADOOP-11707.

  Resolution: Fixed
Target Version/s: HDFS-7285
Hadoop Flags: Reviewed

> Add factory to create raw erasure coder
> ---
>
> Key: HADOOP-11707
> URL: https://issues.apache.org/jira/browse/HADOOP-11707
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-11707-v1.patch
>
>
> We have separate {{RawErasureEncoder}} and {{RawErasureDecoder}} interfaces, 
> which simplifies the implementation of raw coders. However, this requires 
> configuring the raw encoder and decoder individually for an {{ErasureCoder}}, 
> which isn't convenient. To simplify the configuration, we introduce a coder 
> factory that groups an encoder and a decoder together, so only the factory 
> class needs to be configured.
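
A sketch of the idea (the names below follow the description; the actual
interface in the attached patch may differ):

{code:java}
// One configurable factory yields both halves of a codec, so a single
// class name in the configuration replaces two separate entries.
public interface RawErasureCoderFactory {
  RawErasureEncoder createEncoder();
  RawErasureDecoder createDecoder();
}

// Hypothetical concrete factory for the Java Reed-Solomon coder pair.
public class JRSErasureCoderFactory implements RawErasureCoderFactory {
  public RawErasureEncoder createEncoder() { return new JRSRawEncoder(); }
  public RawErasureDecoder createDecoder() { return new JRSRawDecoder(); }
}
{code}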



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-10037) s3n read truncated, but doesn't throw exception

2015-03-19 Thread Takenori Sato (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takenori Sato resolved HADOOP-10037.

Resolution: Fixed

The issue that reopened this turned out to be a separate issue.

> s3n read truncated, but doesn't throw exception 
> 
>
> Key: HADOOP-10037
> URL: https://issues.apache.org/jira/browse/HADOOP-10037
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.0.0-alpha
> Environment: Ubuntu Linux 13.04 running on Amazon EC2 (cc2.8xlarge)
>Reporter: David Rosenstrauch
> Fix For: 2.6.0
>
> Attachments: S3ReadFailedOnTruncation.html, S3ReadSucceeded.html
>
>
> For months now we've been experiencing frequent data truncation issues when 
> reading from S3 using the s3n:// protocol.  I was finally able to gather some 
> debugging output on the issue in a job I ran last night, and so can finally 
> file a bug report.
> The job I ran last night was on a 16-node cluster (all of them AWS EC2 
> cc2.8xlarge machines, running Ubuntu 13.04 and Cloudera CDH4.3.0).  The job 
> was a Hadoop streaming job, which reads through a large number (i.e., 
> ~55,000) of files on S3, each of them approximately 300K bytes in size.
> All of the files contain 46 columns of data in each record.  But I added in 
> an extra check in my mapper code to count and verify the number of columns in 
> every record - throwing an error and crashing the map task if the column 
> count is wrong.
> If you look in the attached task logs, you'll see 2 attempts on the same 
> task.  The first one fails due to truncated data (i.e., my job intentionally 
> fails the map task because the current record fails the column count check). 
> The task then gets retried on a different machine and runs to a successful 
> completion.
> You can see further evidence of the truncation further down in the task logs, 
> where it displays the count of the records read:  the failed task says 32953 
> records read, while the successful task says 63133.
> Any idea what the problem might be here and/or how to work around it?  This 
> issue is a very common occurrence on our clusters.  E.g., in the job I ran 
> last night, I had already encountered 8 such failures before I went to bed, 
> and the job was only 10% complete (~25,000 out of ~250,000 tasks).
> I realize that it's common for I/O errors to occur - possibly even frequently 
> - in a large Hadoop job.  But I would think that if an I/O failure (like a 
> truncated read) did occur, something in the underlying infrastructure code 
> (i.e., either in NativeS3FileSystem or in jets3t) should detect the error and 
> throw an IOException accordingly.  It shouldn't be up to the calling code to 
> detect such failures, IMO.
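
For illustration, the kind of check being asked for might look like the
sketch below. This is not how NativeS3FileSystem actually does it, just the
shape of a length check that fails loudly on a short stream:

{code:java}
import java.io.EOFException;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: track bytes read against the expected Content-Length and throw
// instead of silently returning a truncated stream.
class LengthCheckingInputStream extends FilterInputStream {
  private final long expected;
  private long bytesRead;

  LengthCheckingInputStream(InputStream in, long expectedLength) {
    super(in);
    this.expected = expectedLength;
  }

  @Override
  public int read() throws IOException {
    byte[] one = new byte[1];
    int n = read(one, 0, 1);
    return (n == -1) ? -1 : (one[0] & 0xff);
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    int n = super.read(b, off, len);
    if (n > 0) {
      bytesRead += n;
    } else if (n == -1 && bytesRead < expected) {
      throw new EOFException("Truncated read: got " + bytesRead
          + " of " + expected + " bytes");
    }
    return n;
  }
}
{code}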



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Save the Date: Apache Hadoop Bug Bash / May 8th!

2015-03-19 Thread Tsuyoshi Ozawa
Hi Allen,

Thank you for the great suggestion. Let's do this!

Thanks,
- Tsuyoshi

On Fri, Mar 20, 2015 at 2:34 AM, Allen Wittenauer  wrote:
>
> Hi folks,
>
> There are ~6,000 Hadoop JIRA issues that have gone unaddressed, 
> including ~900 with patches waiting to be reviewed.  Among other things, this 
> lack of attention to our backlog is making the Hadoop project very unfriendly 
> to contributors--which is ultimately very unhealthy for the project.
>
> In hopes of resetting our community habits a bit, a bunch of us are 
> organizing a "bug bash" of sorts on Friday May 8th, the day after HBase Con.  
> We're thinking about holding an in-person event someplace in the San 
> Francisco Bay Area... but we definitely want this to be a virtual event for 
> participants around the world!
>
> As details form, we'll let you know.  But we wanted to send an 
> initial notice so you can save the date, and also share your ideas on how we 
> can make this event successful.
>
>
> Thanks!


Save the Date: Apache Hadoop Bug Bash / May 8th!

2015-03-19 Thread Allen Wittenauer

Hi folks,

There are ~6,000 Hadoop JIRA issues that have gone unaddressed, 
including ~900 with patches waiting to be reviewed.  Among other things, this 
lack of attention to our backlog is making the Hadoop project very unfriendly 
to contributors--which is ultimately very unhealthy for the project.

In hopes of resetting our community habits a bit, a bunch of us are 
organizing a "bug bash" of sorts on Friday May 8th, the day after HBase Con.  
We're thinking about holding an in-person event someplace in the San Francisco 
Bay Area... but we definitely want this to be a virtual event for participants 
around the world!

As details form, we'll let you know.  But we wanted to send an initial 
notice so you can save the date, and also share your ideas on how we can make 
this event successful. 


Thanks!

[jira] [Resolved] (HADOOP-7741) Maven related JIRAs to backport to 0.23

2015-03-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7741.
--
Resolution: Fixed

> Maven related JIRAs to backport to 0.23
> ---
>
> Key: HADOOP-7741
> URL: https://issues.apache.org/jira/browse/HADOOP-7741
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
>
> HADOOP-7624
> HDFS-2294
> MAPREDUCE-3014
> HDFS-2322
> HADOOP-7642
> MAPREDUCE-3171
> HADOOP-7737
> MAPREDUCE-3177
> MAPREDUCE-3003
> HADOOP-7590
> MAPREDUCE-3024
> HADOOP-7538



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #139

2015-03-19 Thread Apache Jenkins Server
See 

Changes:

[harsh] MAPREDUCE-5807. Print usage for TeraSort job. Contributed by Rohith.

[wheat9] HDFS-7953. NN Web UI fails to navigate to paths that contain #. 
Contributed by kanaka kumar avvaru.

[arp] HDFS-7948. TestDataNodeHotSwapVolumes#testAddVolumeFailures failed on 
Windows. (Contributed by Xiaoyu Yao)

[jlowe] MAPREDUCE-6277. Job can post multiple history files if attempt loses 
connection to the RM. Contributed by Chang Li

[arp] HDFS-7950. Fix TestFsDatasetImpl#testAddVolumes failure on Windows. 
(Contributed by Xiaoyu Yao)

[arp] HDFS-7951. Fix NPE for 
TestFsDatasetImpl#testAddVolumeFailureReleasesInUseLock on Linux. (Contributed 
by Xiaoyu Yao)

[arp] Fix CHANGES.txt for HDFS-7722.

[wheat9] HDFS-7697. Mark the PB OIV tool as experimental. Contributed by Lei 
(Eddy) Xu.

[arp] HDFS-7914. TestJournalNode#testFailToStartWithBadConfig fails when the 
default dfs.journalnode.http-address port 8480 is in use. (Contributed by 
Xiaoyu Yao)

[wheat9] HDFS-7945. The WebHdfs system on DN does not honor the length 
parameter. Contributed by Haohui Mai.

[kasha] YARN-3351. AppMaster tracking URL is broken in HA. (Anubhav Dhoot via 
kasha)

[cmccabe] HDFS-7054. Make DFSOutputStream tracing more fine-grained (cmccabe)

[jing9] HDFS-7943. Append cannot handle the last block with length greater than 
the preferred block size. Contributed by Jing Zhao.

[cmccabe] HDFS-7929. inotify unable fetch pre-upgrade edit log segments once 
upgrade starts (Zhe Zhang via Colin P. McCabe)

[jing9] HDFS-7587. Edit log corruption can happen if append fails with a quota 
violation. Contributed by Jing Zhao.

[cmccabe] HADOOP-9329. document native build dependencies in BUILDING.txt 
(Vijay Bhat via Colin P. McCabe)

[wheat9] HADOOP-10703. HttpServer2 creates multiple authentication filters. 
Contributed by Benoy Antony.

[vinayakumarb] HDFS-7869. Update action param from 'start' to 'prepare' in 
rolling upgrade ( Contributed by J.Andreina)

[vinayakumarb] HDFS-7867. Update action param from 'start' to 'prepare' in 
rolling upgrade ( Contributed by J.Andreina) Updated JIRA id.

[devaraj] YARN-3357. Move TestFifoScheduler to FIFO package. Contributed by 
Rohith

--
[...truncated 3527 lines...]

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S
---

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestMiniKdc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.466 sec - in 
org.apache.hadoop.minikdc.TestMiniKdc
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.557 sec - in 
org.apache.hadoop.minikdc.TestChangeOrgNameAndDomain

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ hadoop-minikdc ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-source-plug

[jira] [Created] (HADOOP-11729) Fix link to cgroups doc in site.xml

2015-03-19 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-11729:
-

 Summary: Fix link to cgroups doc in site.xml
 Key: HADOOP-11729
 URL: https://issues.apache.org/jira/browse/HADOOP-11729
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor


s/NodeManagerCGroups/NodeManagerCgroups/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)