[jira] [Created] (HADOOP-14166) Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not used

2017-03-08 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HADOOP-14166:
---

 Summary: Reset the DecayRpcScheduler AvgResponseTime metric to 
zero when queue is not used
 Key: HADOOP-14166
 URL: https://issues.apache.org/jira/browse/HADOOP-14166
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


{noformat}
 "name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.65110",
"modelerType" : "DecayRpcSchedulerMetrics2.ipc.65110",
"tag.Context" : "ipc.65110",
"tag.Hostname" : "BLR106556",
"DecayedCallVolume" : 3,
"UniqueCallers" : 1,
"Caller(root).Volume" : 266,
"Caller(root).Priority" : 3,
"Priority.0.AvgResponseTime" : 6.151201023385511E-5,
"Priority.1.AvgResponseTime" : 0.0,
"Priority.2.AvgResponseTime" : 0.0,
"Priority.3.AvgResponseTime" : 1.184686336544601,
"Priority.0.CompletedCallVolume" : 0,
"Priority.1.CompletedCallVolume" : 0,
"Priority.2.CompletedCallVolume" : 0,
"Priority.3.CompletedCallVolume" : 2,
"CallVolume" : 266
{noformat}

"Priority.0.AvgResponseTime" is always "6.151201023385511E-5" even the queue is 
not used for long time.

{code}
if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
  if (enableDecay) {
    final double decayed = decayFactor * lastAvg + averageResponseTime;
    LOG.info("Decayed " + i + " time " + decayed);
    responseTimeAvgInLastWindow.set(i, decayed);
  } else {
    responseTimeAvgInLastWindow.set(i, averageResponseTime);
  }
}
{code}

We should reset it to zero when the above condition is false.
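A minimal sketch of that reset, reusing the loop variables from the snippet 
above (an illustration of the proposal, not the committed patch):

{code}
if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
  if (enableDecay) {
    final double decayed = decayFactor * lastAvg + averageResponseTime;
    LOG.info("Decayed " + i + " time " + decayed);
    responseTimeAvgInLastWindow.set(i, decayed);
  } else {
    responseTimeAvgInLastWindow.set(i, averageResponseTime);
  }
} else {
  // The queue saw no traffic and the decayed average fell below PRECISION:
  // publish zero instead of the stale value from the last active window.
  responseTimeAvgInLastWindow.set(i, 0);
}
{code}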



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-13037) Refactor Azure Data Lake Store as an independent FileSystem

2017-03-08 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas reopened HADOOP-13037:


Reopening for backport to branch-2. Talked to [~djp], and will backport to 
branch-2.8 if there are no objections.

> Refactor Azure Data Lake Store as an independent FileSystem
> ---
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13037-001.patch, HADOOP-13037-002.patch, 
> HADOOP-13037-003.patch, HADOOP-13037-004.patch, HADOOP-13037.005.patch, 
> HADOOP-13037.006.patch, HADOOP-13037 Proposal.pdf
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a WebHDFS REST 
> interface. The client will access the ADLS store using WebHDFS REST APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-03-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-14076.
-
Resolution: Later

This can be done client side.

> Allow Configuration to be persisted given path to file
> --
>
> Key: HADOOP-14076
> URL: https://issues.apache.org/jira/browse/HADOOP-14076
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> Currently Configuration has the following methods for persistence:
> {code}
>   public void writeXml(OutputStream out) throws IOException {
>   public void writeXml(Writer out) throws IOException {
> {code}
> Adding an API for persisting to a file, given its path, would be useful:
> {code}
>   public void writeXml(String path) throws IOException {
> {code}
> Background: I recently worked on exporting Configuration to a file using JNI.
> Without the proposed API, I resorted to a trick such as the following:
> http://www.kfu.com/~nsayer/Java/jni-filedesc.html
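A minimal sketch of the proposed overload, added to {{Configuration}} and 
delegating to the existing {{writeXml(OutputStream)}} (an illustration of the 
idea, not a committed patch):

{code}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical convenience overload for org.apache.hadoop.conf.Configuration.
public void writeXml(String path) throws IOException {
  // try-with-resources guarantees the stream is closed even if writing fails.
  try (OutputStream out = new FileOutputStream(path)) {
    writeXml(out);
  }
}
{code}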



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14165) Add S3Guard.dirListingUnion in S3AFileSystem#listFiles, listLocatedStatus

2017-03-08 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-14165:
-

 Summary: Add S3Guard.dirListingUnion in S3AFileSystem#listFiles, 
listLocatedStatus
 Key: HADOOP-14165
 URL: https://issues.apache.org/jira/browse/HADOOP-14165
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Rajesh Balamohan
Priority: Minor


{{S3Guard::dirListingUnion}} merges information from the backing store and 
DynamoDB to create a consistent view. This needs to be added to 
{{S3AFileSystem::listFiles}} and {{S3AFileSystem::listLocatedStatus}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: H9 build slave is bad

2017-03-08 Thread Allen Wittenauer

> On Mar 8, 2017, at 2:53 PM, Anu Engineer  wrote:
> 
> Agreed, but I was under the impression that we would kill the container under 
> OOM conditions and not the whole base machine.

We do not run our docker containers under a cgroup.



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: H9 build slave is bad

2017-03-08 Thread Anu Engineer
Agreed, but I was under the impression that we would kill the container under 
OOM conditions and not the whole base machine.

Thanks
Anu


On 3/8/17, 2:41 PM, "Allen Wittenauer"  wrote:

>
>> On Mar 8, 2017, at 2:21 PM, Anu Engineer  wrote:
>> 
>> Hi Allen,
>>> Likely something in the HDFS-7240 branch or with this patch that's 
>>> doing Bad Things (tm).
>> 
>> Thanks for bringing this to my attention, but I am surprised that a mvn 
>> command is able to kill a test machine.
>
> FWIW, it’s pretty trivial to tip Linux over under low-memory conditions….  



Re: About 2.7.4 Release

2017-03-08 Thread Allen Wittenauer

> On Mar 8, 2017, at 1:54 PM, Allen Wittenauer  
> wrote:
> 
> This is already possible:
> * don’t use --asfrelease
> * use --sign, --native, and, if appropriate for your platform, 
> --docker and --dockercache


Oh yeah, I forgot about this:

https://effectivemachines.com/2016/08/16/building-your-own-apache-hadoop-distribution/



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: H9 build slave is bad

2017-03-08 Thread Allen Wittenauer

> On Mar 8, 2017, at 2:21 PM, Anu Engineer  wrote:
> 
> Hi Allen,
>>  Likely something in the HDFS-7240 branch or with this patch that's 
>> doing Bad Things (tm).
> 
> Thanks for bringing this to my attention, but I am surprised that a mvn 
> command is able to kill a test machine.

FWIW, it’s pretty trivial to tip Linux over under low-memory conditions….  
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: H9 build slave is bad

2017-03-08 Thread Anu Engineer
Hi Allen,
>   Likely something in the HDFS-7240 branch or with this patch that's 
> doing Bad Things (tm).

Thanks for bringing this to my attention, but I am surprised that a mvn command 
is able to kill a test machine.

I have pasted the call stack from the issue that you pointed out as the root 
cause. Can you please help me understand what you think the root cause is? 
If anyone can give me pointers on how to access the H9 machine, I would love to 
take a look.

From the console logs, I am not able to see why this run can kill the H9 
machine. (Even if we assume that the test is able to kill the container, I 
doubt that rendering the H9 machine inoperable is related to the patch.)
Let us for a second assume that what you are saying is true and HDFS-7240 is 
somehow killing these machines: why is this happening only on H9? Are HDFS runs 
happening only on H9? 

Thanks
Anu

PS: We are able to run this on local machines without any issues. I will try to 
run this inside a Docker container just to make sure that these tests are not 
doing something weird.

Stacks from the Console Log: 

mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-HDFS-7240-patch-1 
-Ptest-patch -Pparallel-tests -P!shelltest -Pnative -Drequire.libwebhdfs 
-Drequire.snappy -Drequire.openssl -Drequire.fuse -Drequire.test.libhadoop 
-Pyarn-ui clean test -fae > 
/testptch/hadoop/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt 
2>&1
FATAL: command execution failed
java.io.IOException: Backing channel 'H9' is disconnected.
at 
hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:191)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:256)
at com.sun.proxy.$Proxy104.isAlive(Unknown Source)
at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1043)
at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1035)
at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:154)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:108)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:65)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:779)
at hudson.model.Build$BuildExecution.build(Build.java:205)
at hudson.model.Build$BuildExecution.doRun(Build.java:162)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
at hudson.model.Run.execute(Run.java:1728)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)
Caused by: hudson.remoting.Channel$OrderlyShutdown
at hudson.remoting.Channel$CloseCommand.execute(Channel.java:1121)
at hudson.remoting.Channel$1.handle(Channel.java:526)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:83)
Caused by: Command close created at
at hudson.remoting.Command.(Command.java:59)
at hudson.remoting.Channel$CloseCommand.(Channel.java:1115)
at hudson.remoting.Channel$CloseCommand.(Channel.java:1113)
at hudson.remoting.Channel.close(Channel.java:1273)
at hudson.remoting.Channel.close(Channel.java:1255)
at hudson.remoting.Channel$CloseCommand.execute(Channel.java:1120)
... 2 more
Build step 'Execute shell' marked build as failure
ERROR: Step 'Archive the artifacts' failed: no workspace for 
PreCommit-HDFS-Build #18591
No JDK named 'jdk-1.8.0' found
[description-setter] Description set: HDFS-11451
ERROR: Step 'Publish JUnit test result report' failed: no workspace for 
PreCommit-HDFS-Build #18591
No JDK named 'jdk-1.8.0' found
Finished: FAILURE
These tests are indeed passing on the local boxes, so 


On 3/8/17, 12:04 PM, "Allen Wittenauer"  wrote:

>
>> On Mar 8, 2017, at 9:34 AM, Sean Busbey  wrote:
>> 
>> Is this HADOOP-13951?
>
>   Almost certainly.  Here's the run that broke it again:
>
>https://builds.apache.org/job/PreCommit-HDFS-Build/18591
>
>   Likely something in the HDFS-7240 branch or with this patch that's 
> doing Bad Things (tm).
>
>
>
>-
>To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>



Re: About 2.7.4 Release

2017-03-08 Thread Allen Wittenauer

> On Mar 8, 2017, at 10:55 AM, Marton Elek  wrote:
> 
> I think the main point here is the testing of the release script, not the 
> creation of the official release.

… except the Hadoop PMC was doing exactly this from 2.3.0 up until 
recently. Which means we have a few years’ worth of releases that are 
effectively untrustworthy despite being signed.  One of the (many) reasons I 
rewrote the release process was to get Hadoop back in line with ASF policy.  
Given the massive turnover in committers, I don’t want us to repeat the same 
mistakes (like we usually do).

> I think there should be an option to configure the release tool to use a 
> forked github repo and/or a private playground nexus instead of official 
> apache repos. In this case it would be easy to test regularly the tool, even 
> by a non-committer (or even from Jenkins). But it would be just a smoketest 
> of the release script…

This is already possible:
* don’t use --asfrelease
* use --sign, --native, and, if appropriate for your platform, 
--docker and --dockercache


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14062) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-03-08 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reopened HADOOP-14062:
--

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: HADOOP-14062
> URL: https://issues.apache.org/jira/browse/HADOOP-14062
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Assignee: Steven Rand
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14062.001.patch, HADOOP-14062.002.patch, 
> HADOOP-14062.003.patch, HADOOP-14062-branch-2.8.0.004.patch, 
> HADOOP-14062-branch-2.8.0.005.patch, HADOOP-14062-branch-2.8.0.005.patch, 
> HADOOP-14062-branch-2.8.0.dummy.patch, yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 3ms.
> java.io.EOFException: End of File Exception between local host is: 
> "/"; destination host is: "":8030; 
> : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)

Re: H9 build slave is bad

2017-03-08 Thread Allen Wittenauer

> On Mar 8, 2017, at 12:04 PM, Allen Wittenauer  
> wrote:
> 
> 
>> On Mar 8, 2017, at 9:34 AM, Sean Busbey  wrote:
>> 
>> Is this HADOOP-13951?
> 
>   Almost certainly.  Here's the run that broke it again:
> 
> https://builds.apache.org/job/PreCommit-HDFS-Build/18591
> 
>   Likely something in the HDFS-7240 branch or with this patch that's 
> doing Bad Things (tm).


Or the clean script ran and stomped all over a running job.



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: H9 build slave is bad

2017-03-08 Thread Allen Wittenauer

> On Mar 8, 2017, at 9:34 AM, Sean Busbey  wrote:
> 
> Is this HADOOP-13951?

Almost certainly.  Here's the run that broke it again:

https://builds.apache.org/job/PreCommit-HDFS-Build/18591

Likely something in the HDFS-7240 branch or with this patch that's 
doing Bad Things (tm).



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14164) Update the skin of maven-site during doc generation

2017-03-08 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-14164:
-

 Summary: Update the skin of maven-site during doc generation
 Key: HADOOP-14164
 URL: https://issues.apache.org/jira/browse/HADOOP-14164
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Elek, Marton
Assignee: Elek, Marton


Together with the improvements of the hadoop site (HADOOP-14163), I suggest 
improving the theme used by the maven-site plugin for all the hadoop 
documentation.

One possible option is using the reflow skin:

http://andriusvelykis.github.io/reflow-maven-skin/



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator

2017-03-08 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-14163:
-

 Summary: Refactor existing hadoop site to use more usable static 
website generator
 Key: HADOOP-14163
 URL: https://issues.apache.org/jira/browse/HADOOP-14163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: site
Reporter: Elek, Marton
Assignee: Elek, Marton


From the dev mailing list:

"Publishing can be attacked via a mix of scripting and revamping the darned 
website. Forrest is pretty bad compared to the newer static site generators out 
there (e.g. need to write XML instead of markdown, it's hard to review a 
staging site because of all the absolute links, hard to customize, did I 
mention XML?), and the look and feel of the site is from the 00s. We don't 
actually have that much site content, so it should be possible to migrate to a 
new system."

This issue is about finding a solution to migrate the old site to a modern 
static site generator using a more contemporary theme.

Goals: 
 * existing links should work (or at least be redirected)
 * It should be easy to add more content required by a release automatically 
(most probably by creating separate markdown files)




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-03-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/

[Mar 7, 2017 6:12:35 PM] (arp) HDFS-11477. Simplify file IO profiling 
configuration. Contributed by
[Mar 7, 2017 7:41:05 PM] (arp) HDFS-11508. Fix bind failure in SimpleTCPServer 
& Portmap where bind
[Mar 7, 2017 7:58:48 PM] (templedf) YARN-6287. 
RMCriticalThreadUncaughtExceptionHandler.rmContext should be
[Mar 7, 2017 9:34:46 PM] (rkanter) MAPREDUCE-6839. TestRecovery.testCrashed 
failed (pairg via rkanter)
[Mar 7, 2017 9:47:52 PM] (rkanter) YARN-6275. Fail to show real-time tracking 
charts in SLS (yufeigu via
[Mar 7, 2017 10:55:52 PM] (liuml07) HADOOP-14150. Implement getHomeDirectory() 
method in
[Mar 8, 2017 6:34:30 AM] (sunilg) YARN-6207. Move application across queues 
should handle delayed event




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/diff-compile-javac-root.txt
  [176K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [8.0K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/339/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-14162) Improve release scripts to automate missing steps

2017-03-08 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-14162:
-

 Summary: Improve release scripts to automate missing steps
 Key: HADOOP-14162
 URL: https://issues.apache.org/jira/browse/HADOOP-14162
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Elek, Marton
Assignee: Elek, Marton


According to the conversation on the dev mailing list, one pain point of 
making a release is that even with the latest create-release script a lot of 
steps are not automated.

This Jira is about creating a script which guides the release manager through 
the process:

Goals:
  * It would work even without the apache infrastructure: with custom 
configuration (forked repositories/alternative nexus), it would be possible to 
test the scripts even as a non-committer.
  * Every step which can be automated should be scripted (creating git 
branches, building, ...). If something cannot be automated, an explanation 
should be printed out and the script should wait for confirmation.
  * Before dangerous steps (eg. bulk jira update) we can ask for confirmation 
and explain what will happen.
  * The run should be idempotent (and there should be an option to continue 
the release from any step).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14161) Failed to rename S3AFileStatus

2017-03-08 Thread Luke Miner (JIRA)
Luke Miner created HADOOP-14161:
---

 Summary: Failed to rename S3AFileStatus
 Key: HADOOP-14161
 URL: https://issues.apache.org/jira/browse/HADOOP-14161
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.3, 2.7.2, 2.7.1, 2.7.0
 Environment: spark 2.0.2 with mesos
hadoop 2.7.2
Reporter: Luke Miner


I'm getting non-deterministic rename errors while writing to S3 using Spark and 
Hadoop. The proper permissions are set and this only happens occasionally. It 
can happen on a job as simple as reading in JSON, repartitioning and then 
writing out.

{code}
org.apache.spark.SparkException: Task failed while writing rows
at 
org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Failed to commit task
at 
org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
at 
org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
at 
org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at 
org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1348)
at 
org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
Caused by: java.io.IOException: Failed to rename 
S3AFileStatus{path=s3a://foo/_temporary/0/_temporary/attempt_201703081855_0018_m_000966_0/part-r-00966-615ed714-58c1-4b89-be56-e47966737c75.snappy.parquet;
 isDirectory=false; length=111225342; replication=1; blocksize=33554432; 
modification_time=1488999342000; access_time=0; owner=; group=; 
permission=rw-rw-rw-; isSymlink=false} to 
s3a://foo/part-r-00966-615ed714-58c1-4b89-be56-e47966737c75.snappy.parquet
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:415)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:428)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:539)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:502)
at 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
at 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:76)
at 
org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitTask(WriterContainer.scala:211)
at 
org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:270)
... 13 more
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: About 2.7.4 Release

2017-03-08 Thread Marton Elek
I think the main point here is the testing of the release script, not the 
creation of the official release.

I think there should be an option to configure the release tool to use a forked 
github repo and/or a private playground nexus instead of the official apache 
repos. In this case it would be easy to test the tool regularly, even by a 
non-committer (or even from Jenkins). But it would be just a smoketest of the 
release script...

Marton
   

From: Allen Wittenauer 
Sent: Wednesday, March 08, 2017 2:24 AM
To: Andrew Wang
Cc: Hadoop Common; yarn-...@hadoop.apache.org; Hdfs-dev; 
mapreduce-...@hadoop.apache.org
Subject: Re: About 2.7.4 Release

> On Mar 7, 2017, at 2:51 PM, Andrew Wang  wrote:
> I think it'd be nice to
> have a nightly Jenkins job that builds an RC,

Just a reminder that any such build cannot be used for an actual 
release:

http://www.apache.org/legal/release-policy.html#owned-controlled-hardware



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: About 2.7.4 Release

2017-03-08 Thread Marton Elek

Thank you very much for your feedback. I am opening the following JIRAs:

1. Create dev-support scripts to do the bulk jira updates required by the 
releases (check remaining jiras, update fix versions, etc.)

2. Create a 'wizard'-like script which guides through the release process (all 
the steps from the wiki pages, not just a build; it may be an extension of the 
existing script):

  Goals:
  * It would work even without the apache infrastructure: with custom 
configuration (forked repositories/alternative nexus), it would be possible to 
test the scripts even as a non-committer.
  * Every step which can be automated should be scripted (creating git 
branches, building, ...). If something cannot be automated, an explanation 
should be printed out and the script should wait for confirmation.
  * Before dangerous steps (eg. bulk jira update) we can ask for confirmation 
and explain what will happen (eg. the following jira items will be changed: ) 
  * The run should be idempotent (and there should be an option to continue 
the release from any step).

3. Migrate the forrest-based home page to use a modern static site generator.

  Goals: * existing links should work (or at least be redirected)
 * It should be easy to add more content required by a release 
automatically

4. It's not about the release, but I think the current maven site theme could 
also be updated to a more modern theme, which could be similar to the main 
site from step 3.

Let me know if you have any other suggestion for actionable items. Or comment 
the Jiras if you have more specific requirements.

Marton

PS: Vinod, I will contact you soon.




From: Vinod Kumar Vavilapalli 
Sent: Tuesday, March 07, 2017 11:58 PM
To: Sangjin Lee
Cc: Marton Elek; common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org; 
Hdfs-dev; mapreduce-...@hadoop.apache.org
Subject: Re: About 2.7.4 Release

I was planning to take this up, celebrating my return from a rather long 
paternity leave of absence.

Marton, let me know if you do want to take this up instead and we can work 
together.

Thanks
+Vinod

> On Mar 7, 2017, at 9:13 AM, Sangjin Lee  wrote:
>
> If we have a volunteer for releasing 2.7.4, we should go full speed
> ahead. We still need a volunteer from a PMC member or a committer as some
> tasks may require certain privileges, but I don't think it precludes
> working with others to close down the release.


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14160) Create dev-support scripts to do the bulk jira update required by the release process

2017-03-08 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-14160:
-

 Summary: Create dev-support scripts to do the bulk jira update 
required by the release process
 Key: HADOOP-14160
 URL: https://issues.apache.org/jira/browse/HADOOP-14160
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Elek, Marton
Assignee: Elek, Marton


According to the conversation on the dev mailing list, one pain point of 
making a release is the Jira administration.

This issue is about creating new scripts to 

 * query Apache JIRA about a possible release (remaining blockers, open issues, 
etc.)
 * and do bulk changes (eg. bump fixVersions)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14159) Add some Java-8 friendly way to work with RemoteIterable, especially listings

2017-03-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14159:
---

 Summary: Add some Java-8 friendly way to work with RemoteIterable, 
especially listings
 Key: HADOOP-14159
 URL: https://issues.apache.org/jira/browse/HADOOP-14159
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0-alpha2
Reporter: Steve Loughran


There's a fair amount of Hadoop code which uses {{FileSystem.listStatus(path)}} 
just to get a {{FileStatus[]}} array which it can then iterate over in a 
{{for}} loop.

This is inefficient and scales badly, as the entire listing is done before the 
compute; it cannot handle directories with millions of entries. 

The listLocatedStatus() calls return a {{RemoteIterator}}, which can't be used 
in for-each loops as it is allowed to throw an IOException in any hasNext/next 
call. That doesn't matter, as we now have closures and simple stream 
operations.

{code}
listLocatedStatus(path)
    .filter(st -> st.getLen() > 0)
    .apply(st -> fs.delete(st.getPath()))
{code}

See? We could do shiny new closure things. It wouldn't necessarily need changes 
to FileSystem either, just something which took {{RemoteIterator}} and let you 
chain some closures off it, similar to the Java 8 stream operations.

Once implemented, we can move to using it in the Hadoop code wherever we use 
listFiles() today.
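A minimal sketch of such a helper, assuming only the existing 
{{org.apache.hadoop.fs.RemoteIterator}} interface; the wrapper and its method 
names are hypothetical:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.RemoteIterator;

/** Hypothetical chaining helper over RemoteIterator; not a committed API. */
class ChainedRemoteIterator<T> {

  /** Like java.util.function.Predicate/Consumer, but may throw IOException. */
  interface IOPredicate<T> { boolean test(T t) throws IOException; }
  interface IOConsumer<T> { void accept(T t) throws IOException; }

  private final RemoteIterator<T> it;

  ChainedRemoteIterator(RemoteIterator<T> it) {
    this.it = it;
  }

  /** Apply the action to every element that passes the filter. */
  void filteredForEach(IOPredicate<T> filter, IOConsumer<T> action)
      throws IOException {
    while (it.hasNext()) {
      T t = it.next();
      if (filter.test(t)) {
        action.accept(t);
      }
    }
  }
}
{code}

With that, the example above becomes something like 
{{new ChainedRemoteIterator<>(fs.listLocatedStatus(path)).filteredForEach(st -> 
st.getLen() > 0, st -> fs.delete(st.getPath(), false))}}.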



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: H9 build slave is bad

2017-03-08 Thread Sean Busbey
Is this HADOOP-13951?

On Tue, Mar 7, 2017 at 8:32 PM, Andrew Wang  wrote:
> A little ping that H9 hit the same error again, and I'm again going to
> clean it out. One more time and I'll ask infra about either removing or
> reimaging this node.
>
> On Mon, Mar 6, 2017 at 2:12 PM, Allen Wittenauer 
> wrote:
>
>>
>> > On Mar 6, 2017, at 1:57 PM, Andrew Wang 
>> wrote:
>> >
>> > I'll leave it there so it's ready for next time. If this keeps happening
>> on H9, then I'm going to ask infra to reimage it. FWIW I haven't seen this
>> on our internal unit test runs, so it points to an H9-specific issue.
>>
>> I’ve seen test data cause failures on quite a few nodes over the
>> past year or two.  I just usually fixed it without telling anyone since
>> there never seemed to be much interest in the problems.  That said, I’ve
>> mostly stopped babysitting the hadoop builds on the ASF infra.



-- 
busbey

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14158) Possible for modified configuration to leak into metadatastore in S3GuardTool

2017-03-08 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14158:
--

 Summary: Possible for modified configuration to leak into 
metadatastore in S3GuardTool
 Key: HADOOP-14158
 URL: https://issues.apache.org/jira/browse/HADOOP-14158
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sean Mackrory


It doesn't appear to happen when run from the command line, but when running 
S3GuardTool.run (i.e. the parent function of most of the functions used in the 
unit tests) from a unit test, you end up with a NullMetadataStore, regardless 
of what else was configured.

We create an instance of S3AFileSystem with the metadata store implementation 
overridden to NullMetadataStore so that we have distinct interfaces to S3 and 
the metadata store. S3Guard can later be called using this filesystem, causing 
it to pick up the filesystem's configuration, which instructs it to use the 
NullMetadataStore implementation. This shouldn't be possible.
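A hedged sketch of the kind of defensive copy that could prevent the leak; the 
property key is assumed to be {{fs.s3a.metadatastore.impl}} and the helper 
class is hypothetical:

{code}
import org.apache.hadoop.conf.Configuration;

/** Hypothetical helper: drop the per-filesystem store override before reuse. */
final class S3GuardConfigHygiene {
  // Assumed key used to select the MetadataStore implementation.
  static final String METADATA_STORE_IMPL = "fs.s3a.metadatastore.impl";

  static Configuration withoutStoreOverride(Configuration fsConf) {
    Configuration copy = new Configuration(fsConf); // copy, don't mutate the FS config
    copy.unset(METADATA_STORE_IMPL); // fall back to whatever the site config says
    return copy;
  }

  private S3GuardConfigHygiene() {
  }
}
{code}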

It is unknown if this happens in any real-world scenario - I've been unable to 
reproduce the problem from the command-line. But it definitely happens in a 
test, it shouldn't, and fixing this will at least allow HADOOP-14145 to have an 
automated test.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14157) FsUrlStreamHandlerFactory "Illegal character in path" parsing file URL on Windows

2017-03-08 Thread Simon Scott (JIRA)
Simon Scott created HADOOP-14157:


 Summary: FsUrlStreamHandlerFactory "Illegal character in path" 
parsing file URL on Windows
 Key: HADOOP-14157
 URL: https://issues.apache.org/jira/browse/HADOOP-14157
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0-alpha2, 2.6.5, 2.7.3
 Environment: Windows
Reporter: Simon Scott
Priority: Minor


After registering the FsUrlStreamHandlerFactory with the JVM, subsequent calls 
to convert a "file" URL to a URI can fail with "Illegal character in path" 
where the illegal character is a backslash.

For example:
{code}
URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
File file = new File("C:/Users");
final URL url = new URL("file:///" + file.getAbsolutePath());
final URI uri = url.toURI(); // the conversion that fails, per the trace below
{code}

gives stack trace:
{noformat}
java.net.URISyntaxException: Illegal character in path at index 8: 
file:/C:\Users
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parseHierarchical(URI.java:3105)
at java.net.URI$Parser.parse(URI.java:3053)
at java.net.URI.(URI.java:588)
at java.net.URL.toURI(URL.java:946)
{noformat}
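Until this is fixed, a possible workaround sketch is to let {{File#toURI}} 
build the URI, since it always uses forward slashes and percent-encoding 
regardless of platform (an illustration, not part of the reported code):

{code}
import java.io.File;
import java.net.URI;
import java.net.URL;

public class FileUrlWorkaround {
  public static void main(String[] args) throws Exception {
    File file = new File("C:/Users");
    // File#toURI normalizes separators itself, e.g. file:/C:/Users,
    // so no backslash ever reaches the URL/URI parser.
    URI uri = file.toURI();
    URL url = uri.toURL();
    System.out.println(url.toURI()); // round-trips cleanly
  }
}
{code}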




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14156) Grammar error in the ConfTest.java

2017-03-08 Thread Andrey Dyatlov (JIRA)
Andrey Dyatlov created HADOOP-14156:
---

 Summary: Grammar error in the ConfTest.java
 Key: HADOOP-14156
 URL: https://issues.apache.org/jira/browse/HADOOP-14156
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Andrey Dyatlov
Priority: Trivial


In the file 
{{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java}}

bq. does not defined

should be replaced by

bq. is not defined


PR: https://github.com/apache/hadoop/pull/187/




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14155) KerberosName.replaceParameters() may throw java.lang.ArrayIndexOutOfBoundsException

2017-03-08 Thread Dazhuang Su (JIRA)
Dazhuang Su created HADOOP-14155:


 Summary: KerberosName.replaceParameters() may throw 
java.lang.ArrayIndexOutOfBoundsException
 Key: HADOOP-14155
 URL: https://issues.apache.org/jira/browse/HADOOP-14155
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.7.3
Reporter: Dazhuang Su
Priority: Minor


In core-site.xml:

{code}
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1](.*)
    RULE:[2:$1$2](.*)
  </value>
</property>
{code}

KerberosName.replaceParameters() replaces the numbered parameters of the form 
$n where n is from 1 to the length of params. Normal text is copied directly 
and $n is replaced by the corresponding parameter.
However, when a RULE is configured in the following way (although it's wrong)

RULE:[1:$1$2](.*)

Then run the command

hadoop org.apache.hadoop.security.HadoopKerberosName testpr...@testrealm.com

It will throw an ArrayIndexOutOfBoundsException instead of a BadFormatString 
exception.
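A hypothetical standalone illustration of the missing bounds check (a 
simplified model of the replacement loop, not the actual KerberosName source):

{code}
/** Simplified model: $n maps to the n-th captured group of the principal. */
public class ReplaceParametersDemo {
  public static void main(String[] args) {
    // A single-component principal yields only one group, so only $1 exists.
    String[] params = {"testprinc"};
    System.out.println(replace("$1$2", params)); // fails: rule references $2
  }

  static String replace(String format, String[] params) {
    StringBuilder out = new StringBuilder();
    for (int i = 0; i < format.length(); i++) {
      char c = format.charAt(i);
      if (c == '$' && i + 1 < format.length()
          && Character.isDigit(format.charAt(i + 1))) {
        int n = format.charAt(++i) - '0';
        // Without this guard, params[n - 1] throws
        // ArrayIndexOutOfBoundsException instead of a descriptive error.
        if (n < 1 || n > params.length) {
          throw new IllegalArgumentException("Rule references $" + n
              + " but only " + params.length + " group(s) exist");
        }
        out.append(params[n - 1]);
      } else {
        out.append(c);
      }
    }
    return out.toString();
  }
}
{code}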



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org