[jira] [Created] (HADOOP-11204) Fix incorrect property in hadoop-kms/src/main/conf/kms-site.xml

2014-10-15 Thread Yi Liu (JIRA)
Yi Liu created HADOOP-11204:
---

 Summary: Fix incorrect property in 
hadoop-kms/src/main/conf/kms-site.xml
 Key: HADOOP-11204
 URL: https://issues.apache.org/jira/browse/HADOOP-11204
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.5.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


{{hadoop.security.keystore.JavaKeyStoreProvider.password}} does not exist; it 
should be {{hadoop.security.keystore.java-keystore-provider.password-file}}.
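For reference, the corrected entry in kms-site.xml would look roughly like this (the value and description below are illustrative, not copied from the actual file):

```xml
<property>
  <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
  <value>kms.keystore.password</value>
  <description>
    Name of a file (looked up on the classpath) holding the password for the
    JavaKeyStoreProvider keystore. The value shown here is a placeholder.
  </description>
</property>
```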



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Common-0.23-Build #1103

2014-10-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1103/

--
[...truncated 8263 lines...]
Running org.apache.hadoop.net.TestDNS
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.239 sec
Running org.apache.hadoop.net.TestNetUtils
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.481 sec
Running org.apache.hadoop.net.TestStaticMapping
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.747 sec
Running org.apache.hadoop.net.TestSwitchMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.313 sec
Running org.apache.hadoop.net.TestSocketIOWithTimeout
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.189 sec
Running org.apache.hadoop.ipc.TestRPCCompatibility
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.842 sec
Running org.apache.hadoop.ipc.TestAvroRpc
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.371 sec
Running org.apache.hadoop.ipc.TestServer
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.368 sec
Running org.apache.hadoop.ipc.TestIPC
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.826 sec
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.752 sec
Running org.apache.hadoop.ipc.TestSaslRPC
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.701 sec
Running org.apache.hadoop.ipc.TestRPC
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.52 sec
Running org.apache.hadoop.ipc.TestSocketFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.266 sec
Running org.apache.hadoop.ipc.TestIPCServerResponder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.483 sec
Running org.apache.hadoop.jmx.TestJMXJsonServlet
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.832 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.17 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec
Running org.apache.hadoop.cli.TestCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.928 sec
Running org.apache.hadoop.log.TestLog4Json
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.36 sec
Running org.apache.hadoop.log.TestLogLevel
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.6 sec
Running org.apache.hadoop.http.TestHttpServer
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.511 sec
Running org.apache.hadoop.http.TestHtmlQuoting
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.271 sec
Running org.apache.hadoop.http.TestHttpRequestLogAppender
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.077 sec
Running org.apache.hadoop.http.TestGlobalFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.78 sec
Running org.apache.hadoop.http.TestHttpServerWebapps
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec
Running org.apache.hadoop.http.lib.TestStaticUserWebFilter
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.507 sec
Running org.apache.hadoop.http.TestServletFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.022 sec
Running org.apache.hadoop.http.TestPathFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.766 sec
Running org.apache.hadoop.http.TestHttpRequestLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.152 sec
Running org.apache.hadoop.http.TestHttpServerLifecycle
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.189 sec
Running org.apache.hadoop.conf.TestReconfiguration
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.98 sec
Running org.apache.hadoop.conf.TestConfServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.406 sec
Running org.apache.hadoop.conf.TestDeprecatedKeys
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.345 sec
Running org.apache.hadoop.conf.TestConfiguration
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.605 sec
Running org.apache.hadoop.conf.TestConfigurationDeprecation
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.434 sec
Running org.apache.hadoop.conf.TestGetInstances
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.246 sec
Running org.apache.hadoop.conf.TestConfigurationSubclass
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.307 sec
Running org.apache.hadoop.fs.kfs.TestKosmosFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.579 sec
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0, 

[jira] [Created] (HADOOP-11205) ThrottledInputStream should return the actual bandwidth (read rate)

2014-10-15 Thread Raju Bairishetti (JIRA)
Raju Bairishetti created HADOOP-11205:
-

 Summary: ThrottledInputStream should return the actual bandwidth 
(read rate)
 Key: HADOOP-11205
 URL: https://issues.apache.org/jira/browse/HADOOP-11205
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Reporter: Raju Bairishetti
Assignee: Raju Bairishetti


Currently, it does not report the actual read rate. As a result, the stream 
spends most of its time sleeping rather than reading.

Behavior: before reading a chunk of bytes (or a single byte) from the buffer, 
the stream checks whether the current bandwidth (bytes per second) exceeds 
maxBandwidth. If the computed read rate exceeds the max bandwidth, it sleeps 
for 50ms and then resumes.

Ex: assume maxBandwidth = 1 MB/s and the true read rate = 1 MB/s (i.e. reading 
1 MB per second).

In this case, even when the stream has read 1.5 MB in 1.5 sec, which does not 
exceed the max bandwidth, it still goes to sleep: the elapsed time is truncated 
to whole seconds (1500 ms / 1000 = 1), so the computed rate is 1.5 MB / 1 s = 
1.5 MB/s instead of the true 1.5 MB / 1.5 s = 1 MB/s.

Example: 
It does not go to sleep until 1 sec has elapsed, since the number of bytes read 
in that time is less than maxBandwidth.
When it reads 1 MB + 1 byte per chunk, it checks the read rate against maxBandwidth.
When it reads 1 MB + 2 bytes per chunk, it sleeps for 50ms, as the computed read rate is > 1.
When it reads 1 MB + 3 bytes per chunk, it again sleeps for 50ms, as the computed read rate is > 1.
...
Even after reading 1.5 MB in 1.5 sec it still sleeps, because the computed rate 
is 1.5 MB/s (bytes read / truncated seconds, i.e. 1.5 / 1, since 1500 ms / 1000 
= 1) instead of 1 MB/s (i.e. 1.5 / 1.5).

Cons: it reads for about a second, then sleeps for about a second, in an alternating fashion.

The getBytesPerSec() method does not return the actual bandwidth.
Current code: {code}
public long getBytesPerSec() {
  long elapsed = (System.currentTimeMillis() - startTime) / 1000;
  if (elapsed == 0) {
    return bytesRead;
  } else {
    return bytesRead / elapsed;
  }
}
{code}
We should fix the getBytesPerSec() method:
{code}
public long getBytesPerSec() {
  long elapsedTimeInMilliSecs = System.currentTimeMillis() - startTime;
  if (elapsedTimeInMilliSecs < MILLISECONDS_IN_SEC) {
    return bytesRead;
  } else {
    return (bytesRead * MILLISECONDS_IN_SEC) / elapsedTimeInMilliSecs;
  }
}
{code}
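To make the truncation issue concrete, here is a small standalone sketch (not 
the actual Hadoop class; the names only mirror the snippets above) comparing 
the two computations for 1.5 MB read in 1500 ms:

```java
public class ThrottleRateSketch {
    static final long MILLISECONDS_IN_SEC = 1000;

    // Old computation: elapsed time truncated to whole seconds.
    static long oldBytesPerSec(long bytesRead, long elapsedMillis) {
        long elapsed = elapsedMillis / 1000;          // 1500 ms -> 1 s
        return elapsed == 0 ? bytesRead : bytesRead / elapsed;
    }

    // Proposed computation: keep millisecond precision.
    static long newBytesPerSec(long bytesRead, long elapsedMillis) {
        if (elapsedMillis < MILLISECONDS_IN_SEC) {
            return bytesRead;
        }
        return (bytesRead * MILLISECONDS_IN_SEC) / elapsedMillis;
    }

    public static void main(String[] args) {
        long bytesRead = 1_500_000;   // 1.5 MB
        long elapsedMillis = 1500;    // 1.5 s
        // Old: 1_500_000 / 1 = 1_500_000 B/s -- looks like 1.5 MB/s, triggers throttling
        System.out.println(oldBytesPerSec(bytesRead, elapsedMillis));
        // New: 1_500_000 * 1000 / 1500 = 1_000_000 B/s -- the actual 1 MB/s
        System.out.println(newBytesPerSec(bytesRead, elapsedMillis));
    }
}
```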




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11206) TestCryptoCodec.testOpensslAesCtrCryptoCodec fails on master without native code compiled

2014-10-15 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-11206:


 Summary: TestCryptoCodec.testOpensslAesCtrCryptoCodec fails on 
master without native code compiled
 Key: HADOOP-11206
 URL: https://issues.apache.org/jira/browse/HADOOP-11206
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans


I recently tried to run the unit tests for another issue without compiling the 
native code, and got the following error:

{code}
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.71 sec  FAILURE! - in org.apache.hadoop.crypto.TestCryptoCodec
testOpensslAesCtrCryptoCodec(org.apache.hadoop.crypto.TestCryptoCodec)  Time elapsed: 0.064 sec   ERROR!
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
	at org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native Method)
	at org.apache.hadoop.crypto.TestCryptoCodec.testOpensslAesCtrCryptoCodec(TestCryptoCodec.java:66)
{code}

Looks like that test needs an assume check that the native code is loaded/compiled.
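One possible shape for that guard. In the real test it would presumably be a 
JUnit {{Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded())}} at the top 
of the test method; the sketch below substitutes a stub for Hadoop's 
NativeCodeLoader so the pattern is self-contained and runnable:

```java
public class AssumeNativeSketch {
    // Stub standing in for org.apache.hadoop.util.NativeCodeLoader;
    // returns false to simulate a build without native code.
    static class NativeCodeLoaderStub {
        static boolean isNativeCodeLoaded() { return false; }
    }

    static String runOpensslCodecTest() {
        if (!NativeCodeLoaderStub.isNativeCodeLoaded()) {
            // JUnit equivalent: Assume.assumeTrue(...) -- marks the test
            // skipped instead of letting it fail with UnsatisfiedLinkError.
            return "SKIPPED";
        }
        return "RAN";
    }

    public static void main(String[] args) {
        System.out.println(runOpensslCodecTest()); // prints "SKIPPED"
    }
}
```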



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Thinking ahead to hadoop-2.6

2014-10-15 Thread Vinod Kumar Vavilapalli
From YARN, we have a couple of items related to log-aggregation of services
and node-labels that are not tracked as blockers, but that we are working on
targeting for 2.6. I am trying to get them in by the end of this week.

+Vinod

On Tue, Oct 14, 2014 at 11:25 AM, Arun C Murthy a...@hortonworks.com wrote:

 2.6.0 is close now.

 Here are the remaining blockers; I'm hoping to cut an RC in the next week or
 so:
 http://s.apache.org/hadoop-2.6.0-blockers

 thanks,
 Arun

 On Sep 30, 2014, at 10:42 AM, Arun C Murthy a...@hortonworks.com wrote:

  Folks,
 
   I've created branch-2.6 to stabilize the release.
 
   Committers, please exercise caution henceforth on commits other than
 the ones we've discussed on this thread already.
 
   By default, new features should now be targeted to version 2.7 -
  I've ensured all the projects have that version on jira.
 
  thanks,
  Arun
 
  On Sep 26, 2014, at 1:08 AM, Arun Murthy a...@hortonworks.com wrote:
 
  Sounds good. I'll branch this weekend and we can merge the jiras we
  discussed in this thread as they get wrapped up next week.
 
  Thanks everyone.
 
  Arun
 
 
  On Sep 24, 2014, at 7:39 PM, Vinod Kumar Vavilapalli 
 vino...@apache.org wrote:
 
  We can branch off in a week or two so that work on branch-2 itself can
 go
  ahead with other features that can't fit in 2.6. Independent of that,
 we
  can then decide on the timeline of the release candidates once
 branch-2.6
  is close to being done w.r.t the planned features.
 
  Branching it off can let us focus on specific features that we want in
 for
  2.6 and then eventually blockers for the release, nothing else. There
 is a
  trivial pain of committing to one more branch, but it's worth it in
 this
  case IMO.
 
  A lot of efforts are happening in parallel from the YARN side from
 where I
  see. 2.6 is a little bulky if only on the YARN side and I'm afraid if
 we
  don't branch off and selectively try to get stuff in, it is likely to
 be in
  a perpetual delay.
 
  My 2 cents.
 
  +Vinod
 
  On Wed, Sep 24, 2014 at 3:28 PM, Suresh Srinivas 
 sur...@hortonworks.com
  wrote:
 
  Given that some of the features are in the final stages of stabilization,
  Arun, should we hold off on creating the 2.6 branch or building an RC by a
 week?
  All the features in flux are important ones and worth delaying the
 release
  by a week.
 
  On Wed, Sep 24, 2014 at 11:36 AM, Andrew Wang 
 andrew.w...@cloudera.com
  wrote:
 
  Hey Nicholas,
 
  My concern about Archival Storage isn't related to the code quality
 or
  the
  size of the feature. I think that you and Jing did good work. My
 concern
  is
  that once we ship, we're locked into that set of archival storage
 APIs,
  and
  these APIs are not yet finalized. Simply being able to turn off the
  feature
  does not change the compatibility story.
 
  I'm willing to devote time to help review these JIRAs and kick the
 tires
  on
  the APIs, but my point above was that I'm not sure it'd all be done
 by
  the
  end of the week. Testing might also reveal additional changes that
 need
  to
  be made, which also might not happen by end-of-week.
 
  I guess the question before us is if we're comfortable putting
 something
  in
  branch-2.6 and then potentially adding API changes after. I'm okay
 with
  that as long as we're all aware that this might happen.
 
  Arun, as RM, is this cool with you? Again, I like this feature and I'm
  fine
  with its inclusion, just a heads up that we might need some extra
 time
  to
  finalize things before an RC can be cut.
 
  Thanks,
  Andrew
 
  On Tue, Sep 23, 2014 at 7:30 PM, Tsz Wo (Nicholas), Sze 
  s29752-hadoop...@yahoo.com.invalid wrote:
 
  Hi,
 
  I am worried about KMS and transparent encryption since there are
 quite
  many
  bugs discovered after it got merged to branch-2.  It gives us an
  impression
  that the feature is not yet well tested.  Indeed, transparent
  encryption
  is
  a complicated feature which changes the core part of HDFS.  It is
 not
  easy
  to get everything right.
 
 
  For HDFS-6584: Archival Storage, it is a relatively simple and low
 risk
  feature.  It introduces a new storage type ARCHIVE and the concept
 of
  block
  storage policy to HDFS.  When a cluster is configured with ARCHIVE
  storage,
  the blocks will be stored using the appropriate storage types
 specified
  by
  storage policies assigned to the files/directories.  Cluster admin
  could
  disable the feature by simply not configuring any storage type and
 not
  setting any storage policy as before.   As Suresh mentioned,
 HDFS-6584
  is
  in the final stages to be merged to branch-2.
 
  Regards,
  Tsz-Wo
 
 
 
  On Wednesday, September 24, 2014 7:00 AM, Suresh Srinivas 
  sur...@hortonworks.com wrote:
 
 
 
 
  I actually would like to see both archival storage and single
 replica
  memory writes to be in 2.6 release. Archival storage is in the
 final
  stages
  of getting ready for branch-2 merge as Nicholas has already
 indicated
  on
  the dev mailing list. 

[jira] [Created] (HADOOP-11207) DelegationTokenAuthenticationHandler needs to support DT operations for proxy user

2014-10-15 Thread Zhijie Shen (JIRA)
Zhijie Shen created HADOOP-11207:


 Summary: DelegationTokenAuthenticationHandler needs to support DT 
operations for proxy user
 Key: HADOOP-11207
 URL: https://issues.apache.org/jira/browse/HADOOP-11207
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Zhijie Shen
Assignee: Zhijie Shen


Currently, DelegationTokenAuthenticationHandler only supports DT operations for 
the request user after it passes authentication. However, it should also allow 
the request user to perform DT operations on behalf of a proxy user.

The timeline server uses the authentication filter for DT operations instead of 
the traditional RPC-based ones. It needs this feature so that proxy users can 
use the timeline service (YARN-2676).
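For context, proxy-user impersonation in Hadoop is typically enabled with 
configuration along these lines in core-site.xml (the service user name and 
values below are hypothetical placeholders, not from this issue):

```xml
<!-- Allow the (hypothetical) service user "tlsvc" to impersonate
     users in the "users" group, connecting from any host. -->
<property>
  <name>hadoop.proxyuser.tlsvc.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.tlsvc.groups</name>
  <value>users</value>
</property>
```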



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)