Build failed in Jenkins: Hadoop-Common-0.23-Build #1082

2014-09-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1082/

--
[...truncated 8263 lines...]
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.879 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.32 sec
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.047 sec
Running org.apache.hadoop.io.nativeio.TestNativeIO
Tests run: 9, Failures: 0, Errors: 0, Skipped: 9, Time elapsed: 0.16 sec
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec
Running org.apache.hadoop.io.TestMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.669 sec
Running org.apache.hadoop.io.TestUTF8
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.443 sec
Running org.apache.hadoop.io.TestBoundedByteArrayOutputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.041 sec
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.194 sec
Running org.apache.hadoop.io.TestSetFile
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.779 sec
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.315 sec
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.291 sec
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.537 sec
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.685 sec
Running org.apache.hadoop.util.TestReflectionUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.503 sec
Running org.apache.hadoop.util.TestJarFinder
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.779 sec
Running org.apache.hadoop.util.TestPureJavaCrc32
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.29 sec
Running org.apache.hadoop.util.TestHostsFileReader
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec
Running org.apache.hadoop.util.TestShutdownHookManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec
Running org.apache.hadoop.util.TestDiskChecker
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.495 sec
Running org.apache.hadoop.util.TestStringUtils
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.135 sec
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.261 sec
Running org.apache.hadoop.util.TestAsyncDiskService
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124 sec
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec
Running org.apache.hadoop.util.TestDataChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec
Running org.apache.hadoop.util.TestRunJar
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec
Running org.apache.hadoop.util.TestOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.079 sec
Running org.apache.hadoop.util.TestShell
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.196 sec
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.63 sec
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec
Running org.apache.hadoop.security.TestGroupFallback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.423 sec
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.273 sec
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.357 sec
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.68 sec
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.141 sec
Running 

[jira] [Created] (HADOOP-11123) Uber-JIRA: Hadoop on Java 9

2014-09-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11123:
---

 Summary: Uber-JIRA: Hadoop on Java 9
 Key: HADOOP-11123
 URL: https://issues.apache.org/jira/browse/HADOOP-11123
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.0.0
 Environment: Java 9
Reporter: Steve Loughran


JIRA to cover/track issues related to Hadoop on Java 9.

Java 9 will have some significant changes, one of which is the removal of 
various {{com.sun}} classes. These removals need to be handled or Hadoop will 
not be able to run on a Java 9 JVM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11124) Java 9 removes/hides Java internal classes

2014-09-24 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11124:
---

 Summary: Java 9 removes/hides Java internal classes
 Key: HADOOP-11124
 URL: https://issues.apache.org/jira/browse/HADOOP-11124
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Steve Loughran


Java 9 removes various internal classes; the code needs to adapt to this.

It should be possible to switch to code that works on Java 7+ while still 
adapting to these changes.
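
As a rough illustration of that approach (a sketch only; the probed class name 
below is just an example, not a specific Hadoop dependency), a reflection probe 
lets the same source compile on Java 7 and degrade gracefully where an internal 
class has been removed:

// Rough sketch only: probe for a JDK-internal class via reflection and fall
// back to a supported code path when it is absent (for example on Java 9).
// The probed class name is just an illustration, not a specific Hadoop usage.
public final class InternalApiProbe {

  private InternalApiProbe() {
  }

  /** Returns true if the named class can be loaded in this JVM. */
  static boolean isAvailable(String className) {
    try {
      Class.forName(className, false, InternalApiProbe.class.getClassLoader());
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    } catch (LinkageError e) {
      return false;
    }
  }

  public static void main(String[] args) {
    if (isAvailable("com.sun.security.auth.module.UnixSystem")) {
      System.out.println("internal API present: keep the existing code path");
    } else {
      System.out.println("internal API absent: use the portable fallback");
    }
  }
}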



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Time to address the Guava version problem

2014-09-24 Thread Billie Rinaldi
The use of an unnecessarily old dependency encourages problems like
HDFS-7040.  The current Guava dependency is a big problem for downstream
apps and I'd really like to see it addressed.
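
To illustrate the kind of breakage involved (a hypothetical sketch, not code 
from HDFS-7040): helpers that exist in one Guava version may be gone in 
another, so downstream code compiled against a newer Guava can fail at runtime 
on a cluster that forces Guava 11, and vice versa. One low-tech way to stay 
compatible with both is to avoid the version-sensitive Guava call entirely:

// Hypothetical compatibility helper: close a stream quietly without calling a
// Guava method whose presence differs between the Guava versions in play.
import java.io.Closeable;
import java.io.IOException;

public final class IoCompat {

  private IoCompat() {
  }

  /** Best-effort close that does not depend on any Guava API. */
  public static void closeQuietly(Closeable c) {
    if (c == null) {
      return;
    }
    try {
      c.close();
    } catch (IOException ignored) {
      // deliberately swallowed: callers asked for a quiet close
    }
  }
}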

On Tue, Sep 23, 2014 at 2:09 PM, Steve Loughran ste...@hortonworks.com
wrote:

 I'm using curator elsewhere; it does log a lot (as does the ZK client), but
 it solves a lot of problems. It's being adopted more downstream too.

 I'm wondering if we can move the code to the extent we know it works with
 Guava 16, with the hadoop core being 16-compatible but not actually
 migrated to 16.x only. Then hadoop ships with 16 for curator and downstream
 apps, but we say you can probably roll back to 11 provided you don't use
 features x-y-z.

 On 23 September 2014 21:55, Robert Kanter rkan...@cloudera.com wrote:

  At the same time, not being able to use Curator will require a lot of
 extra
  code, a lot of which we probably already have from the ZKRMStateStore,
 but
  it's not available to use in hadoop-auth.  We'd need to create our own ZK
  libraries that Hadoop components can use, but (a) that's going to take a
  while, and (b) it seems silly to reinvent the wheel when Curator already
  does all this.
 
  I agree that upgrading Guava will be a compatibility problem though...
 
  On Tue, Sep 23, 2014 at 9:30 AM, Sandy Ryza sandy.r...@cloudera.com
  wrote:
 
   If we've broken compatibility in branch-2, that's a bug that we need to
   fix. HADOOP-10868 has not yet made it into a release; I don't see it
 as a
   justification for solidifying the breakage.
  
   -1 to upgrading Guava in branch-2.
  
   On Tue, Sep 23, 2014 at 3:06 AM, Steve Loughran 
 ste...@hortonworks.com
   wrote:
  
+1 to upgrading guava. Irrespective of downstream apps, the hadoop
  source
tree is now internally inconsistent
   
On 22 September 2014 17:56, Sangjin Lee sj...@apache.org wrote:
   
 I agree that a more robust solution is to have better classloading
 isolation.

 Still, IMHO guava (and possibly protobuf as well) sticks out like a
   sore
 thumb. There are just too many issues in trying to support both
 guava
   11
 and guava 16. Independent of what we may do with the classloading
 isolation, we should still consider upgrading guava.

 My 2 cents.

 On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla 
  ka...@cloudera.com
 wrote:

  Upgrading Guava version is tricky. While it helps in many cases,
 it
   can
  break existing applications/deployments. I understand we do not
  have
   a
  policy for updating dependencies, but still we should be careful
  with
  Guava.
 
  I would be more inclined towards a more permanent solution to
 this
 problem
  - how about prioritizing classpath isolation so applications
 aren't
  affected by Hadoop dependency updates at all? I understand that
  will
also
  break user applications, but it might be the driving feature for
   Hadoop
  3.0?
 
  On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee sj...@apache.org
   wrote:
 
   I would also agree on upgrading guava. Yes I am aware of the
potential
   impact on customers who might rely on hadoop bringing in guava
  11.
  However,
   IMHO the balance tipped over to the other side a while ago;
 i.e.
  I
 think
   there are far more people using guava 16 in their code and
   scrambling
 to
   make things work than the other way around.
  
   On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran 
 ste...@hortonworks.com
   wrote:
  
I know we've been ignoring the Guava version problem, but HADOOP-10868
added a transitive dependency on Guava 16 by way of Curator 2.6.

Maven currently forces the build to use Guava 11.0.2, but this is hiding at
compile time all code paths from curator which may use classes and methods
that aren't there.

I need curator for my own work (2.4.1 and Guava 14.0 was what I'd been
using), so don't think we can go back.

HADOOP-11102 covers the problem, but doesn't propose a specific solution.
But to me the one that seems most likely to work is: update Guava.

-steve
   

Re: Thinking ahead to hadoop-2.6

2014-09-24 Thread Andrew Wang
Hey Nicholas,

My concern about Archival Storage isn't related to the code quality or the
size of the feature. I think that you and Jing did good work. My concern is
that once we ship, we're locked into that set of archival storage APIs, and
these APIs are not yet finalized. Simply being able to turn off the feature
does not change the compatibility story.

I'm willing to devote time to help review these JIRAs and kick the tires on
the APIs, but my point above was that I'm not sure it'd all be done by the
end of the week. Testing might also reveal additional changes that need to
be made, which also might not happen by end-of-week.

I guess the question before us is if we're comfortable putting something in
branch-2.6 and then potentially adding API changes after. I'm okay with
that as long as we're all aware that this might happen.

Arun, as RM, is this cool with you? Again, I like this feature and I'm fine
with its inclusion, just a heads up that we might need some extra time to
finalize things before an RC can be cut.

Thanks,
Andrew

On Tue, Sep 23, 2014 at 7:30 PM, Tsz Wo (Nicholas), Sze 
s29752-hadoop...@yahoo.com.invalid wrote:

 Hi,

 I am worried about KMS and transparent encryption since there are quite many
 bugs discovered after it got merged to branch-2.  It gives us an impression
 that the feature is not yet well tested.  Indeed, transparent encryption is
 a complicated feature which changes the core part of HDFS.  It is not easy
 to get everything right.


 For HDFS-6584: Archival Storage, it is a relatively simple and low risk
 feature.  It introduces a new storage type ARCHIVE and the concept of block
 storage policy to HDFS.  When a cluster is configured with ARCHIVE storage,
 the blocks will be stored using the appropriate storage types specified by
 storage policies assigned to the files/directories.  Cluster admin could
 disable the feature by simply not configuring any storage type and not
 setting any storage policy as before.   As Suresh mentioned, HDFS-6584 is
 in the final stages to be merged to branch-2.

 Regards,
 Tsz-Wo



 On Wednesday, September 24, 2014 7:00 AM, Suresh Srinivas 
 sur...@hortonworks.com wrote:


 
 
 I actually would like to see both archival storage and single replica
 memory writes to be in 2.6 release. Archival storage is in the final
 stages
 of getting ready for branch-2 merge as Nicholas has already indicated on
 the dev mailing list. Hopefully HDFS-6581 gets ready sooner. Both of these
 features have been in development for some time.
 
 On Tue, Sep 23, 2014 at 3:27 PM, Andrew Wang andrew.w...@cloudera.com
 wrote:
 
  Hey Arun,
 
  Maybe we could do a quick run through of the Roadmap wiki and
 add/retarget
  things accordingly?
 
  I think the KMS and transparent encryption are ready to go. We've got a
  very few further bug fixes pending, but that's it.
 
  Two HDFS things that I think probably won't make the end of the week are
  archival storage (HDFS-6584) and single replica memory writes
 (HDFS-6581),
  which I believe are under the HSM banner. HDFS-6484 was just merged to
  trunk and I think needs a little more work before it goes into branch-2.
  HDFS-6581 hasn't even been merged to trunk yet, so seems a bit further
 off
  yet.
 
  Just my 2c as I did not work directly on these features. I just
 generally
  shy away from shipping bits quite this fresh.
 
  Thanks,
  Andrew
 
  On Tue, Sep 23, 2014 at 3:03 PM, Arun Murthy a...@hortonworks.com
 wrote:
 
   Looks like most of the content is in and hadoop-2.6 is shaping up
 nicely.
  
   I'll create branch-2.6 by end of the week and we can go from there to
   stabilize it - hopefully in the next few weeks.
  
   Thoughts?
  
   thanks,
   Arun
  
   On Tue, Aug 12, 2014 at 1:34 PM, Arun C Murthy a...@hortonworks.com
   wrote:
  
Folks,
   
 With hadoop-2.5 nearly done, it's time to start thinking ahead to
hadoop-2.6.
   
 Currently, here is the Roadmap per the wiki:
   
• HADOOP
• Credential provider HADOOP-10607
• HDFS
• Heterogeneous storage (Phase 2) - Support APIs for
   using
storage tiers by the applications HDFS-5682
• Memory as storage tier HDFS-5851
• YARN
• Dynamic Resource Configuration YARN-291
• NodeManager Restart YARN-1336
• ResourceManager HA Phase 2 YARN-556
• Support for admin-specified labels in YARN
 YARN-796
• Support for automatic, shared cache for YARN
   application
artifacts YARN-1492
• Support NodeGroup layer topology on YARN YARN-18
• Support for Docker containers in YARN YARN-1964
• YARN service registry YARN-913
   
 My suspicion is, as is normal, some will make the cut and some
 won't.
Please do add/subtract from the list as appropriate. Ideally, it
 would
  be
good to ship hadoop-2.6 

[jira] [Created] (HADOOP-11126) Findbugs link in Jenkins needs to be fixed

2014-09-24 Thread Ray Chiang (JIRA)
Ray Chiang created HADOOP-11126:
---

 Summary: Findbugs link in Jenkins needs to be fixed
 Key: HADOOP-11126
 URL: https://issues.apache.org/jira/browse/HADOOP-11126
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ray Chiang


For YARN-2284, the latest Jenkins notification points to the following URL for 
the Findbugs report:

https://builds.apache.org/job/PreCommit-YARN-Build/5103//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html

The real URL I found manually is:

https://builds.apache.org/job/PreCommit-YARN-Build/5103/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html

It would be good to get this URL correct for future notifications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Thinking ahead to hadoop-2.6

2014-09-24 Thread Suresh Srinivas
Given that some of the features are in the final stages of stabilization,
Arun, should we hold off creating the 2.6 branch or building an RC by a week?
All the features in flux are important ones and worth delaying the release
by a week.

On Wed, Sep 24, 2014 at 11:36 AM, Andrew Wang andrew.w...@cloudera.com
wrote:

 Hey Nicholas,

 My concern about Archival Storage isn't related to the code quality or the
 size of the feature. I think that you and Jing did good work. My concern is
 that once we ship, we're locked into that set of archival storage APIs, and
 these APIs are not yet finalized. Simply being able to turn off the feature
 does not change the compatibility story.

 I'm willing to devote time to help review these JIRAs and kick the tires on
 the APIs, but my point above was that I'm not sure it'd all be done by the
 end of the week. Testing might also reveal additional changes that need to
 be made, which also might not happen by end-of-week.

 I guess the question before us is if we're comfortable putting something in
 branch-2.6 and then potentially adding API changes after. I'm okay with
 that as long as we're all aware that this might happen.

 Arun, as RM is this cool with you? Again, I like this feature and I'm fine
 with it's inclusion, just a heads up that we might need some extra time to
 finalize things before an RC can be cut.

 Thanks,
 Andrew

 On Tue, Sep 23, 2014 at 7:30 PM, Tsz Wo (Nicholas), Sze 
 s29752-hadoop...@yahoo.com.invalid wrote:

  Hi,
 
  I am worry about KMS and transparent encryption since there are quite
 many
  bugs discovered after it got merged to branch-2.  It gives us an
 impression
  that the feature is not yet well tested.  Indeed, transparent encryption
 is
  a complicated feature which changes the core part of HDFS.  It is not
 easy
  to get everything right.
 
 
  For HDFS-6584: Archival Storage, it is a relatively simple and low risk
  feature.  It introduces a new storage type ARCHIVE and the concept of
 block
  storage policy to HDFS.  When a cluster is configured with ARCHIVE
 storage,
  the blocks will be stored using the appropriate storage types specified
 by
  storage policies assigned to the files/directories.  Cluster admin could
  disable the feature by simply not configuring any storage type and not
  setting any storage policy as before.   As Suresh mentioned, HDFS-6584 is
  in the final stages to be merged to branch-2.
 
  Regards,
  Tsz-Wo
 
 
 
  On Wednesday, September 24, 2014 7:00 AM, Suresh Srinivas 
  sur...@hortonworks.com wrote:
 
 
  
  
  I actually would like to see both archival storage and single replica
  memory writes to be in 2.6 release. Archival storage is in the final
  stages
  of getting ready for branch-2 merge as Nicholas has already indicated on
  the dev mailing list. Hopefully HDFS-6581 gets ready sooner. Both of
 these
  features are being in development for sometime.
  
  On Tue, Sep 23, 2014 at 3:27 PM, Andrew Wang andrew.w...@cloudera.com
  wrote:
  
   Hey Arun,
  
   Maybe we could do a quick run through of the Roadmap wiki and
  add/retarget
   things accordingly?
  
   I think the KMS and transparent encryption are ready to go. We've got
 a
   very few further bug fixes pending, but that's it.
  
   Two HDFS things that I think probably won't make the end of the week
 are
   archival storage (HDFS-6584) and single replica memory writes
  (HDFS-6581),
   which I believe are under the HSM banner. HDFS-6484 was just merged to
   trunk and I think needs a little more work before it goes into
 branch-2.
   HDFS-6581 hasn't even been merged to trunk yet, so seems a bit further
  off
   yet.
  
   Just my 2c as I did not work directly on these features. I just
  generally
   shy away from shipping bits quite this fresh.
  
   Thanks,
   Andrew
  
   On Tue, Sep 23, 2014 at 3:03 PM, Arun Murthy a...@hortonworks.com
  wrote:
  
Looks like most of the content is in and hadoop-2.6 is shaping up
  nicely.
   
I'll create branch-2.6 by end of the week and we can go from there
 to
stabilize it - hopefully in the next few weeks.
   
Thoughts?
   
thanks,
Arun
   
On Tue, Aug 12, 2014 at 1:34 PM, Arun C Murthy a...@hortonworks.com
 
wrote:
   
 Folks,

  With hadoop-2.5 nearly done, it's time to start thinking ahead to
 hadoop-2.6.

  Currently, here is the Roadmap per the wiki:

 • HADOOP
 • Credential provider HADOOP-10607
 • HDFS
 • Heterogeneous storage (Phase 2) - Support APIs
 for
using
 storage tiers by the applications HDFS-5682
 • Memory as storage tier HDFS-5851
 • YARN
 • Dynamic Resource Configuration YARN-291
 • NodeManager Restart YARN-1336
 • ResourceManager HA Phase 2 YARN-556
 • Support for admin-specified labels in YARN
  YARN-796
 • Support for automatic, 

Re: Thinking ahead to hadoop-2.6

2014-09-24 Thread Jitendra Pandey
I also believe it's worth a week's wait to include HDFS-6584 and HDFS-6581
in 2.6.

On Wed, Sep 24, 2014 at 3:28 PM, Suresh Srinivas sur...@hortonworks.com
wrote:

 Given some of the features are in final stages of stabilization,
 Arun, we should hold off creating 2.6 branch or building an RC by a week?
 All the features in flux are important ones and worth delaying the release
 by a week.

 On Wed, Sep 24, 2014 at 11:36 AM, Andrew Wang andrew.w...@cloudera.com
 wrote:

  Hey Nicholas,
 
  My concern about Archival Storage isn't related to the code quality or
 the
  size of the feature. I think that you and Jing did good work. My concern
 is
  that once we ship, we're locked into that set of archival storage APIs,
 and
  these APIs are not yet finalized. Simply being able to turn off the
 feature
  does not change the compatibility story.
 
  I'm willing to devote time to help review these JIRAs and kick the tires
 on
  the APIs, but my point above was that I'm not sure it'd all be done by
 the
  end of the week. Testing might also reveal additional changes that need
 to
  be made, which also might not happen by end-of-week.
 
  I guess the question before us is if we're comfortable putting something
 in
  branch-2.6 and then potentially adding API changes after. I'm okay with
  that as long as we're all aware that this might happen.
 
  Arun, as RM is this cool with you? Again, I like this feature and I'm
 fine
  with it's inclusion, just a heads up that we might need some extra time
 to
  finalize things before an RC can be cut.
 
  Thanks,
  Andrew
 
  On Tue, Sep 23, 2014 at 7:30 PM, Tsz Wo (Nicholas), Sze 
  s29752-hadoop...@yahoo.com.invalid wrote:
 
   Hi,
  
   I am worry about KMS and transparent encryption since there are quite
  many
   bugs discovered after it got merged to branch-2.  It gives us an
  impression
   that the feature is not yet well tested.  Indeed, transparent
 encryption
  is
   a complicated feature which changes the core part of HDFS.  It is not
  easy
   to get everything right.
  
  
   For HDFS-6584: Archival Storage, it is a relatively simple and low risk
   feature.  It introduces a new storage type ARCHIVE and the concept of
  block
   storage policy to HDFS.  When a cluster is configured with ARCHIVE
  storage,
   the blocks will be stored using the appropriate storage types specified
  by
   storage policies assigned to the files/directories.  Cluster admin
 could
   disable the feature by simply not configuring any storage type and not
   setting any storage policy as before.   As Suresh mentioned, HDFS-6584
 is
   in the final stages to be merged to branch-2.
  
   Regards,
   Tsz-Wo
  
  
  
   On Wednesday, September 24, 2014 7:00 AM, Suresh Srinivas 
   sur...@hortonworks.com wrote:
  
  
   
   
   I actually would like to see both archival storage and single replica
   memory writes to be in 2.6 release. Archival storage is in the final
   stages
   of getting ready for branch-2 merge as Nicholas has already indicated
 on
   the dev mailing list. Hopefully HDFS-6581 gets ready sooner. Both of
  these
   features are being in development for sometime.
   
   On Tue, Sep 23, 2014 at 3:27 PM, Andrew Wang 
 andrew.w...@cloudera.com
   wrote:
   
Hey Arun,
   
Maybe we could do a quick run through of the Roadmap wiki and
   add/retarget
things accordingly?
   
I think the KMS and transparent encryption are ready to go. We've
 got
  a
very few further bug fixes pending, but that's it.
   
Two HDFS things that I think probably won't make the end of the week
  are
archival storage (HDFS-6584) and single replica memory writes
   (HDFS-6581),
which I believe are under the HSM banner. HDFS-6484 was just merged
 to
trunk and I think needs a little more work before it goes into
  branch-2.
HDFS-6581 hasn't even been merged to trunk yet, so seems a bit
 further
   off
yet.
   
Just my 2c as I did not work directly on these features. I just
   generally
shy away from shipping bits quite this fresh.
   
Thanks,
Andrew
   
On Tue, Sep 23, 2014 at 3:03 PM, Arun Murthy a...@hortonworks.com
   wrote:
   
 Looks like most of the content is in and hadoop-2.6 is shaping up
   nicely.

 I'll create branch-2.6 by end of the week and we can go from there
  to
 stabilize it - hopefully in the next few weeks.

 Thoughts?

 thanks,
 Arun

 On Tue, Aug 12, 2014 at 1:34 PM, Arun C Murthy 
 a...@hortonworks.com
  
 wrote:

  Folks,
 
   With hadoop-2.5 nearly done, it's time to start thinking ahead
 to
  hadoop-2.6.
 
   Currently, here is the Roadmap per the wiki:
 
  • HADOOP
  • Credential provider HADOOP-10607
  • HDFS
  • Heterogeneous storage (Phase 2) - Support APIs
  for
 using
  storage tiers by the applications HDFS-5682
  • Memory as storage tier 

[jira] [Created] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.

2014-09-24 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-11127:
--

 Summary: Improve versioning and compatibility support in native 
library for downstream hadoop-common users.
 Key: HADOOP-11127
 URL: https://issues.apache.org/jira/browse/HADOOP-11127
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Reporter: Chris Nauroth


There is no compatibility policy enforced on the JNI function signatures 
implemented in the native library.  This library typically is deployed to all 
nodes in a cluster, built from a specific source code version.  However, 
downstream applications that want to run in that cluster might choose to bundle 
a hadoop-common jar at a different version.  Since there is no compatibility 
policy, this can cause link errors at runtime when the native function 
signatures expected by hadoop-common.jar do not exist in 
libhadoop.so/hadoop.dll.
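
A defensive pattern downstream code can use in the meantime, sketched under the 
assumption that hadoop-common's NativeCodeLoader is on the classpath (it does 
not solve the versioning-policy problem this JIRA is about):

// Sketch of a defensive guard: treat a missing or mismatched libhadoop as
// "feature unavailable" instead of letting UnsatisfiedLinkError propagate.
import org.apache.hadoop.util.NativeCodeLoader;

public final class NativeGuard {

  private NativeGuard() {
  }

  /** Runs the native-backed action when possible, otherwise the fallback. */
  public static void runWithFallback(Runnable nativeAction, Runnable fallback) {
    if (!NativeCodeLoader.isNativeCodeLoaded()) {
      fallback.run();
      return;
    }
    try {
      nativeAction.run();
    } catch (UnsatisfiedLinkError e) {
      // The deployed native library does not export the signature this jar expects.
      fallback.run();
    }
  }
}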



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Git repo ready to use

2014-09-24 Thread Ted Yu
FYI

I made some changes to:
https://builds.apache.org/view/All/job/Hadoop-branch2

 because until this morning it was using svn to build.

 Would 2.6.0-SNAPSHOT maven artifacts be updated after the build?

Cheers

On Mon, Sep 15, 2014 at 11:14 AM, Todd Lipcon t...@cloudera.com wrote:

 Hey all,

 For those of you who like to see the entire history of a file going back to
 2006, I found I had to add a new graft to .git/info/grafts:

 # Project un-split in new writable git repo
 a196766ea07775f18ded69bd9e8d239f8cfd3ccc
 928d485e2743115fe37f9d123ce9a635c5afb91a
 cd66945f62635f589ff93468e94c0039684a8b6d
 77f628ff5925c25ba2ee4ce14590789eb2e7b85b

 FWIW, my entire file now contains:

 # Project split
 5128a9a453d64bfe1ed978cf9ffed27985eeef36
 6c16dc8cf2b28818c852e95302920a278d07ad0c
 6a3ac690e493c7da45bbf2ae2054768c427fd0e1
 6c16dc8cf2b28818c852e95302920a278d07ad0c
 546d96754ffee3142bcbbf4563c624c053d0ed0d
 6c16dc8cf2b28818c852e95302920a278d07ad0c
 4e569e629a98a4ef5326e5d25a84c7d57b5a8f7a
 c78078dd2283e2890018ff0e87d751c86163f99f

 # Project un-split in new writable git repo
 a196766ea07775f18ded69bd9e8d239f8cfd3ccc
 928d485e2743115fe37f9d123ce9a635c5afb91a
 cd66945f62635f589ff93468e94c0039684a8b6d
 77f628ff5925c25ba2ee4ce14590789eb2e7b85b

 which seems to do a good job for me (not sure if the first few lines are
 necessary anymore in the latest world)

 -Todd



 On Fri, Sep 12, 2014 at 11:31 AM, Colin McCabe cmcc...@alumni.cmu.edu
 wrote:

  It's an issue with test-patch.sh.  See
  https://issues.apache.org/jira/browse/HADOOP-11084
 
  best,
  Colin
 
  On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang andrew.w...@cloudera.com
  wrote:
   We're still not seeing findbugs results show up on precommit runs. I
 see
   that we're archiving ../patchprocess/*, and Ted thinks that since
 it's
   not in $WORKSPACE it's not getting picked up. Can we get confirmation
 of
   this issue? If so, we could just add patchprocess to the toplevel
   .gitignore.
  
   On Thu, Sep 4, 2014 at 8:54 AM, Sangjin Lee sjl...@gmail.com wrote:
  
   That's good to know. Thanks.
  
  
   On Wed, Sep 3, 2014 at 11:15 PM, Vinayakumar B 
 vinayakum...@apache.org
  
   wrote:
  
I think its still pointing to old svn repository which is just read
  only
now.
   
You can use latest mirror:
https://github.com/apache/hadoop
   
Regards,
Vinay
On Sep 4, 2014 11:37 AM, Sangjin Lee sjl...@gmail.com wrote:
   
 It seems like the github mirror at
https://github.com/apache/hadoop-common
 has stopped getting updates as of 8/22. Could this mirror have
 been
broken
 by the git transition?

 Thanks,
 Sangjin


 On Fri, Aug 29, 2014 at 11:51 AM, Ted Yu yuzhih...@gmail.com
  wrote:

  From
 https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console
  :
 
  ERROR: No artifacts found that match the file pattern
  trunk/hadoop-hdfs-project/*/target/*.tar.gz. Configuration
  error?ERROR 
 http://stacktrace.jenkins-ci.org/search?query=ERROR
  :
   'trunk/hadoop-hdfs-project/*/target/*.tar.gz' doesn't match
   anything,
   but 'hadoop-hdfs-project/*/target/*.tar.gz' does. Perhaps that's
   what
   you mean?
 
 
  I corrected the path to hdfs tar ball.
 
 
  FYI
 
 
 
  On Fri, Aug 29, 2014 at 8:48 AM, Alejandro Abdelnur 
   t...@cloudera.com

  wrote:
 
   it seems we missed updating the HADOOP precommit job to use
  Git, it
was
   still using SVN. I've just updated it.
  
   thx
  
  
   On Thu, Aug 28, 2014 at 9:26 PM, Ted Yu yuzhih...@gmail.com
   wrote:
  
Currently patchprocess/ (contents shown below) is one level
   higher
 than
${WORKSPACE}
   
diffJavadocWarnings.txt
   newPatchFindbugsWarningshadoop-hdfs.html
 patchFindBugsOutputhadoop-hdfs.txt
   patchReleaseAuditOutput.txt
 trunkJavadocWarnings.txt
filteredPatchJavacWarnings.txt
 newPatchFindbugsWarningshadoop-hdfs.xml
patchFindbugsWarningshadoop-hdfs.xml
   patchReleaseAuditWarnings.txt
filteredTrunkJavacWarnings.txt  patch
patchJavacWarnings.txt
 testrun_hadoop-hdfs.txt
jirapatchEclipseOutput.txt
 patchJavadocWarnings.txt
 trunkJavacWarnings.txt
   
Under Files to archive input box of
PreCommit-HDFS-Build/configure, I
   saw:
   
'../patchprocess/*' doesn't match anything, but '*' does.
  Perhaps
  that's
what you mean?
   
I guess once patchprocess is moved back under ${WORKSPACE},
 a
  lot
of
   things
would be back to normal.
   
Cheers
   
On Thu, Aug 28, 2014 at 9:16 PM, Alejandro Abdelnur 
 t...@cloudera.com
  
wrote:
   
 i'm also seeing broken links for javadocs warnings.

 Alejandro
 (phone typing)

  On Aug 28, 2014, 

[jira] [Created] (HADOOP-11128) abstracting out the scale tests for FileSystem Contract tests

2014-09-24 Thread Juan Yu (JIRA)
Juan Yu created HADOOP-11128:


 Summary: abstracting out the scale tests for FileSystem Contract 
tests
 Key: HADOOP-11128
 URL: https://issues.apache.org/jira/browse/HADOOP-11128
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu


Currently we have some scale tests for openstack and s3a. For now we'll just 
trust HDFS to handle files over 5GB and delete thousands of files in a 
directory properly.
We should abstract out the scale tests so they can be applied to all 
FileSystems.

A few things to consider for scale tests:
Scale tests rely on the tester having good/stable upload bandwidth and might 
need large disk space, so they need to be configurable or optional.
Scale tests might take a long time to finish, so consider making the test 
timeout configurable if possible.
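
A rough sketch of the shape such an abstract scale test could take (the 
property names, counts, and paths below are made up for illustration; the 
enable flag and file count come from the test configuration so the test stays 
optional):

// Hypothetical abstract scale test: each FileSystem binding supplies its own
// configured FileSystem; the test is skipped unless explicitly enabled.
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public abstract class AbstractFSContractScaleTest {

  /** Subclasses return a FileSystem already configured for their store. */
  protected abstract FileSystem getFileSystem() throws Exception;

  @Test(timeout = 30 * 60 * 1000)   // generous; ideally configurable as well
  public void testManyFilesInOneDirectory() throws Exception {
    FileSystem fs = getFileSystem();
    Configuration conf = fs.getConf();
    // Skip unless the tester opted in; scale runs need bandwidth and disk.
    assumeTrue(conf.getBoolean("scale.test.enabled", false));
    int fileCount = conf.getInt("scale.test.file.count", 1000);

    Path dir = new Path("/test/scale-many-files");
    fs.mkdirs(dir);
    for (int i = 0; i < fileCount; i++) {
      fs.create(new Path(dir, "file-" + i)).close();
    }
    // Deleting thousands of entries in one directory is part of what we verify.
    fs.delete(dir, true);
  }
}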



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Thinking ahead to hadoop-2.6

2014-09-24 Thread Jing Zhao
Just to give a quick update on the current status of HDFS-6584 (archival
storage): after HDFS-7081 got committed this morning, the main
functionality has already been finished. As a summary, two new
DistributedFileSystem APIs are added: getStoragePolicies and
setStoragePolicy. We have also been doing system tests for weeks and we
will continue testing. There are still one or two pending issues, but we're
actively working on them. So I'm pretty confident that the archival storage
work can be ready given the current plan to release 2.6 next week.
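
For readers following along, a minimal sketch of how the new API could be used, 
assuming the setStoragePolicy(Path, String) shape described above; the policy 
name and paths are examples only, and details may differ once the merge is 
final:

// Minimal sketch against the API described above; not a definitive example.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class StoragePolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // Blocks written under this directory are then placed according to the
      // storage types the policy maps to (e.g. ARCHIVE for a cold policy).
      dfs.setStoragePolicy(new Path("/archive/logs"), "COLD");
    }
  }
}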

On Wed, Sep 24, 2014 at 3:35 PM, Jitendra Pandey jiten...@hortonworks.com
wrote:

 I also believe its worth a week's wait to include HDFS-6584 and HDFS-6581
 in 2.6.

 On Wed, Sep 24, 2014 at 3:28 PM, Suresh Srinivas sur...@hortonworks.com
 wrote:

  Given some of the features are in final stages of stabilization,
  Arun, we should hold off creating 2.6 branch or building an RC by a week?
  All the features in flux are important ones and worth delaying the
 release
  by a week.
 
  On Wed, Sep 24, 2014 at 11:36 AM, Andrew Wang andrew.w...@cloudera.com
  wrote:
 
   Hey Nicholas,
  
   My concern about Archival Storage isn't related to the code quality or
  the
   size of the feature. I think that you and Jing did good work. My
 concern
  is
   that once we ship, we're locked into that set of archival storage APIs,
  and
   these APIs are not yet finalized. Simply being able to turn off the
  feature
   does not change the compatibility story.
  
   I'm willing to devote time to help review these JIRAs and kick the
 tires
  on
   the APIs, but my point above was that I'm not sure it'd all be done by
  the
   end of the week. Testing might also reveal additional changes that need
  to
   be made, which also might not happen by end-of-week.
  
   I guess the question before us is if we're comfortable putting
 something
  in
   branch-2.6 and then potentially adding API changes after. I'm okay with
   that as long as we're all aware that this might happen.
  
   Arun, as RM is this cool with you? Again, I like this feature and I'm
  fine
   with it's inclusion, just a heads up that we might need some extra time
  to
   finalize things before an RC can be cut.
  
   Thanks,
   Andrew
  
   On Tue, Sep 23, 2014 at 7:30 PM, Tsz Wo (Nicholas), Sze 
   s29752-hadoop...@yahoo.com.invalid wrote:
  
Hi,
   
I am worry about KMS and transparent encryption since there are quite
   many
bugs discovered after it got merged to branch-2.  It gives us an
   impression
that the feature is not yet well tested.  Indeed, transparent
  encryption
   is
a complicated feature which changes the core part of HDFS.  It is not
   easy
to get everything right.
   
   
For HDFS-6584: Archival Storage, it is a relatively simple and low
 risk
feature.  It introduces a new storage type ARCHIVE and the concept of
   block
storage policy to HDFS.  When a cluster is configured with ARCHIVE
   storage,
the blocks will be stored using the appropriate storage types
 specified
   by
storage policies assigned to the files/directories.  Cluster admin
  could
disable the feature by simply not configuring any storage type and
 not
setting any storage policy as before.   As Suresh mentioned,
 HDFS-6584
  is
in the final stages to be merged to branch-2.
   
Regards,
Tsz-Wo
   
   
   
On Wednesday, September 24, 2014 7:00 AM, Suresh Srinivas 
sur...@hortonworks.com wrote:
   
   


I actually would like to see both archival storage and single
 replica
memory writes to be in 2.6 release. Archival storage is in the final
stages
of getting ready for branch-2 merge as Nicholas has already
 indicated
  on
the dev mailing list. Hopefully HDFS-6581 gets ready sooner. Both of
   these
features are being in development for sometime.

On Tue, Sep 23, 2014 at 3:27 PM, Andrew Wang 
  andrew.w...@cloudera.com
wrote:

 Hey Arun,

 Maybe we could do a quick run through of the Roadmap wiki and
add/retarget
 things accordingly?

 I think the KMS and transparent encryption are ready to go. We've
  got
   a
 very few further bug fixes pending, but that's it.

 Two HDFS things that I think probably won't make the end of the
 week
   are
 archival storage (HDFS-6584) and single replica memory writes
(HDFS-6581),
 which I believe are under the HSM banner. HDFS-6484 was just
 merged
  to
 trunk and I think needs a little more work before it goes into
   branch-2.
 HDFS-6581 hasn't even been merged to trunk yet, so seems a bit
  further
off
 yet.

 Just my 2c as I did not work directly on these features. I just
generally
 shy away from shipping bits quite this fresh.

 Thanks,
 Andrew

 On Tue, Sep 23, 2014 at 3:03 PM, Arun Murthy a...@hortonworks.com
 
wrote:

  Looks like most of the content 

[jira] [Created] (HADOOP-11129) Fix findbug issue introduced by HADOOP-11017

2014-09-24 Thread Yi Liu (JIRA)
Yi Liu created HADOOP-11129:
---

 Summary: Fix findbug issue introduced by HADOOP-11017
 Key: HADOOP-11129
 URL: https://issues.apache.org/jira/browse/HADOOP-11129
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu


This JIRA is to fix the findbugs issue introduced by HADOOP-11017
{quote}
Inconsistent synchronization of 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.delegationTokenSequenceNumber
{quote}
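
For context, this warning class flags a field that is written under a lock in 
one method but read without it in another. A toy illustration of the pattern 
(not the actual Hadoop code in question):

// Toy example of the inconsistent-synchronization pattern findbugs reports.
public class SequenceNumberHolder {

  private int sequenceNumber;        // guarded in one method but not the other

  public synchronized int incrementAndGet() {
    return ++sequenceNumber;         // write happens under the object lock
  }

  public int current() {
    return sequenceNumber;           // unsynchronized read -> findbugs warning
  }

  // Typical fixes: make current() synchronized as well, or replace the int
  // field with java.util.concurrent.atomic.AtomicInteger.
}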



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11129) Fix findbug issue introduced by HADOOP-11017

2014-09-24 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu resolved HADOOP-11129.
-
Resolution: Duplicate

The findbugs issue was already reported. Marking this as a duplicate of HADOOP-11129.

 Fix findbug issue introduced by HADOOP-11017
 

 Key: HADOOP-11129
 URL: https://issues.apache.org/jira/browse/HADOOP-11129
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu

 This JIRA is to fix findbug issue introduced by HADOOP-11017
 {quote}
 Inconsistent synchronization of 
 org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.delegationTokenSequenceNumber
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Thinking ahead to hadoop-2.6

2014-09-24 Thread Vinod Kumar Vavilapalli
Thanks for restarting this, Arun!

The following are the efforts that I am involved in directly or indirectly
and shepherding from YARN's point of view: rolling-upgrades (YARN-666),
work-preserving RM restart (YARN-556), minimal long-running services
support (YARN-896), including service-registry (YARN-913), timeline-service
stability/security (some sub-tasks of YARN-1935), node labels (YARN-296)
and some enhancements/bug fixes in cpu-scheduling/cgroups. We also have the
reservations sub-system (YARN-1051) and the YARN shared cache (YARN-1492) that
I would like to get in and am helping with reviews on. Almost all these efforts
are on the edge of completion - some of them are just pending reviews.
Clearly some of them will be more stable than others; some of them will make it
eventually, some may not.

+1 for branching it within a week so that we can stabilize it and let other
features go into branch-2. I am pulling out all the stops to get the above
YARN efforts into reasonable shape for 2.6.

Regarding the remaining items that I haven't paid much attention to, the NM
restart effort - mostly done by Jason Lowe - YARN-1336 is all in. YARN-291,
YARN-18, and YARN-1964 are not going to make the cut from what I see.

I edited the road-map to reflect this latest reality.

+Vinod

On Tue, Sep 23, 2014 at 3:03 PM, Arun Murthy a...@hortonworks.com wrote:

 Looks like most of the content is in and hadoop-2.6 is shaping up nicely.

 I'll create branch-2.6 by end of the week and we can go from there to
 stabilize it - hopefully in the next few weeks.

 Thoughts?

 thanks,
 Arun

 On Tue, Aug 12, 2014 at 1:34 PM, Arun C Murthy a...@hortonworks.com
 wrote:

  Folks,
 
   With hadoop-2.5 nearly done, it's time to start thinking ahead to
  hadoop-2.6.
 
   Currently, here is the Roadmap per the wiki:
 
  • HADOOP
  • Credential provider HADOOP-10607
  • HDFS
  • Heterogeneous storage (Phase 2) - Support APIs for
 using
  storage tiers by the applications HDFS-5682
  • Memory as storage tier HDFS-5851
  • YARN
  • Dynamic Resource Configuration YARN-291
  • NodeManager Restart YARN-1336
  • ResourceManager HA Phase 2 YARN-556
  • Support for admin-specified labels in YARN YARN-796
  • Support for automatic, shared cache for YARN
 application
  artifacts YARN-1492
  • Support NodeGroup layer topology on YARN YARN-18
  • Support for Docker containers in YARN YARN-1964
  • YARN service registry YARN-913
 
   My suspicion is, as is normal, some will make the cut and some won't.
  Please do add/subtract from the list as appropriate. Ideally, it would be
  good to ship hadoop-2.6 in a 6-8 weeks (say, October) to keep up a
 cadence.
 
   More importantly, as we discussed previously, we'd like hadoop-2.6 to be
  the *last* Apache Hadoop 2.x release which support JDK6. I'll start a
  discussion with other communities (HBase, Pig, Hive, Oozie etc.) and see
  how they feel about this.
 
  thanks,
  Arun
 
 


 --

 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/





Re: Thinking ahead to hadoop-2.6

2014-09-24 Thread Vinod Kumar Vavilapalli
We can branch off in a week or two so that work on branch-2 itself can go
ahead with other features that can't fit in 2.6. Independent of that, we
can then decide on the timeline of the release candidates once branch-2.6
is close to being done w.r.t the planned features.

Branching it off can let us focus on specific features that we want in for
2.6 and then eventually blockers for the release, nothing else. There is a
trivial pain of committing to one more branch, but it's worth it in this
case IMO.

A lot of effort is happening in parallel on the YARN side from what I
see. 2.6 is a little bulky, at least on the YARN side, and I'm afraid that if
we don't branch off and selectively try to get stuff in, it is likely to be
perpetually delayed.

My 2 cents.

+Vinod

On Wed, Sep 24, 2014 at 3:28 PM, Suresh Srinivas sur...@hortonworks.com
wrote:

 Given some of the features are in final stages of stabilization,
 Arun, we should hold off creating 2.6 branch or building an RC by a week?
 All the features in flux are important ones and worth delaying the release
 by a week.

 On Wed, Sep 24, 2014 at 11:36 AM, Andrew Wang andrew.w...@cloudera.com
 wrote:

  Hey Nicholas,
 
  My concern about Archival Storage isn't related to the code quality or
 the
  size of the feature. I think that you and Jing did good work. My concern
 is
  that once we ship, we're locked into that set of archival storage APIs,
 and
  these APIs are not yet finalized. Simply being able to turn off the
 feature
  does not change the compatibility story.
 
  I'm willing to devote time to help review these JIRAs and kick the tires
 on
  the APIs, but my point above was that I'm not sure it'd all be done by
 the
  end of the week. Testing might also reveal additional changes that need
 to
  be made, which also might not happen by end-of-week.
 
  I guess the question before us is if we're comfortable putting something
 in
  branch-2.6 and then potentially adding API changes after. I'm okay with
  that as long as we're all aware that this might happen.
 
  Arun, as RM is this cool with you? Again, I like this feature and I'm
 fine
  with it's inclusion, just a heads up that we might need some extra time
 to
  finalize things before an RC can be cut.
 
  Thanks,
  Andrew
 
  On Tue, Sep 23, 2014 at 7:30 PM, Tsz Wo (Nicholas), Sze 
  s29752-hadoop...@yahoo.com.invalid wrote:
 
   Hi,
  
   I am worry about KMS and transparent encryption since there are quite
  many
   bugs discovered after it got merged to branch-2.  It gives us an
  impression
   that the feature is not yet well tested.  Indeed, transparent
 encryption
  is
   a complicated feature which changes the core part of HDFS.  It is not
  easy
   to get everything right.
  
  
   For HDFS-6584: Archival Storage, it is a relatively simple and low risk
   feature.  It introduces a new storage type ARCHIVE and the concept of
  block
   storage policy to HDFS.  When a cluster is configured with ARCHIVE
  storage,
   the blocks will be stored using the appropriate storage types specified
  by
   storage policies assigned to the files/directories.  Cluster admin
 could
   disable the feature by simply not configuring any storage type and not
   setting any storage policy as before.   As Suresh mentioned, HDFS-6584
 is
   in the final stages to be merged to branch-2.
  
   Regards,
   Tsz-Wo
  
  
  
   On Wednesday, September 24, 2014 7:00 AM, Suresh Srinivas 
   sur...@hortonworks.com wrote:
  
  
   
   
   I actually would like to see both archival storage and single replica
   memory writes to be in 2.6 release. Archival storage is in the final
   stages
   of getting ready for branch-2 merge as Nicholas has already indicated
 on
   the dev mailing list. Hopefully HDFS-6581 gets ready sooner. Both of
  these
   features are being in development for sometime.
   
   On Tue, Sep 23, 2014 at 3:27 PM, Andrew Wang 
 andrew.w...@cloudera.com
   wrote:
   
Hey Arun,
   
Maybe we could do a quick run through of the Roadmap wiki and
   add/retarget
things accordingly?
   
I think the KMS and transparent encryption are ready to go. We've
 got
  a
very few further bug fixes pending, but that's it.
   
Two HDFS things that I think probably won't make the end of the week
  are
archival storage (HDFS-6584) and single replica memory writes
   (HDFS-6581),
which I believe are under the HSM banner. HDFS-6484 was just merged
 to
trunk and I think needs a little more work before it goes into
  branch-2.
HDFS-6581 hasn't even been merged to trunk yet, so seems a bit
 further
   off
yet.
   
Just my 2c as I did not work directly on these features. I just
   generally
shy away from shipping bits quite this fresh.
   
Thanks,
Andrew
   
On Tue, Sep 23, 2014 at 3:03 PM, Arun Murthy a...@hortonworks.com
   wrote:
   
 Looks like most of the content is in and hadoop-2.6 is shaping up
   nicely.

 I'll create branch-2.6 by end of the week and we 

Re: Git repo ready to use

2014-09-24 Thread Ted Yu
Billie found out that Hadoop-Common-2-Commit should be the build that
publishes artifacts.

Thanks Billie.

On Wed, Sep 24, 2014 at 4:20 PM, Ted Yu yuzhih...@gmail.com wrote:

 FYI

 I made some changes to:
 https://builds.apache.org/view/All/job/Hadoop-branch2

 because it until this morning was using svn to build.

 Would 2.6.0-SNAPSHOT maven artifacts be updated after the build ?

 Cheers


 On Mon, Sep 15, 2014 at 11:14 AM, Todd Lipcon t...@cloudera.com wrote:

 Hey all,

 For those of you who like to see the entire history of a file going back
 to
 2006, I found I had to add a new graft to .git/info/grafts:

 # Project un-split in new writable git repo
 a196766ea07775f18ded69bd9e8d239f8cfd3ccc
 928d485e2743115fe37f9d123ce9a635c5afb91a
 cd66945f62635f589ff93468e94c0039684a8b6d
 77f628ff5925c25ba2ee4ce14590789eb2e7b85b

 FWIW, my entire file now contains:

 # Project split
 5128a9a453d64bfe1ed978cf9ffed27985eeef36
 6c16dc8cf2b28818c852e95302920a278d07ad0c
 6a3ac690e493c7da45bbf2ae2054768c427fd0e1
 6c16dc8cf2b28818c852e95302920a278d07ad0c
 546d96754ffee3142bcbbf4563c624c053d0ed0d
 6c16dc8cf2b28818c852e95302920a278d07ad0c
 4e569e629a98a4ef5326e5d25a84c7d57b5a8f7a
 c78078dd2283e2890018ff0e87d751c86163f99f

 # Project un-split in new writable git repo
 a196766ea07775f18ded69bd9e8d239f8cfd3ccc
 928d485e2743115fe37f9d123ce9a635c5afb91a
 cd66945f62635f589ff93468e94c0039684a8b6d
 77f628ff5925c25ba2ee4ce14590789eb2e7b85b

 which seems to do a good job for me (not sure if the first few lines are
 necessary anymore in the latest world)

 -Todd



 On Fri, Sep 12, 2014 at 11:31 AM, Colin McCabe cmcc...@alumni.cmu.edu
 wrote:

  It's an issue with test-patch.sh.  See
  https://issues.apache.org/jira/browse/HADOOP-11084
 
  best,
  Colin
 
  On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang andrew.w...@cloudera.com
  wrote:
   We're still not seeing findbugs results show up on precommit runs. I
 see
   that we're archiving ../patchprocess/*, and Ted thinks that since
 it's
   not in $WORKSPACE it's not getting picked up. Can we get confirmation
 of
   this issue? If so, we could just add patchprocess to the toplevel
   .gitignore.
  
   On Thu, Sep 4, 2014 at 8:54 AM, Sangjin Lee sjl...@gmail.com wrote:
  
   That's good to know. Thanks.
  
  
   On Wed, Sep 3, 2014 at 11:15 PM, Vinayakumar B 
 vinayakum...@apache.org
  
   wrote:
  
I think its still pointing to old svn repository which is just read
  only
now.
   
You can use latest mirror:
https://github.com/apache/hadoop
   
Regards,
Vinay
On Sep 4, 2014 11:37 AM, Sangjin Lee sjl...@gmail.com wrote:
   
 It seems like the github mirror at
https://github.com/apache/hadoop-common
 has stopped getting updates as of 8/22. Could this mirror have
 been
broken
 by the git transition?

 Thanks,
 Sangjin


 On Fri, Aug 29, 2014 at 11:51 AM, Ted Yu yuzhih...@gmail.com
  wrote:

  From
 https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console
  :
 
  ERROR: No artifacts found that match the file pattern
  trunk/hadoop-hdfs-project/*/target/*.tar.gz. Configuration
  error?ERROR 
 http://stacktrace.jenkins-ci.org/search?query=ERROR
  :
  'trunk/hadoop-hdfs-project/*/target/*.tar.gz' doesn't match
  anything,
  but 'hadoop-hdfs-project/*/target/*.tar.gz' does. Perhaps that's
  what
  you mean?
 
 
  I corrected the path to hdfs tar ball.
 
 
  FYI
 
 
 
  On Fri, Aug 29, 2014 at 8:48 AM, Alejandro Abdelnur 
   t...@cloudera.com

  wrote:
 
   it seems we missed updating the HADOOP precommit job to use
  Git, it
was
   still using SVN. I've just updated it.
  
   thx
  
  
   On Thu, Aug 28, 2014 at 9:26 PM, Ted Yu yuzhih...@gmail.com
 
   wrote:
  
Currently patchprocess/ (contents shown below) is one level
   higher
 than
${WORKSPACE}
   
diffJavadocWarnings.txt
   newPatchFindbugsWarningshadoop-hdfs.html
 patchFindBugsOutputhadoop-hdfs.txt
   patchReleaseAuditOutput.txt
 trunkJavadocWarnings.txt
filteredPatchJavacWarnings.txt
 newPatchFindbugsWarningshadoop-hdfs.xml
patchFindbugsWarningshadoop-hdfs.xml
   patchReleaseAuditWarnings.txt
filteredTrunkJavacWarnings.txt  patch
patchJavacWarnings.txt
 testrun_hadoop-hdfs.txt
jirapatchEclipseOutput.txt
 patchJavadocWarnings.txt
 trunkJavacWarnings.txt
   
Under Files to archive input box of
PreCommit-HDFS-Build/configure, I
   saw:
   
'../patchprocess/*' doesn't match anything, but '*' does.
  Perhaps
  that's
what you mean?
   
I guess once patchprocess is moved back under
 ${WORKSPACE}, a
  lot
of
   things
would be back to normal.
   
Cheers
   
On Thu, Aug 28, 2014 at 9:16 PM, Alejandro Abdelnur 
 

[jira] [Created] (HADOOP-11130) NFS updateMaps OS check is reversed

2014-09-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11130:
-

 Summary: NFS updateMaps OS check is reversed
 Key: HADOOP-11130
 URL: https://issues.apache.org/jira/browse/HADOOP-11130
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer


getent is fairly standard, dscl is not.  Yet the code logic prefers dscl for 
non-Linux platforms. This code should check for OS X and use dscl there and, 
if not, then use getent.  See comments.
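
A sketch of the intended ordering (a hypothetical helper, not the actual NFS 
code; the exact dscl and getent arguments are illustrative):

// Hypothetical helper showing the intended check: use dscl only on OS X,
// and fall back to getent everywhere else.
import java.util.Arrays;

public final class IdMappingCommands {

  private IdMappingCommands() {
  }

  /** Command used to enumerate user accounts on this platform. */
  static String[] listUsersCommand() {
    boolean isMac = System.getProperty("os.name", "").startsWith("Mac");
    if (isMac) {
      // OS X ships no getent; Directory Service lookups go through dscl.
      return new String[] {"dscl", ".", "-list", "/Users"};
    }
    // getent is the portable default on Linux and most other Unixes.
    return new String[] {"getent", "passwd"};
  }

  public static void main(String[] args) {
    System.out.println(Arrays.toString(listUsersCommand()));
  }
}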



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11131) getUsersForNetgroupCommand doesn't work for OS X

2014-09-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11131:
-

 Summary: getUsersForNetgroupCommand doesn't work for OS X
 Key: HADOOP-11131
 URL: https://issues.apache.org/jira/browse/HADOOP-11131
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer


Apple doesn't ship getent, which this command assumes.  We should use dscl 
instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11132) checkHadoopHome still uses HADOOP_HOME

2014-09-24 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11132:
-

 Summary: checkHadoopHome still uses HADOOP_HOME
 Key: HADOOP-11132
 URL: https://issues.apache.org/jira/browse/HADOOP-11132
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer


It should be using HADOOP_PREFIX.
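
A sketch of the fix direction (a hypothetical resolver, not the actual 
checkHadoopHome code): prefer HADOOP_PREFIX and keep HADOOP_HOME only as a 
legacy fallback.

// Hypothetical resolver: read HADOOP_PREFIX first, fall back to the legacy
// HADOOP_HOME name only if the preferred variable is unset.
public final class HadoopPrefixResolver {

  private HadoopPrefixResolver() {
  }

  static String resolvePrefix() {
    String prefix = System.getenv("HADOOP_PREFIX");
    if (prefix == null || prefix.isEmpty()) {
      prefix = System.getenv("HADOOP_HOME");
    }
    return prefix;
  }

  public static void main(String[] args) {
    System.out.println("Hadoop prefix: " + resolvePrefix());
  }
}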



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)