Rajesh Balamohan created HADOOP-13432:
-
Summary: S3A: Consider using TransferManager.download for
copyToLocalFile
Key: HADOOP-13432
URL: https://issues.apache.org/jira/browse/HADOOP-13432
Project:
I think I understand a bit better, though now I ask how this date is
different from the release date. Based on the HowToRelease instructions, we
set the release date to when the release vote passes. So, start of release
vote vs. end of release vote doesn't seem that different, and these dates
are s
> Andrew: I bet many would assume it's the release date, like how Ubuntu
releases are numbered.
Good point. Maybe I confused you due to a lack of explanation.
I assumed that "branch cut-off timing" means the time the branch is frozen,
i.e., when the release vote starts. It's because the relea
Hi Ray, if you're going to do a wiki cleanup, fair warning that I filed
this INFRA JIRA about the wiki being terribly slow, and they closed it as
WONTFIX:
https://issues.apache.org/jira/browse/INFRA-12283
So if you'd actually like to undertake a wiki cleanup, we should also
consider migrating the
Thanks for replies Akira and Tsuyoshi, inline:
Akira: Assuming 3.0.0-alpha1 will be released between 2.7.0 and 2.8.0, we
> need to add 3.0.0-alphaX if 2.8.0 is in the fix versions of a jira and we
> don't need to add 3.0.0-alphaX if 2.7.0 is in the fix versions of a jira.
> Is it right?
Yes, cor
Hi Vinod,
Thanks, everyone, for starting the discussion!
My suggestion is adding the date when branch cut is done: like
3.0.0-alpha1-20160724, 2.8.0-20160730 or something.
Pros:
- It's totally ordered. If we have a policy such as backporting
to maintenance branches after the date, users can find that
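The total-ordering claim can be sanity-checked with a quick Python sketch; it assumes the suffix after the final hyphen is a YYYYMMDD branch-cut date, as in the example version strings above:

```python
# Sketch: sort the proposed date-suffixed version strings chronologically.
# Assumes the text after the final "-" is a YYYYMMDD branch-cut date.
from datetime import datetime

def branch_cut_date(version):
    return datetime.strptime(version.rsplit("-", 1)[1], "%Y%m%d")

versions = ["2.8.0-20160730", "3.0.0-alpha1-20160724"]
ordered = sorted(versions, key=branch_cut_date)
# 3.0.0-alpha1-20160724 sorts first, since its branch was cut earlier
```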
Thanks Vinod and Andrew for the summary.
> Here's an attempt at encoding this policy as a set of rules for
setting fix
> versions (credit to Allen):
>
> 1) Set the fix version for all a.b.c versions, where c > 0.
> 2) For each major release line, set the lowest a.b.0 version.
Assuming 3.0.0-al
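For illustration, the two rules quoted above could be encoded roughly like this (a sketch, not project tooling; it assumes plain a.b.c version strings with no -alphaX suffix):

```python
# Sketch of the two fix-version rules:
#   1) keep every a.b.c version where c > 0;
#   2) per major release line, keep only the lowest a.b.0 version.
def fix_versions(versions):
    parsed = sorted(tuple(int(x) for x in v.split(".")) for v in versions)
    keep = [v for v in parsed if v[2] > 0]            # rule 1
    lowest_dot_zero = {}
    for a, b, c in parsed:                            # rule 2
        if c == 0 and a not in lowest_dot_zero:
            lowest_dot_zero[a] = (a, b, c)
    keep.extend(lowest_dot_zero.values())
    return sorted(".".join(map(str, v)) for v in keep)
```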
+1 for the source tarball.
- Downloaded source tarball and binary tarball
- Verified signatures and checksums
- Compiled and built a single node cluster
- Compiled Hive 2.1.0/1.2.1 and Tez 0.8.4/0.7.1 using Hadoop 2.7.3 pom
successfully
- Ran some Hive on Tez queries successfully
Thanks,
Akira
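The "verified checksums" step in vote reports like the one above can be sketched in a few lines of Python. This assumes a single SHA-256 digest for simplicity; releases of this era actually published .mds files covering several digest algorithms, so the file names and algorithm here are illustrative only:

```python
# Sketch: recompute an artifact's SHA-256 and compare it against the
# recorded digest (algorithm and file names are illustrative only).
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def checksum_matches(path, recorded_hex):
    return sha256_of(path) == recorded_hex.strip().lower()

# e.g. checksum_matches("hadoop-2.7.3-RC0.tar.gz", recorded_digest)
```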
+1 (non-binding)
Thanks, Vinod, for all of your hard work, and congratulations on completing this
release.
After downloading and building the source, I installed Hadoop 2.7.3 RC0 on a
3-node, multi-tenant, insecure cluster. I ran manual tests to ensure the
following:
- Ensure that user limit perc
+1 (non-binding).
- Built tarball from source.
- Deployed pseudo-distributed cluster.
- Ran sleep job.
Thank you Vinod!
Regards,
Kuhu Shukla
On Tuesday, July 26, 2016 4:17 PM, Andrew Wang
wrote:
On Tue, Jul 26, 2016 at 1:23 PM, Karthik Kambatla
wrote:
> IIRR, the vote is o
Coming in late to an old thread.
I was looking around at the Hadoop documentation (hadoop.apache.org and
wiki.apache.org/hadoop) and I'd sum up the current state of the
documentation as follows:
1. hadoop.apache.org is pretty clearly full of technical information.
My only minor nit here i
Lei (Eddy) Xu created HADOOP-13431:
--
Summary: Fix error propagation when hot swap drives disallow HSM
types.
Key: HADOOP-13431
URL: https://issues.apache.org/jira/browse/HADOOP-13431
Project: Hadoop
Thanks everyone...that helped. I'll go ahead and edit the Wiki to clarify
the expectation.
I got a successful build using:
~/code/hadoop$ mvn install -DskipTests
To respond to Vinod's questions:
I think the answer is trunk. I obtained the source code using:
git clone git://git.apache.org/ha
Steven K. Wong created HADOOP-13430:
---
Summary: Optimize and fix getFileStatus in S3A
Key: HADOOP-13430
URL: https://issues.apache.org/jira/browse/HADOOP-13430
Project: Hadoop Common
Issue T
On Tue, Jul 26, 2016 at 1:23 PM, Karthik Kambatla
wrote:
> IIRR, the vote is on source artifacts and binaries are for convenience.
>
Let me refine this statement a bit. Both the binary tarball and the JARs
we publish to Maven are still official release artifacts. This is why we
need L&Ns for th
Thanks Vinod for forking the thread. Let me try and summarize what Allen
and I talked about in the previous thread.
Currently, we've been marking JIRAs with fix versions of both 2.6.x and
2.7.x. IIUC, the chronological ordering between these two lines is actually
not important. If you're on 2.6.1,
* Verified mds and pgp signatures of both source and binary
* Built tarball from source on OS X 10.11.6 (El Capitan)
* Deployed in pseudo-distributed mode
* Ran sleep jobs and other randomly selected tests on both MapReduce and Tez
* Visually verified the RM and history server UIs
Thanks,
Eric
Daryn Sharp created HADOOP-13429:
Summary: Dispose of unnecessary SASL servers
Key: HADOOP-13429
URL: https://issues.apache.org/jira/browse/HADOOP-13429
Project: Hadoop Common
Issue Type: Sub
The current HowToContribute guide expressly tells folks that they
should ensure all the tests run and pass before and after their
change.
Sounds like we're due for an update if the expectation is now that
folks should be using -DskipTests and doing runs on particular modules.
Maybe we could instruct fol
IIRR, the vote is on source artifacts and binaries are for convenience.
If that is right, I am open to either option - do another RC, or continue
this vote and fix the binary artifacts.
On Tue, Jul 26, 2016 at 12:11 PM, Vinod Kumar Vavilapalli <
vino...@apache.org> wrote:
> Thanks Daniel and Wei
Tested as follows:
- Deployed a pseudo cluster from the RC0 tar
- Verified signature and checksum
- Ran MR sleep on YARN
  - With/without node labels enabled
- Verified ATS V1 (one small issue, though not a regression: the tracking URL
for a running app is shown as Unassigned; will check and raise if required)
But everyone, please do continue your sanity checking on RC0 in case there are
more issues to be fixed.
Thanks
+Vinod
> On Jul 26, 2016, at 12:11 PM, Vinod Kumar Vavilapalli
> wrote:
>
> Thanks Daniel and Wei.
>
> I think these are worth fixing, I’m withdrawing this RC. Will look at fixing
Thanks Daniel and Wei.
I think these are worth fixing; I’m withdrawing this RC. I will look at fixing
these issues and roll a new candidate with the fixes as soon as possible.
Thanks
+Vinod
> On Jul 26, 2016, at 11:05 AM, Wei-Chiu Chuang wrote:
>
> I noticed two issues:
>
> (1) I ran hadoop ch
Forking the thread to make sure it attracts enough eyeballs. The earlier one
was about 3.0.0 specifically, and I don’t think enough people were watching it.
I’ll try to summarize a bit.
# Today’s state of release numbering and ordering:
So far, all the releases we have done, we have follow
The short answer is that it is expected to pass without any errors.
On branch-2.x, that command passes cleanly without any errors though it takes
north of 10 minutes. Note that I run it with -DskipTests - you don’t want to
wait for all the unit tests to run, that’ll take too much time. I expect
Hi,
In the How To Contribute doc, it says:
"Try getting the project to build and test locally before writing code"
So, just to be 100% certain before I keep troubleshooting things, this
means I should be able to run
mvn clean install -Pdist -Dtar
without getting any failures or errors at a
Hi all,
I'm trying to generate JDiff for subprojects of Hadoop; some updates:
- *Common*: JDiff cannot be generated; filed
https://issues.apache.org/jira/browse/HADOOP-13428 and debugging that.
- *HDFS*: It pointed to an older version (2.6.0); we need to upgrade it to
the latest stable release (p
+1
Thanks
+Vinod
> On Jul 26, 2016, at 7:39 AM, Wangda Tan wrote:
>
> let's try to use both JDiff and the new tool and compare results, because this
> is the first time with the new tool.
>
> Appreciate your time helping us with this effort!
Wangda Tan created HADOOP-13428:
---
Summary: Fix hadoop-common to generate jdiff
Key: HADOOP-13428
URL: https://issues.apache.org/jira/browse/HADOOP-13428
Project: Hadoop Common
Issue Type: Bug
I noticed two issues:
(1) I ran hadoop checknative, but it seems the binary tarball was not
compiled with the native library for Linux. In contrast, the Hadoop built
from the source tarball with maven -Pnative can find the native libraries on
the same host.
(2) I noticed that the release dates in CHA
Steve Loughran created HADOOP-13427:
---
Summary: Eliminate needless uses of FileSystem.exists, isFile,
isDirectory
Key: HADOOP-13427
URL: https://issues.apache.org/jira/browse/HADOOP-13427
Project:
Daryn Sharp created HADOOP-13426:
Summary: More efficiently build IPC responses
Key: HADOOP-13426
URL: https://issues.apache.org/jira/browse/HADOOP-13426
Project: Hadoop Common
Issue Type: Su
Daryn Sharp created HADOOP-13425:
Summary: IPC layer optimizations
Key: HADOOP-13425
URL: https://issues.apache.org/jira/browse/HADOOP-13425
Project: Hadoop Common
Issue Type: Improvement
I just downloaded the build tarball and deployed it on a 2-node
cluster. It looks to me like it's compiled for the wrong platform:
# file /usr/lib/hadoop/bin/container-executor
/usr/lib/hadoop/bin/container-executor: setuid setgid Mach-O 64-bit
executable
I'm also seeing the no-native-librar
Thanks for putting this up, Vinod.
+1 (non-binding)
* verified signature and mds of source and binary tarball
* built from source tarball on CentOS 6
* built site documentation
* deployed 3-node cluster with NN-HA and RM-HA, ran example jobs
* built rpms by using Bigtop, deployed 3-node cluster
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/114/
[Jul 25, 2016 1:45:03 PM] (stevel) HADOOP-13406 S3AFileSystem: Consider reusing
filestatus in delete() and
[Jul 25, 2016 2:50:23 PM] (stevel) HADOOP-13188 S3A file-create should throw
error rather than ove
Hi Sean,
Sorry I didn't make it clear: let's try to use both JDiff and the new tool and
compare results, because this is the first time with the new tool.
Appreciate your time helping us with this effort!
Thanks,
Wangda
Sent from my iPhone
> On Jul 26, 2016, at 6:01 AM, Sean Busbey wrote:
>
>
+1(non-binding)
Downloaded and built from source.
Installed a cluster on 3 nodes and verified it by running simple MR jobs.
Verified RM HA and RM work-preserving restart with CapacityScheduler.
Thanks & Regards
Rohith Sharma K S
> On Jul 26, 2016, at 6:50 PM, Vinayakumar B wrote:
>
> +1 (binding)
>
+1 (binding)
1. Downloaded and built from branch-2.7.3.
2. Started up HDFS and YARN in a single-node cluster.
3. Ran the WordCount job multiple times successfully.
4. Verified the "Release Notes" available at the URL mentioned by Vinod.
Apart from that, I faced the same issues as Andrew Wang while running t
Just so I don't waste time chasing my tail, should I interpret this
email and the associated JIRA as the PMC preferring I not spend
volunteer time providing a compatibility breakdown as previously
discussed?
On Mon, Jul 25, 2016 at 7:54 PM, Wangda Tan wrote:
> I just filed ticket https://issues.a
Yes, the Java API Compliance Checker allows specifying Annotations to
pare down where incompatible changes happen. It was added some time
ago based on feedback from the Apache HBase project.
The limitations I've found are: 1) at least earlier versions only
supported annotations at the class level
Thank you Vinod.
+1 (non-binding)
- downloaded and built from source
- deployed HDFS-HA cluster and tested a few switching behaviors
- executed a few hdfs commands from the command line
- viewed basic UI
- ran HDFS/Common unit tests
- checked LICENSE and NOTICE files
Regards,
Rakesh
Intel
On Tue, Jul 2