On 16 April 2014 23:42, Mohammad Islam misla...@yahoo.com wrote:
Hi,
I tried to run a test case using this command from my Linux box:
mvn clean test -PtestKerberos -Dtest=TestJHSSecurity
And I got the following exception. I know it is related to setup the
principal and other kerberos
on apacheds, and
doesn't require any OS-level setup. There are some tests in Hadoop that
already use it.
thx
Alejandro
(phone typing)
On Apr 17, 2014, at 5:44, Steve Loughran ste...@hortonworks.com
wrote:
On 16 April 2014 23:42, Mohammad Islam misla...@yahoo.com wrote:
Hi,
I
jeri...@gmail.com wrote:
+1 * many. I'd love to see us clean this up. Getting to agreement on
where we are going would be a huge step forward.
--
Eric14 a.k.a. Eric Baldeschwieler
On Mon, Apr 14, 2014 at 3:33 PM, Steve Loughran ste...@hortonworks.com
wrote:
On 11 April 2014 18:37
ASF JIRA has been moving to role-based over group-based security -you may
be able to give more people a role than a group allows. But, as of last
week and a spark-initiated change, by default contributors can't assign
issues.
someone could talk to infra@apache and see if a move would help
On 14
I'd like to see YARN-2065 fixed -without that the AM restart feature
doesn't work. Or at least it works, but you can't create new containers
On 20 May 2014 07:40, Akira AJISAKA ajisa...@oss.nttdata.co.jp wrote:
Hi Arun,
I'd like to know when to release Hadoop 2.4.1.
It looks like all of the
On 28 May 2014 20:50, Niels Basjes ni...@basjes.nl wrote:
Hi,
Last week I ran into this problem again
https://issues.apache.org/jira/browse/MAPREDUCE-2094
What happens here is that the default implementation of the isSplitable
method in FileInputFormat is so unsafe that just about everyone
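The risk described is that the default FileInputFormat.isSplitable() answers true for every file, including compressed formats like gzip that cannot be split mid-stream. A standalone toy model of the safer check subclasses end up writing (not the real Hadoop API; the suffix list is illustrative):

```java
import java.util.Locale;
import java.util.Set;

public class SplitCheck {
    // Suffixes of codecs that cannot be split mid-stream (illustrative list)
    private static final Set<String> NON_SPLITTABLE =
            Set.of(".gz", ".deflate", ".zip");

    // Safer default: only claim a file is splittable when its name does
    // not suggest a whole-stream compression codec.
    static boolean isSplitable(String path) {
        String lower = path.toLowerCase(Locale.ROOT);
        for (String suffix : NON_SPLITTABLE) {
            if (lower.endsWith(suffix)) {
                return false;
            }
        }
        return true;
    }
}
```

The real fix would consult the configured CompressionCodec rather than file names; this sketch only shows why "always true" is the wrong default.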
On 4 June 2014 15:33, xeon xeonmailingl...@gmail.com wrote:
Hello all,
I added the capacity to tolerate arbitrary faults in Hadoop 0.20 by
changing some Hadoop classes.
You can see my work in http://dl.acm.org/citation.cfm?id=2116190
is there a URL for non-ACM members?
Now, I would like
?
- Henry
On Fri, May 16, 2014 at 2:48 AM, Steve Loughran ste...@hortonworks.com
wrote:
ASF JIRA has been moving to role-based over group-based security -you may
be able to give more people a role than a group allows. But, as of last
week and a spark-initiated change, by default contributors
+1 (binding)
1. checked out the tagged release on branch-2.4.1, 14f7c6a, built and
installed into the local mvn repo
2. ran the in-incubation slider functional tests
We can't create containers after an AM-restart, YARN-2065, but this was
expected. I don't consider that a critical issue as we're
With this in mind, I struggle to see any upsides to introducing JDK7-only
APIs to trunk. Please let's not do anything on HADOOP-10530 or related
until we agree on this.
Thanks,
Andrew
On Mon, Apr 14, 2014 at 3:31 PM, Steve Loughran ste...@hortonworks.com
wrote:
On 14 April 2014 17:46, Andrew
On 18 June 2014 12:32, Andrew Wang andrew.w...@cloudera.com wrote:
Actually, a lot of our customers are still on JDK6, so if anything, its
popularity hasn't significantly decreased. We still test and support JDK6
for CDH4 and CDH5. The claim that branch-2 is effectively JDK7 because no
one
see any point to
switching trunk
over until that's true, for the aforementioned reasons.
Best,
Andrew
On Wed, Jun 18, 2014 at 12:08 PM, Steve Loughran
ste...@hortonworks.com
wrote:
I also think we need
On 19 June 2014 10:07, javadba java...@gmail.com wrote:
The following link from January states that Windows 7 (NOT Server) should
work. Has anyone been successful with this and have any comments on the
process?
https://wiki.apache.org/hadoop/Hadoop2OnWindows
On 20 June 2014 17:01, Andrew Wang andrew.w...@cloudera.com wrote:
Thanks everyone for the discussion so far. I talked with some of our other
teams and thought about the issue some more.
Regarding branch-2, we can't do much because of compatibility. Dropping
support for a JDK is supposed to
Having gone back through the entire thread I can see we've made progress
here, as the discussion has moved on from when to move to Java 7 to when to
move to Java 8... which, I've always felt, has the most appeal from the
coding side. Java 8 tomorrow is the most compelling reason to move to Java
7 today.
On 20 June 2014 21:35, Steve Loughran ste...@hortonworks.com wrote:
This actually argues in favour of
-renaming branch-2 branch-3 after a release
-making trunk hadoop-4
-getting hadoop 3 released off the new branch-3 out in 2014, effectively
being an iteration of branch-2 with updated
On 21 June 2014 08:01, Andrew Wang andrew.w...@cloudera.com wrote:
Hi Steve, let me confirm that I understand your proposal correctly:
- Release an intermediate Hadoop 3 a few months out, based on JDK7 and with
bumped library versions
- Release a Hadoop 4 mid next year, based on JDK8
I
+1 (binding)
1. rm -rf ~/.m2/repository/org/apache/hadoop/
2. build and test of slider/incubating develop branch with profile
hadoop-2.4.1, which downloaded all the new artifacts from the repository
3. -tests passed
-steve
On 20 June 2014 23:51, Arun C Murthy
You have a few more hours to submit talks to apachecon EU ... something to
do during test runs
http://events.linuxfoundation.org//events/apachecon-europe/program/cfp
--
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to
which it is addressed and
Guava is a separate problem and I think we should have a separate
discussion: what can we do about Guava? That's more traumatic than a JDK
update, I fear, as the Guava releases care a lot less about compatibility.
I don't worry about JDK updates removing classes like StringBuffer
because
one argument in favour of 80 is that it's easier to side-by-side diff
even so, I find it restrictive in Java code; once you go for long env vars
in bash-land then you are in trouble. As for python, you have to indent
according to your code flow.
were we to have a special getout of 120 chars in
On 29 July 2014 22:14, Sandy Ryza sandy.r...@cloudera.com wrote:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
Sandy is correct: it is a semantic compatibility issue. The notion of an
interface was defined in the early 1970s by D. L. Parnas (see:
On 1 August 2014 16:25, Jean-Baptiste Note jbn...@gmail.com wrote:
JeanBaptisteNote
done
I'm =0 on convenience, but like you said, that's because most people have
drifted into public/private git repos for development of branches (though
that's partly to avoid the ongoing review-before-each commit overhead)
-moving to Git could encourage more in-ASF branch dev by committers
-if we
+1 binding
slider validation
purge all 2.5.0 artifacts in the local mvn repo (fish shell):
rm -rf ~/.m2/repository/org/apache/hadoop/**/*2.5.0*
clean slider build -verified download of artifacts from staging repo
run all the tests, especially the one that we'd turned off for 2.4.0:
On 6 August 2014 22:16, Karthik Kambatla ka...@cloudera.com wrote:
3. Force-push on feature-branches is allowed. Before pulling in a feature,
the feature-branch should be rebased on latest trunk and the changes
applied to trunk through git rebase --onto or git cherry-pick
commit-range.
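The rebase step in point 3 can be demonstrated end-to-end in a throwaway repository (branch names are illustrative; `git init -b` needs git 2.28+):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# a repo where trunk moves on while a feature branch is in flight
git init -q -b trunk repo
cd repo
git config user.email you@example.com
git config user.name you
echo a > f; git add f; git commit -qm "trunk-1"
git checkout -qb feature
echo b > g; git add g; git commit -qm "feature-1"
git checkout -q trunk
echo c > h; git add h; git commit -qm "trunk-2"
# replay the feature commits on top of the latest trunk
git rebase -q --onto trunk "$(git merge-base trunk feature)" feature
count=$(git rev-list --count HEAD)
```

After the rebase, `feature` carries trunk-1, trunk-2, and feature-1; the rebased branch can then be fast-forwarded or cherry-picked into trunk.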
I'd
+1 binding, same tests as before: purging mvn repo, slider test suite
including fault injection, followed by full test of S3N and openstack FS
contracts (because I know not enough people test those)
On 6 August 2014 21:59, Karthik Kambatla ka...@cloudera.com wrote:
Hi folks,
I have put
+1 (binding)
as this gets rolled out, I think we may also want to define a branch naming
policy like feature/${JIRA}-text for features, as well as a policy for
deleting retired branches (tag the last commit then rm the branch)...then
we can cull some branches that are in SVN.
-steve
On 9 August
thanks for doing this Karthik
for a release, talk to pr...@apache.org as they like to work on the
announcements (and don't like to be left out, equally importantly)
On 12 August 2014 17:16, Karthik Kambatla ka...@cloudera.com wrote:
Thanks everyone for trying out the RC and voting.
The vote
to press@. I hadn't realized we need to
include them. Just sent an email.
On Tue, Aug 12, 2014 at 9:39 AM, Steve Loughran ste...@hortonworks.com
wrote:
thanks for doing this Karthik
for a release, talk to pr...@apache.org as they like to work on the
announcements (and don't like
moving to SLF4J as an API is independent: it's just a better API for
logging than commons-logging, it was already a dependency, and it doesn't
force anyone to switch to a new logging back end.
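A large part of that API advantage is parameterized messages: arguments are only formatted when the level is enabled. A toy model of the `{}` substitution (not the real SLF4J implementation) shows the idea:

```java
// Toy model of SLF4J-style {} substitution. In real SLF4J,
// log.debug("x={}", x) defers this formatting until the logger
// confirms DEBUG is enabled, so disabled calls are nearly free.
public class ParamLog {
    static String format(String msg, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = msg.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(msg, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        return sb.append(msg.substring(from)).toString();
    }
}
```

Contrast with commons-logging style `log.debug("x=" + x)`, where the concatenation happens whether or not DEBUG is on.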
On 15 August 2014 03:34, Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com wrote:
Hi,
Steve has started discussion
On 15 August 2014 17:20, Karthik Kambatla ka...@cloudera.com wrote:
However, IMO we already log too much at INFO level (particularly YARN).
Logging more at DEBUG level and lowering the overhead of enabling DEBUG
logging is preferable.
+1
This is the log4j properties file I've adopted for
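An illustrative log4j.properties along those lines (logger names and levels are examples only, not the author's actual file):

```properties
# Quieter INFO by default, with cheap per-component DEBUG opt-in
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
# dial noisy components down, opt specific ones into DEBUG
log4j.logger.org.apache.hadoop.yarn=WARN
log4j.logger.org.apache.hadoop.security=DEBUG
```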
On 19 August 2014 19:44, Konstantin Boudnik c...@apache.org wrote:
While it sounds like a topic for a different discussion (or list@), do you
think it would be too crazy to skip JDK7 completely and just go to JDK8
directly?
The trouble with that is that nobody who provides commercial big-data
On 19 August 2014 18:35, Arun Murthy a...@hortonworks.com wrote:
I suggest we do a 2.5.1 (with potentially other bug fixes) rather than fix
existing tarballs.
do we already have enough last-minute fixes to trigger a rerelease? With
all the testing that entails?
A simple 2.5.1 rebuild with
just caught up with this after some time offline... 15:48 PST is too late
for me.
I'd be -1 to a change to master because of that risk that it does break
existing code -especially people that have trunk off the git mirrors and
automated builds/merges to go with it.
master may be viewed as the
On 25 August 2014 23:45, Karthik Kambatla ka...@cloudera.com wrote:
Thanks for bringing these points up, Zhijie.
By the way, a revised How-to-commit wiki is at:
https://wiki.apache.org/hadoop/HowToCommitWithGit . Please feel free to
make changes and improve it.
looks good so far
I suspect
Now that hadoop is using git, I'm migrating my various work-in-progress
branches to the new commit tree
1. This is the process I've written up for using git format-patch then git
am to export the patch sequence and merge it in, then rebasing onto trunk
to finally get in sync
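That format-patch/am round trip can be sketched in a throwaway repository (all names hypothetical; `git init -b` needs git 2.28+):

```shell
set -e
tmp=$(mktemp -d)
# source repo with a feature branch off trunk
git init -q -b trunk "$tmp/src"
cd "$tmp/src"
git config user.email you@example.com
git config user.name you
echo base > code.txt; git add code.txt; git commit -qm "initial"
git checkout -qb mybranch
echo fix >> code.txt; git commit -qam "my fix"
# 1. export the branch as a mailbox-format patch series
git format-patch trunk -o "$tmp/patches" > /dev/null
# 2. apply the series onto a fresh copy of trunk with git am
git init -q -b trunk "$tmp/dst"
cd "$tmp/dst"
git config user.email you@example.com
git config user.name you
git fetch -q "$tmp/src" trunk
git reset -q --hard FETCH_HEAD
git am -q "$tmp/patches"/*.patch
applied=$(git log --format=%s -n 1)
```

`git am` preserves the original author and message, which is what makes this a clean way to migrate work-in-progress branches onto a new commit tree.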
if a recommended .gitconfig section was made available :)
I plan to play with format-patch some in the near future and might do this
myself, but if any git gurus already have this ready to go, feel free to
edit.
Does the patch submit code take it?
On Tue, Sep 2, 2014 at 4:10 AM, Steve Loughran
.gitconfig section was made available :)
I plan to play with format-patch some in the near future and might do this
myself, but if any git gurus already have this ready to go, feel free to
edit.
On Tue, Sep 2, 2014 at 4:10 AM, Steve Loughran ste...@hortonworks.com
wrote:
Now that hadoop is using
On 3 September 2014 02:47, Todd Lipcon t...@cloudera.com wrote:
On Tue, Sep 2, 2014 at 2:38 PM, Andrew Wang andrew.w...@cloudera.com
wrote:
Not to derail the conversation, but if CHANGES.txt is making backports
more
annoying, why don't we get rid of it? It seems like we should be able to
is there any way of isolating compatible/incompatible changes, new
features?
I know that any change is potentially incompatible —but it is still good to
highlight the things we know are likely to cause trouble
On 4 September 2014 02:51, Allen Wittenauer a...@altiscale.com wrote:
Nothing
On 15 September 2014 18:48, Allen Wittenauer a...@altiscale.com wrote:
It’s now September. With the passage of time, I have a lot of
doubts about this plan and where that trajectory takes us.
* The list of changes that are already in branch-2 scare the crap out of
any risk-averse
I know we've been ignoring the Guava version problem, but HADOOP-10868
added a transitive dependency on Guava 16 by way of Curator 2.6.
Maven currently forces the build to use Guava 11.0.2, but this hides at
compile time all code paths from Curator which may use classes/methods
that aren't
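One way to make the conflict explicit in a Maven POM, rather than letting dependency mediation pick a version silently, is to exclude Curator's transitive Guava so the build's own Guava is the only one in play (a sketch; the artifact coordinates are illustrative):

```xml
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>2.6.0</version>
  <exclusions>
    <!-- keep Curator from dragging in Guava 16 behind Hadoop's 11.0.2 -->
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

This doesn't fix Curator calling Guava 16-only methods at runtime; it only surfaces the mismatch where it can be seen and tested.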
Looks like HADOOP-11084 isn't complete —the patch to the build to get it
working post-git
before that patch the builds weren't working at all ... now it's just
getting the URLs wrong.
If you can work out the right URLs we can fix this easily enough
On 19 September 2014 09:24, Wangda Tan
if it takes
On 19 September 2014 10:54, Wangda Tan wheele...@gmail.com wrote:
Hi Steve,
I guess this problem should be also caused by wrong URL, if anybody have
admin access to Jenkins, correct URL should be easily found.
Thanks,
Wangda
On Fri, Sep 19, 2014 at 4:32 PM, Steve Loughran ste
guava 16 in their code and scrambling
to
make things work than the other way around.
On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran
ste...@hortonworks.com
wrote:
I know we've been ignoring the Guava version problem, but
HADOOP-10868
added a transitive dependency on Guava
looks like BUILDING.TXT will need changing -currently it declares the
dependency as optional
On 6 October 2014 12:19, Colin McCabe cmcc...@alumni.cmu.edu wrote:
On Thu, Oct 2, 2014 at 1:15 PM, Ted Yu yuzhih...@gmail.com wrote:
On my Mac and on Linux, I was able to
find
it addressed.
On Tue, Sep 23, 2014 at 2:09 PM, Steve Loughran ste...@hortonworks.com
wrote:
I'm using curator elsewhere, it does log a lot (as does the ZK client),
but
it solves a lot of problems. It's being adopted more downstream too.
I'm wondering if we can move the code to the extent we
On 24 October 2014 08:42, Devopam Mittra devo...@gmail.com wrote:
DevopamMittra
done
Yes, Guava is a constant pain; there are lots of open JIRAs related to it,
as it's the one dependency we can't seamlessly upgrade. Not unless we do
our own fork and reinsert the missing classes.
The most common uses in the code are
@VisibleForTesting (easily replicated)
and the Preconditions.check*() operations
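If those really are the two main uses, stand-ins are small. A hypothetical sketch of home-grown replacements (not Guava's actual source):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker with the same intent as Guava's @VisibleForTesting
@Retention(RetentionPolicy.SOURCE)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
@interface VisibleForTesting { }

// Minimal stand-in for the commonly used Preconditions checks
final class Preconditions2 {
    private Preconditions2() { }

    static void checkArgument(boolean expression, String message) {
        if (!expression) {
            throw new IllegalArgumentException(message);
        }
    }

    static <T> T checkNotNull(T reference, String message) {
        if (reference == null) {
            throw new NullPointerException(message);
        }
        return reference;
    }
}
```

The annotation is trivially replicated; the checks are a few lines each, which is why a fork-or-replicate strategy is even on the table.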
+1 binding
-patched slider pom to build against 2.6.0
-verified build did download, which it did at up to ~8Mbps. Faster than a
local build.
-full clean test runs on OS/X and Linux
Windows 2012:
Same thing. I did have to first build my own set of the windows native
binaries, by checking out
+1 binding
purged local m2 repos of org/apache/hadoop/*
rebuilt and reran slider unit and functional tests on OSX and Windows,
triggered Jenkins builds on CentOS and Debian. All passed.
On 13 November 2014 23:08, Arun C Murthy a...@hortonworks.com wrote:
Folks,
I've created another release
If you have a change that spans the projects you need to submit a separate
JIRA for each one anyway, to give each team the ability to
review/accept/reject the patch
On 18 November 2014 23:58, Ray Chiang rchi...@cloudera.com wrote:
I had to go back and look, but it was Akira Ajisaka that had
. Mind doing another check and make sure
nothing is amiss? Thanks.
On Fri, Nov 21, 2014 at 2:12 AM, Steve Loughran ste...@hortonworks.com
wrote:
I know there was work underway to ship hadoop 2.5.2, but has it actually
been released? I can't seem to find any vote details in my mail, but we
can we do HADOOP--001.patch
with the 001 being the revision.
-That numbering scheme guarantees listing order in directories c
-having .patch come after ensures that those people who have .patch bound
in their browser to a text editor (e.g. textmate) can view the patch with
ease
I know
On 25 November 2014 at 00:58, Bernd Eckenfels e...@zusammenkunft.net
wrote:
Hello,
Am Mon, 24 Nov 2014 16:16:00 -0800
schrieb Colin McCabe cmcc...@alumni.cmu.edu:
Conceptually, I think it's important to support patches that modify
multiple sub-projects. Otherwise refactoring things in
On Sat, Nov 22, 2014 at 10:24 AM, Steve Loughran ste...@hortonworks.com
wrote:
can we do HADOOP--001.patch
with the 001 being the revision.
-That numbering scheme guarantees listing order in directories c
-having .patch come after ensures that those people who have
I'm planning to flip the Javac language JVM settings to java 7 this week
https://issues.apache.org/jira/browse/HADOOP-10530
the latest patch also has a profile that sets the language to java8, for
the curious; one bit of code will need patching to compile there.
The plan for the change
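The shape of such a change, as a sketch of the POM fragments involved (the property and profile names are assumed from the discussion, not copied from the actual patch):

```xml
<properties>
  <!-- default language/bytecode level for the build -->
  <javac.version>1.7</javac.version>
</properties>
<profiles>
  <!-- opt-in profile for the curious: build at the Java 8 level -->
  <profile>
    <id>java8</id>
    <properties>
      <javac.version>1.8</javac.version>
    </properties>
  </profile>
</profiles>
```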
to be
switched to Java 7 as well.
Haohui
On Dec 1, 2014, at 5:41 AM, Steve Loughran ste...@hortonworks.com
wrote:
I'm planning to flip the Javac language JVM settings to java 7 this
week
https://issues.apache.org/jira/browse/HADOOP-10530
the latest patch also has a profile that sets
The latest migration status:
if the jenkins builds are happy then the patch will go in -I do that
monday morning 10:00 UTC
https://builds.apache.org/view/H-L/view/Hadoop/
Getting jenkins to work has been surprisingly difficult...it turns out
that those builds which we thought were java7 or
, Dec 7, 2014 at 2:09 PM, Steve Loughran ste...@hortonworks.com
wrote:
The latest migration status:
if the jenkins builds are happy then the patch will go in -I do that
monday morning 10:00 UTC
https://builds.apache.org/view/H-L/view/Hadoop/
Getting jenkins to work has been
On 8 December 2014 at 14:58, Ted Yu yuzhih...@gmail.com wrote:
Looks like there was still OutOfMemoryError :
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1964/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameDirAcrossSnapshottableDirs/
On 8 December 2014 at 19:58, Colin McCabe cmcc...@alumni.cmu.edu wrote:
It would be nice if we could have a separate .m2 directory per test
executor.
It seems like that would eliminate these race conditions once and for
all, at the cost of storing a few extra jars (proportional to the # of
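A minimal sketch of that isolation, assuming the standard Jenkins EXECUTOR_NUMBER variable and Maven's -Dmaven.repo.local property (the directory layout is an assumption):

```shell
# Each executor gets its own local repo so parallel builds stop
# racing on ~/.m2/repository. EXECUTOR_NUMBER is set by Jenkins;
# default to 0 so the sketch also runs outside Jenkins.
repo="${WORKSPACE:-$PWD}/.m2-executor-${EXECUTOR_NUMBER:-0}"
mkdir -p "$repo"
# the build would then be invoked as:
mvn_cmd="mvn -Dmaven.repo.local=$repo clean test"
echo "$mvn_cmd"
```

The cost is exactly what the mail estimates: each executor caches its own copies of the dependency jars.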
On 8 December 2014 at 19:48, Colin McCabe cmcc...@alumni.cmu.edu wrote:
Are there a lot of open JDK7
issues that would require a release to straighten out?
I don't think so —the 2.6 release was tested pretty aggressively on JDK7,
and solely on it for Windows. Pushing out a 2.7 release would
On 10 December 2014 at 10:31, malcolm malcolm.kaval...@oracle.com wrote:
Also, I have been requested to ensure my port is available on 2.4,
perceived as a more stable release. If I make changes to this branch are
they automatically available for 2.6, or will I need multiple JIRAs ?
nobody is
one more thing: this excludes object stores which don't offer
consistency and atomic create-no-overwrite and rename. You can't run all
hadoop apps directly on top of Amazon S3, without extra work (see netflix
S3mper). Object stores do not always behave as filesystems, even if they
implement the
On 13 December 2014 at 09:29, malcolm malcolm.kaval...@oracle.com wrote:
I am not sure what you mean by a thread-local buffer (in native code). In
Java this is pretty standard, but I couldn't find any implementation for C
code.
There's stuff around; I don't know how well it works across
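One C mechanism that does exist is compiler-level thread-local storage, the closest native analogue to Java's ThreadLocal. A minimal sketch (C11 `_Thread_local`, or GCC's `__thread`), leaking the buffer at thread exit for brevity:

```c
#include <stdlib.h>

#define SCRATCH_SIZE 4096

/* one pointer per thread; _Thread_local needs C11 (GCC also accepts
 * the older __thread spelling) */
static _Thread_local char *scratch;

/* lazily allocate the calling thread's private buffer */
char *get_scratch(void) {
    if (scratch == NULL) {
        scratch = malloc(SCRATCH_SIZE);
        /* sketch only: a real version would free this at thread exit,
         * e.g. via a pthread_key_create destructor */
    }
    return scratch;
}
```

Repeated calls from the same thread return the same buffer; different threads never see each other's pointer.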
a couple more benefits
1. when you post a patch you can add a comment like patch 003 killed NPE
in auth, and the comment history then integrates with the revisions. You
can also do this in your private git repository, to correlate commits
there with patch versions.
2. they list in creation
On 14 December 2014 at 16:52, Allen Wittenauer a...@altiscale.com wrote:
Well, slight correction: only one thing in the code that has been
replaced. There are two patches waiting to get reviewed and applied that
fix the rest of the shipping shell code: HADOOP-10788 and HADOOP-11346.
On 16 December 2014 at 16:01, malcolm malcolm.kaval...@oracle.com wrote:
1. Findbugs: 3 warnings in Java code (which of course I did not touch)
2. Test failures also with no connection to the error: a Java socket
timeout, ongoing issues with (1) transition to Java 7 builds and (2) some
On 16 December 2014 at 16:24, Yongjun Zhang yzh...@cloudera.com wrote:
Interesting to see that the setting is not configurable and it is not a
simple fix.
that's the price of closed source software
On 21 December 2014 at 06:23, Raghavendra Vaidya
raghavendra.vai...@gmail.com wrote:
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project
hadoop-common: An Ant BuildException has occured: exec returned: 1
[ERROR] around Ant part ...exec
Six years ago I filed a patch, HADOOP-6221, RPC Client operations cannot
be interrupted
https://issues.apache.org/jira/browse/HADOOP-6221
The latest patch is in sync with trunk.
Could someone take a look?
-Steve
I'm moving this all to common-dev@ as general is more where announcements
go than anything else.
I think too many patches are falling by the wayside. It takes a lot of
time and effort to get patches in; small patches that aren't viewed as
critical tend to atrophy: lost in patch-available state,
Done
On 3 February 2015 at 23:38:58, Jesse Crouch (je...@atxcursions.com) wrote:
JesseCrouch
I'm worrying more about the ongoing situation. As a release approaches,
someone effectively goes full time as the gatekeeper; for a good release
they should be saying too late! to most features, and yes only to
low-risk, non-critical bug fixes.
Which means that non-critical stuff doesn't get
Given my experience of Apache reviews, I don't know how much time to spend
on it. I'm curious about Gerrit, but again, if JIRA integration is what is
sought, Crucible sounds better.
Returning to other issues in the discussion
1. Improving test times would make a big difference; locally as well
On 7 February 2015 at 02:14:39, Colin P. McCabe (cmcc...@apache.org) wrote:
I think it's healthy to have lots of JIRAs that are patch available.
It means that there is a lot of interest in the project and people
want to contribute. It would be unhealthy if JIRAs that
Do we have a native code style guide?
Specifically, is it OK to use goto as a way of bailing out of things and
into some memory-release logic as part of some JNI exception-generation
routine?
Don't worry, I'm asking on behalf of someone else
-Steve
ps: what I have been intermittently doing
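The idiom being asked about, as a sketch using plain malloc instead of real JNI handles (the function and names are hypothetical; the single-exit shape is the point):

```c
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Single-exit cleanup idiom: on any failure, jump to a label that
 * releases everything acquired so far, then return one error code. */
int copy_upper(const char *in, char **out) {
    int rc = -1;
    char *buf = NULL;

    if (in == NULL || out == NULL) {
        goto done;            /* bad arguments: nothing to release yet */
    }
    buf = malloc(strlen(in) + 1);
    if (buf == NULL) {
        goto done;            /* allocation failed */
    }
    for (size_t i = 0; ; i++) {
        buf[i] = (char) toupper((unsigned char) in[i]);
        if (in[i] == '\0') {
            break;
        }
    }
    *out = buf;
    buf = NULL;               /* ownership transferred; skip the free below */
    rc = 0;
done:
    free(buf);                /* free(NULL) is a defined no-op */
    return rc;
}
```

This is the same pattern the Linux kernel and much JNI glue use: every early exit funnels through one cleanup block, so resources can't leak on the error paths.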
Gokul,
What we expect from a filesystem is defined in (a) the HDFS code, (b) the
filesystem spec as derived from (a), and (c) contract tests derived from
(a) and (b)
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html
There's a wiki page to go with
The ASF is hosting its North America conference in Austin, Texas, April 13
- 16, 2015.
They're still welcoming submissions —you have until Feb 1 to get them in
http://events.linuxfoundation.org/events/apachecon-north-america
As well as technical talks, talks from end users of ASF technology,
On 11 February 2015 at 21:11:25, Chris Douglas (cdoug...@apache.org) wrote:
+1; ChrisN's formulation is exactly right.
The patch manager can't force (or shame) anyone into caring about your
issue. One of the benefits of RTC is that parts of the code with a
single
following on from the patch-management discussion, I've been collaborating
with Thomas Demoor and others on getting the s3a stuff working better,
including fixing some issues that have surfaced in 2.6, with a goal of
getting them into 2.7:
https://issues.apache.org/jira/browse/HADOOP-11571
One patch
I'd be +1 on trying reviews.apache.org on a JIRA which
1. had multiple distributed people working on it
2. had some tangible code needing reviewing
3. was of limited enough size/duration that we'd see how well it worked
do that, get feedback from the participants and repeat until we're
On 22 January 2015 at 19:34, Edward Nevill edward.nev...@linaro.org wrote:
Another question is whether we actually care about 32 bit platforms, or can
they just all downgrade to C code. Does anyone actually build Hadoop on a
32 bit platform?
I think we can assume that pretty much everyone
On 9 February 2015 at 21:18:52, Colin P. McCabe (cmcc...@apache.org) wrote:
What happened with the Crucible experiment? Did we get a chance to
try that out? That would be a great way to speed up patch reviews,
and one that is well-integrated with JIRA.
I am -1 on
Mvn is a dark mystery to us all. I wouldn't trust it not to pick up things
from other builds if they ended up published to ~/.m2/repository during the
process
On 9 February 2015 at 19:29:06, Colin P. McCabe (cmcc...@apache.org) wrote:
I'm sorry, I don't have any insight
On 14 February 2015 at 00:37:07, Karthik Kambatla (ka...@cloudera.com) wrote:
2 weeks from now (end of Feb) sounds reasonable. The one feature I would
like to be included is shared-cache: we are pretty close; two more main
items to take care of.
In an offline
On 8 February 2015 at 09:55:42, Karthik Kambatla (ka...@cloudera.com) wrote:
On Fri, Feb 6, 2015 at 6:14 PM, Colin P. McCabe cmcc...@apache.org wrote:
I think it's healthy to have lots of JIRAs that are patch available.
It means that there is a lot of interest in
If 3.x is going to be Java 8 and not backwards compatible, I don't expect
anyone to want to use this in production until some time deep into 2016.
Issue: JDK 8 vs 7
It will require Hadoop clusters to move up to Java 8. While there's dev pull
for this, there's ops pull against this: people are
On 09/03/2015 15:56, Andrew Wang andrew.w...@cloudera.com wrote:
I find this proposal very surprising. We've intentionally deferred
incompatible changes to trunk, because they are incompatible and do not
belong in a minor release. Now we are supposed to blur our eyes and
release
these changes
Sorry, Outlook dequoted Alejandro's comments.
Let me try again with his comments in italic and proofreading of mine
On 05/03/2015 13:59, Steve Loughran ste...@hortonworks.com wrote:
On 05/03/2015 13:05, Alejandro Abdelnur tuc...@gmail.com wrote:
On 05/03/2015 13:05, Alejandro Abdelnur tuc...@gmail.com wrote:
IMO, if part of the community wants to take on the responsibility and work
that takes to do a new major release, we should not discourage them from
doing that.
Having multiple major branches active is a
Looking ahead to Java 9, here's where the builds are up for D/L
From: Rory O'Donnell
Subject: Early Access builds for JDK 9 b53 and JDK 8u60 b05 are available on
java.net
Hi Andrew,
Early Access build for JDK 9 b53 (https://jdk9.java.net/download/) is
available on java.net, summary of changes
SELinux does nothing for Hadoop cluster security at the data layer, which
is why there are tools on top, not only to lock down systems, but to
provide better data governance: where did things come from, has it been
tainted by merging with sensitive data, etc., etc.
Where it could be good is
1.
I'm +1 for a migration to Java 8 as soon as possible.
That's branch-2 and trunk, as having them on the same language level makes
cherry-picking stuff off trunk possible. That's particularly the case for
Java 8 as it is the first major change to the language since Java 5.
w.r.t shipping trunk as 3.x,
I want to understand a lot more about the classpath isolation (HADOOP-11656)
proposal, specifically, what is proposed and does it have to be tagged as
incompatible? That's a bigger change than just setting javac.version=8 in
the POM —though given what a fundamental problem it addresses, I'm in
One other late-breaking issue may be what to do about the fact that Java 7
and 8 have a broken sort algorithm, which has surfaced recently:
http://envisage-project.eu/proving-android-java-and-python-sorting-algorithm-is-broken-and-how-to-fix-it/
I believe some other OSS projects have tried to
Can I draw attention to the fact that within the last week, every single Hadoop
jenkins run has started failing.
Ports in use
https://issues.apache.org/jira/browse/YARN-3433
https://issues.apache.org/jira/browse/HADOOP-11788
NPEs
https://issues.apache.org/jira/browse/HADOOP-11789 NPE in