Liang Xie created HADOOP-11407:
--
Summary: Adding socket receive buffer size support in Client.java
Key: HADOOP-11407
URL: https://issues.apache.org/jira/browse/HADOOP-11407
Project: Hadoop Common
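The HADOOP-11407 summary is terse, but the knob in question is the standard SO_RCVBUF socket option. A minimal Java sketch, independent of Hadoop's actual Client.java (the class name and 128 KB value here are illustrative, not from the patch):

```java
import java.net.Socket;
import java.net.SocketException;

public class ReceiveBufferSketch {
    public static void main(String[] args) throws SocketException {
        Socket socket = new Socket();
        // Setting SO_RCVBUF before connect() lets the kernel take the hint
        // when sizing the TCP receive window; 128 KB is a hypothetical value.
        socket.setReceiveBufferSize(128 * 1024);
        // The OS may clamp or round the value, so read back what took effect.
        System.out.println(socket.getReceiveBufferSize());
    }
}
```

Note that the value read back can differ from the one requested (Linux, for instance, doubles it for bookkeeping), which is why a configurable buffer size is usually paired with a log line showing the effective value.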
See https://builds.apache.org/job/Hadoop-common-trunk-Java8/45/changes
Changes:
[harsh] MAPREDUCE-6194. Bubble up final exception in failures during creation
of output collectors. Contributed by Varun Saxena.
--
[...truncated 5188 lines...]
Tests run:
See https://builds.apache.org/job/Hadoop-Common-trunk/1344/changes
Changes:
[harsh] MAPREDUCE-6194. Bubble up final exception in failures during creation
of output collectors. Contributed by Varun Saxena.
--
[...truncated 4738 lines...]
Tests run: 1,
On 14 December 2014 at 16:52, Allen Wittenauer a...@altiscale.com wrote:
Well, slight correction: only one thing in the code has been replaced. There
are two patches waiting to get reviewed and applied that fix the rest of the
shipping shell code: HADOOP-10788 and HADOOP-11346.
Folks,
I want to contribute to Hadoop ... I have downloaded the Hadoop source and
set it up in IntelliJ on a Mac ...
I would like to start by executing / writing unit test cases ... could
someone point me to some resources on how to do that?
Regards
Raghavendra Vaidya
Yongjun Zhang created HADOOP-11408:
--
Summary: TestRetryCacheWithHA.testUpdatePipeline failed in trunk
Key: HADOOP-11408
URL: https://issues.apache.org/jira/browse/HADOOP-11408
Project: Hadoop Common
[
https://issues.apache.org/jira/browse/HADOOP-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yongjun Zhang resolved HADOOP-11408.
Resolution: Duplicate
TestRetryCacheWithHA.testUpdatePipeline failed in trunk
I want to work on a live Hadoop project for 2 hours every day.
Please help me.
Warm Regards,
Prema Vishnoi
“Try not to become a man of success but rather to become a man of value”
On Mon, Dec 15, 2014 at 8:05 PM, Raghavendra Vaidya
raghavendra.vai...@gmail.com wrote:
I want to preface this response by saying that I have not contributed code
to Hadoop.
However, I have been interested in doing so and I can share some of my
research with those of you who are also interested in contributing.
First of all, check out these links for information on how to get a
One easy place to contribute in small increments is reproducing bugs in
JIRAs that are filed and open.
If you spent an hour every day reproducing a bug filed in a JIRA, you would
eventually come up to speed on a lot of sharp corners of the source code, and
probably contribute
[
https://issues.apache.org/jira/browse/HADOOP-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Allen Wittenauer resolved HADOOP-7852.
--
Resolution: Won't Fix
Templates and configuration tool have been removed from trunk.
[
https://issues.apache.org/jira/browse/HADOOP-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Allen Wittenauer resolved HADOOP-8505.
--
Resolution: Implemented
Trunk/3.x supports appending to JAVA_LIBRARY_PATH. Closing as
Guys,
I agree that revision numbers are useful if you need to reference a
particular attachment, as well as with all your other arguments.
My general point is that the infrastructure we use should be convenient for
the users to do such simple things automatically. Rather than us
introducing rules
I'm all for changing the default sort order, but it doesn't address the
point that Steve and I brought up about local downloads.
If you want to push on the INFRA JIRA though, please feel free. I'm +1 for
that.
Best,
Andrew
On Mon, Dec 15, 2014 at 11:40 AM, Konstantin Shvachko
Jason Lowe created HADOOP-11409:
---
Summary: FileContext.getFileContext can stack overflow if default
fs misconfigured
Key: HADOOP-11409
URL: https://issues.apache.org/jira/browse/HADOOP-11409
Project:
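The failure mode named in HADOOP-11409 can be illustrated without Hadoop. This is a hypothetical resolver, not Hadoop's actual FileContext code: a path with no scheme falls back to the configured default, but when the default itself lacks a scheme there is no base case, so the call recurses until the stack overflows (a depth counter stands in for the crash here):

```java
public class DefaultFsRecursionSketch {
    // Hypothetical resolver illustrating the bug class: an unguarded
    // fallback to a default that is itself invalid never terminates.
    static String resolve(String uri, String defaultUri, int depth) {
        if (depth > 1000) return "stack would overflow here"; // demo guard
        if (uri.contains("://")) return uri;                   // has a scheme: done
        return resolve(defaultUri, defaultUri, depth + 1);     // unbounded if defaultUri also lacks a scheme
    }

    public static void main(String[] args) {
        System.out.println(resolve("some/relative/path", "bad-default-no-scheme", 0));
    }
}
```

The usual fix for this shape of bug is to validate the default once, up front, and fail with a clear configuration error instead of recursing.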
Colin Patrick McCabe created HADOOP-11410:
-
Summary: make the rpath of libhadoop.so configurable
Key: HADOOP-11410
URL: https://issues.apache.org/jira/browse/HADOOP-11410
Project: Hadoop
Please also go through BUILDING.txt in the code base to find out how to build
the source code. A very good way to start learning about Hadoop and
helping the community is to fix a failing unit test case.
On Monday, December 15, 2014 10:02 AM, Jay Vyas
jayunit100.apa...@gmail.com
Jason Dere created HADOOP-11411:
---
Summary: Hive build failure on hadoop-2.7 due to HADOOP-11356
Key: HADOOP-11411
URL: https://issues.apache.org/jira/browse/HADOOP-11411
Project: Hadoop Common
[
https://issues.apache.org/jira/browse/HADOOP-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jason Dere resolved HADOOP-11411.
-
Resolution: Duplicate
Release Note: Opened Hive Jira at HIVE-9115
Hive build failure on
Did some research on changing the default order of attachments.
It is not a configuration or INFRA issue.
It turned out to be a controversial topic for Jira itself: the request was
explicitly rejected by the developers, with many users left unsatisfied.
https://jira.atlassian.com/browse/JRA-28290
I thought
Thanks, Malcolm. I reviewed it. The only thing you still have to do
is hit Submit Patch to get a Jenkins run. See our HowToContribute
wiki page for more details.
wiki.apache.org/hadoop/HowToContribute
best,
Colin
On Sat, Dec 13, 2014 at 9:22 PM, malcolm malcolm.kaval...@oracle.com wrote:
I
Hervé Boutemy created HADOOP-11412:
--
Summary: POMs mention The Apache Software License rather than
Apache License
Key: HADOOP-11412
URL: https://issues.apache.org/jira/browse/HADOOP-11412
Project:
Tried to run teragen on Hadoop 2.6.0 using Docker and hit the following
error:
2014-12-15 04:15:21,385 DEBUG [main]
org.apache.hadoop.crypto.OpensslCipher: Failed to load OpenSSL Cipher.
java.lang.UnsatisfiedLinkError:
org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
at
Done, and added the comment as you requested.
I attached a second patch file to the JIRA (with .002 appended as per
convention), assuming Jenkins knows to take the latest version, since I
understand that I cannot remove the previous patch file.
On 12/16/2014 04:12 AM, Colin McCabe wrote:
Yi Liu created HADOOP-11413:
---
Summary: Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs
Key: HADOOP-11413
URL: https://issues.apache.org/jira/browse/HADOOP-11413
Project: Hadoop Common
Issue
Hi Chen He,
If the native library is not available, JCE will be used, so you can ignore that
debug info. Actually, CryptoCodec is not necessary in org.apache.hadoop.fs.Hdfs,
and I have created a JIRA to fix it.
Regards,
Yi Liu
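The fallback behavior Yi Liu describes, native code preferred and JCE used when it is absent, follows a common Java pattern. A minimal sketch, not Hadoop's actual CryptoCodec code (the library name is deliberately bogus to force the fallback path):

```java
public class NativeFallbackSketch {
    public static void main(String[] args) {
        String provider;
        try {
            // Hypothetical native lookup; stands in for a JNI-backed check
            // such as NativeCodeLoader.buildSupportsOpenssl().
            System.loadLibrary("no-such-native-lib");
            provider = "openssl";
        } catch (UnsatisfiedLinkError e) {
            // Native code unavailable: fall back to the pure-Java provider,
            // which is why the DEBUG message above is harmless.
            provider = "jce";
        }
        System.out.println(provider);
    }
}
```

This is also why the original message was logged at DEBUG rather than ERROR: the UnsatisfiedLinkError is caught and handled, not propagated.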
-Original Message-
From: Chen He [mailto:airb...@gmail.com]
Sent: