Ted Yu created HADOOP-11480:
---
Summary: Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name
Key: HADOOP-11480
URL: https://issues.apache.org/jira/browse/HADOOP-11480
Project: Hadoop Common
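For context, a sketch of what the corrected property would look like in the test configuration (the bucket name here is a placeholder, and the assumption is that `test.fs.s3.name` should carry an s3:// URI rather than another S3 scheme):

```xml
<!-- Hypothetical bucket; the point is that test.fs.s3.name takes the s3:// scheme -->
<property>
  <name>test.fs.s3.name</name>
  <value>s3://test-bucket/</value>
</property>
```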
Ranadip created HADOOP-11478:
Summary: HttpFSServer does not properly impersonate a real user when executing open operation in a kerberised environment
Key: HADOOP-11478
URL:
Hi Niels,
I agree that direct-attached storage seems more economical for many users.
As an HDFS developer, I certainly have a dog in this fight as well :)
But we should be respectful towards people trying to contribute code to
Hadoop and evaluate the code on its own merits. It is up to our
Patch attached. I'm not sure what is necessary for changing the stability of the class. Please review.
On Tue, Jan 13, 2015 at 5:09 PM, Abraham Elmahrek a...@cloudera.com wrote:
Thanks for your thoughts guys. I've created
https://issues.apache.org/jira/browse/HADOOP-11476 to follow through on
Ranadip created HADOOP-11479:
Summary: hdfs crypto -createZone fails to impersonate the real user in a kerberised environment
Key: HADOOP-11479
URL: https://issues.apache.org/jira/browse/HADOOP-11479
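Impersonation failures like the two issues above often trace back to the proxy-user settings on the NameNode. As a hedged sketch (the service user `httpfs` and host name are placeholders, not taken from the reports), core-site.xml would need something along these lines for the service to impersonate end users:

```xml
<!-- Placeholder service user and host: allow "httpfs" to act on behalf of other users -->
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>httpfs-host.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
```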
Hi Colin,
Yeah, I should add the reasons to the README. We tried LocalFileSystem when
we started out but we think we can do tighter Hadoop integration if we
write a connector.
Some examples include:
1. Limiting over-prefetching of data - MapReduce divides jobs into 128MB splits, and standard NFS
Hi Niels,
Thanks for your comments. My goal in designing the NFS connector is *not*
to replace HDFS. HDFS is ideally suited for Hadoop (otherwise why was it
built?).
The problem is that we have people with PBs (10PB to 50PB) of data on NFS storage that they would like to process using Hadoop.
Lars Francke created HADOOP-11477:
---
Summary: Link to source code is missing
Key: HADOOP-11477
URL: https://issues.apache.org/jira/browse/HADOOP-11477
Project: Hadoop Common
Issue Type: Bug
See https://builds.apache.org/job/Hadoop-common-trunk-Java8/75/changes
See https://builds.apache.org/job/Hadoop-Common-trunk/1375/changes
Hi,
The main reason Hadoop scales so well is that all components try to adhere to the principle of data locality.
In general this means running the processing/query software on the system where the actual data is already present on the local disk.
To me this NFS solution
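As a toy illustration of the data-locality idea (hypothetical names and a deliberately simplified model, not Hadoop's actual scheduler), a locality-aware scheduler prefers to place each task on a node that already holds the split's data, and only falls back to a remote read when no such node is free:

```python
# Toy model of data-local task scheduling (illustrative only).
# Each split is replicated on some set of nodes; the scheduler
# prefers a node that already stores the data, falling back to
# any free node (a "remote" read) otherwise. Assumes there are
# at least as many free nodes as splits.

def assign_splits(splits, free_nodes):
    """splits: dict of split_id -> set of nodes holding its replicas.
    free_nodes: set of nodes with a spare task slot.
    Returns dict of split_id -> (node, 'local' or 'remote')."""
    assignment = {}
    available = set(free_nodes)
    for split_id, replica_nodes in splits.items():
        local = replica_nodes & available
        if local:
            node = sorted(local)[0]      # deterministic pick among local nodes
            assignment[split_id] = (node, "local")
        else:
            node = sorted(available)[0]  # data must travel over the network
            assignment[split_id] = (node, "remote")
        available.discard(node)          # node's slot is now taken
    return assignment

# s0 and s1 run where their data lives; s2 falls back to a remote read.
print(assign_splits({"s0": {"n1", "n2"}, "s1": {"n3"}, "s2": {"n1"}},
                    {"n1", "n2", "n3"}))
```

The point of the sketch is the fallback order: local placement costs nothing extra, while every remote assignment implies pulling the split over the network, which is exactly what locality-aware scheduling tries to minimise.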