See https://builds.apache.org/job/Hadoop-Common-trunk/484/changes
Changes:
[szetszwo] HDFS-3696. Set chunked streaming mode in WebHdfsFileSystem write
operations to get around a Java library bug causing OutOfMemoryError.
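For context, the Java library behavior behind this change is that HttpURLConnection buffers an entire request body in memory unless streaming mode is enabled, which can exhaust the heap on large writes. A minimal standalone sketch of the fix (the URL and chunk size here are illustrative, not WebHdfsFileSystem's actual values):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ChunkedModeDemo {
    // Returns a connection configured for chunked streaming so the JDK
    // streams the request body instead of buffering it all in memory.
    static HttpURLConnection open(String url, int chunkSize) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setDoOutput(true);
        // Without this call, HttpURLConnection holds the entire PUT/POST
        // body in memory before sending -- the OutOfMemoryError cause.
        conn.setChunkedStreamingMode(chunkSize);
        return conn;
    }

    public static void main(String[] args) throws Exception {
        // openConnection() does not contact the server, so this runs offline.
        HttpURLConnection conn = open("http://example.invalid/webhdfs/v1/f?op=CREATE", 32 * 1024);
        System.out.println(conn.getDoOutput());
    }
}
```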
[todd] Amend previous commit of HDFS-3626: accidentally included a hunk
Mike Percy created HADOOP-8625:
--
Summary: Use GzipCodec to decompress data in
ResetableGzipOutputStream test
Key: HADOOP-8625
URL: https://issues.apache.org/jira/browse/HADOOP-8625
Project: Hadoop
In general, multithreading does not get you much in traditional Map/Reduce.
If you want the mappers to run faster you can drop the split size and get
a similar result, because you get more parallelism. This is the use case
that we have typically concentrated on. About the only time that a
multithreaded mapper makes a lot of sense is if there is a lot of
computation associated with each key/value pair.
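The "drop the split size" tuning mentioned above is a job configuration change. A sketch, assuming the Hadoop 2.x property name (older releases use mapred.max.split.size instead):

```xml
<property>
  <name>mapreduce.input.fileinputformat.split.maxsize</name>
  <!-- cap splits at 64 MB: roughly 2x the map tasks of a 128 MB block -->
  <value>67108864</value>
</property>
```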
On Thu, Jul 26, 2012 at 7:42 AM, Robert Evans ev...@yahoo-inc.com wrote:
About the only time that a
multithreaded mapper makes a lot of sense is if there is a lot of
computation associated with each key/value pair.
Or if the mapper does a lot of I/O to some external resource, e.g., a
web service.
But I found that synchronization is needed for record reading (reading
the input key and value) and for writing the output.
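That synchronization point can be illustrated without Hadoop. A minimal sketch: several map threads share one record source and one output sink, so both must be guarded (MultithreadedMapSketch is a hypothetical stand-in for illustration, not Hadoop's MultithreadedMapper, which handles this locking internally):

```java
import java.util.*;
import java.util.concurrent.*;

public class MultithreadedMapSketch {
    // Shared input, like a RecordReader; Iterator is not thread-safe.
    private final Iterator<String> reader;
    // Shared output, like an output collector; synchronized wrapper guards writes.
    private final List<String> output = Collections.synchronizedList(new ArrayList<>());

    MultithreadedMapSketch(List<String> records) {
        this.reader = records.iterator();
    }

    // synchronized so two threads never interleave hasNext()/next(),
    // which would corrupt or duplicate records.
    private synchronized String nextRecord() {
        return reader.hasNext() ? reader.next() : null;
    }

    List<String> run(int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.execute(() -> {
                String rec;
                while ((rec = nextRecord()) != null) {
                    output.add(rec.toUpperCase()); // the per-record "map" work
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return output;
    }

    public static void main(String[] args) throws InterruptedException {
        // Four threads, four records: each record is processed exactly once.
        System.out.println(new MultithreadedMapSketch(Arrays.asList("a", "b", "c", "d")).run(4).size());
    }
}
```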
I use Spring Batch for that. It has I/O buffering built in, and it is very easy to
use and well documented.
Jonathan Natkins created HADOOP-8626:
Summary: Typo in default setting for
hadoop.security.group.mapping.ldap.search.filter.user
Key: HADOOP-8626
URL: https://issues.apache.org/jira/browse/HADOOP-8626
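Until a fixed default ships, the filter can be set explicitly in core-site.xml so a deployment does not depend on the shipped default. A sketch, assuming an Active Directory-style schema (adjust the objectClass and attribute name for your directory; note the XML-escaped ampersand):

```xml
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.user</name>
  <!-- match user entries by account name; {0} is replaced with the username -->
  <value>(&amp;(objectClass=user)(sAMAccountName={0}))</value>
</property>
```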
Thank you for your response.
I am using hadoop-2.0.0-alpha from the Apache site. In which version does it need to
be configured with HTTP/_h...@site.com? I think not in hadoop-2.0.0-alpha, because
I logged in successfully with another principal; please refer to the log below:
2012-07-23 22:48:17,303 INFO
You need to use HTTP/_h...@site.com, as that is the principal needed by SPNEGO.
So you would need to create the HTTP/_HOST principal and add it to the same keytab
(/home/hdfs/keytab/nn.service.keytab).
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Jul 26, 2012, at 6:54 PM, Wangwenli
Could you spend one minute to check whether the code below will cause an issue or not?
In org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(), it
uses socAddr.getHostName() to get _HOST,
but in org.apache.hadoop.security.SecurityUtil.replacePattern(), in
getLocalHostName(), it uses
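The _HOST substitution being discussed can be sketched without Hadoop. A standalone illustration of why the two hostname sources may disagree (replaceHostPattern is a simplified, hypothetical stand-in for SecurityUtil.replacePattern, and nn1.example.com is an invented hostname):

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class HostPatternDemo {
    // Substitute _HOST in a principal like "HTTP/_HOST@SITE.COM" with a
    // concrete hostname, lower-cased as Kerberos principals conventionally are.
    static String replaceHostPattern(String principal, String host) {
        return principal.replace("_HOST", host.toLowerCase());
    }

    public static void main(String[] args) throws Exception {
        // Source 1: the hostname the RPC address was configured with
        // (what socAddr.getHostName() would return for that address).
        String fromConfig = new InetSocketAddress("nn1.example.com", 8020).getHostName();
        // Source 2: what the local machine thinks its own name is; if this
        // differs from the configured name, the substituted principals differ
        // too -- the mismatch the question is about.
        String fromLocal = InetAddress.getLocalHost().getHostName();

        System.out.println("from config: " + replaceHostPattern("HTTP/_HOST@SITE.COM", fromConfig));
        System.out.println("from local:  " + replaceHostPattern("HTTP/_HOST@SITE.COM", fromLocal));
    }
}
```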