Hi everyone, we are planning to upgrade our clusters to the Hadoop 2.7.1 stable
release. Can someone help me get a list of deprecated APIs or configurations?
Hopefully the jobs that currently run against 2.4.0 will run smoothly on 2.7.1.
--Senthil
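One low-tech way to spot deprecated keys before such an upgrade is to load the
existing *-site.xml files through Configuration and watch the logs for
"is deprecated" warnings; a minimal sketch (the config path is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class DeprecationCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Placeholder path: point this at your real client configs.
    conf.addResource(new Path("/etc/hadoop/conf/mapred-site.xml"));
    // Reading a deprecated key makes Configuration log a warning such as:
    // "mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address"
    conf.get("mapred.job.tracker");
  }
}

The 2.7.1 docs also publish a table of deprecated property names:
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/DeprecatedProperties.html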
On Jun 10, 2015 10:58 AM, anvesh ragi annunarc...@gmail.com wrote:
Hello all,
I know that the tab is the default input separator for fields:
stream.map.output.field.separator
stream.reduce.input.field.separator
stream.reduce.output.field.separator
mapreduce.textoutputformat.separator
That did not work either.
Thanks & Regards,
Anvesh R
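For reference, these properties are normally passed as -D generic options before
the streaming-specific flags; a sketch (jar path and scripts are placeholders):

$ hadoop jar hadoop-streaming-2.4.0.jar \
    -D stream.map.output.field.separator=, \
    -D mapreduce.output.textoutputformat.separator=, \
    -input in -output out \
    -mapper mapper.py -reducer reducer.py

Note that in Hadoop 2.x the text output key is
mapreduce.output.textoutputformat.separator; the shorter
mapreduce.textoutputformat.separator listed above is not a recognized key, which
may be why it had no effect.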
On Tue, Jun 9, 2015 at 11:12 PM, Kiran Dangeti kirandkumar2...@gmail.com
wrote:
On Jun 10, 2015 10:58 AM, anvesh ragi annunarc...@gmail.com wrote:
but if I try to write the generic parser option:
I'm running a single-node Apache Hadoop 2.4.0 cluster and trying to test my
application's behavior when it exceeds the memory allocated for its containers.
No matter what I do, I can't seem to get the containers killed when they start
exceeding their allocated physical memory.
Any suggestions?
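The NodeManager only kills containers over their physical memory limit when the
pmem check is enabled, so that is the first thing to verify; a minimal
yarn-site.xml sketch of the relevant knobs (the values shown are the defaults):

<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>

If pmem-check-enabled has been switched off (some distributions do this),
containers are never killed for exceeding physical memory.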
Hello Hadoopers,
When I run HDP 2.1 / HBase 0.98.0 / Hadoop 2.4.0, I always hit a fatal problem:
DFSClient does not close sockets that the peer has already closed, resulting in
thousands of CLOSE_WAIT sockets. Have you seen the same issue? If so, please
share. Thanks a lot. I also created issue HDFS-6973.
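A quick way to confirm the leak from the shell (<pid> is a placeholder for the
DataNode or RegionServer process id):

$ lsof -p <pid> | grep CLOSE_WAIT | wc -l

If that count keeps growing without bound, it matches the symptom described in
HDFS-6973.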
On Jul 28, 2014 5:14 AM, arthur.hk.c...@gmail.com wrote:
Hi,
I have installed Hadoop 2.4.0 with 5 nodes; each node physically has a 4 TB hard
disk. When checking the configured capacity, I found it is only about 49.22 GB
per node. Can anyone advise how to set a bigger "configured capacity", e.g. 2 TB?
You need to list each disk in the dfs.datanode.data.dir parameter (the value is
a comma-separated list of directories).
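A minimal hdfs-site.xml sketch of that fix (/data/1 and /data/2 are placeholders
for the actual mount points; the DataNode sums the capacity of every directory
listed):

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
</property>

Restart the DataNodes afterwards and the configured capacity should reflect the
full disk size.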
Did you move your native library to /usr/local/hadoop/lib/native?
Thanks,
Akira
(2014/07/07 0:59), Ritesh Kumar Singh wrote:
My Hadoop is still giving the above-mentioned error. Please help.
On Thu, Jul 3, 2014 at 12:50 PM, Akira AJISAKA
ajisa...@oss.nttdata.co.jp
wrote:
You can download the source code and generate your own native library by
$ mvn package -Pdist,native -Dtar -DskipTests
You should see the library in 'hadoop-dist/target/hadoop-2.4.0/lib/native'
Thanks,
Akira
(2014/07/03 15:32), Ritesh Kumar Singh wrote:
@Akira: if I
When I try to start dfs using start-dfs.sh I get this error message:
14/07/03 11:03:21 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded
library
It looks like the native library is not compatible with your environment. You
should delete the '/usr/local/hadoop/lib/native' directory or compile the source
code to get your own native library.
Thanks,
Akira
(2014/07/03 15:05), Ritesh Kumar Singh wrote:
When I try to start dfs using start-dfs.sh I
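As a quick diagnostic, 2.x releases of this vintage include a checknative
subcommand (assuming your build ships it) that reports which native libraries
actually load on the current platform:

$ hadoop checknative -a

Libraries reported as false point at a binary built for a different environment,
which matches Akira's diagnosis.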
@Akira: if I delete my native library, how exactly do I generate my own copy of it?
@Chris: This is the content of my /etc/hosts file:
127.0.0.1 localhost
127.0.1.1 hduser
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0
You are hitting this:
http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/CHANGES.txt
HDFS-5308. Replace HttpConfig#getSchemePrefix with implicit schemes in HDFS JSP.
(Haohui Mai via jing9)
On Wed, Apr 30, 2014 at 6:08 PM, Gaurav Gupta gaurav.gopi...@gmail.com wrote:
I am trying
https://issues.apache.org/jira/browse/HDFS-5308
On Thu, May 1, 2014 at 8:58 AM, Hardik Pandya smarty.ju...@gmail.com wrote:
You are hitting this:
http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/CHANGES.txt
HDFS-5308. Replace HttpConfig#getSchemePrefix with implicit
Hi,
I was using Hadoop 2.2.0 to build my application and was calling the
HttpConfig.getSchemePrefix() API. When I updated Hadoop to 2.4.0, the compilation
fails for my application, and I see that the HttpConfig
(org.apache.hadoop.http.HttpConfig) APIs have changed.
How do I get the scheme prefix in
Hi,
Can you describe your use case, that is, how the prefix is used? Usually you can
get around it by generating protocol-relative URLs, which start with //.
~Haohui
On Wed, Apr 30, 2014 at 2:31 PM, Gaurav Gupta gaurav.gopi...@gmail.com wrote:
Hi,
I was using Hadoop 2.2.0 to build my
I am trying to get the container logs URL and here is the code snippet:

containerLogsUrl = HttpConfig.getSchemePrefix() +
    this.container.nodeHttpAddress + "/node/containerlogs/" + id + "/" +
    System.getenv(ApplicationConstants.Environment.USER.toString());
Thanks
Gaurav
On Wed, Apr 30, 2014 at
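Following Haohui's suggestion, a protocol-relative version of that snippet might
look like this (a sketch; the variables are carried over from Gaurav's code
above):

containerLogsUrl = "//" +
    this.container.nodeHttpAddress + "/node/containerlogs/" + id + "/" +
    System.getenv(ApplicationConstants.Environment.USER.toString());

The browser then reuses whatever scheme (http or https) the enclosing page was
served with, so the code no longer depends on HttpConfig.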
Hi,
I am trying to install Hadoop 2.4.0 from source and I got the following error,
please help!!
Can anyone share the apache-maven-3.1.1/conf/settings.xml setting?
Regards
OS: Ubuntu 12.04 (64-bit)
Java: java version 1.6.0_45
protoc --version: libprotoc 2.5.0
Command: mvn package -Pdist,native -DskipTests -Dtar
Are you sure that CMake is installed?
Best,
Silvina
On 28 April 2014 13:05, ascot.m...@gmail.com wrote:
Hi,
I am trying to install Hadoop 2.4.0 from source and I got the following error,
please help!!
Can anyone share the apache-maven-3.1.1/conf/settings.xml setting
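If CMake does turn out to be missing, the native profile on Ubuntu 12.04
typically needs something like the following first (package list is indicative,
not exhaustive):

$ sudo apt-get install build-essential cmake zlib1g-dev libssl-dev

together with protoc 2.5.0 on the PATH, which the poster already has.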
YARN-1934 can be critical if you're using RM HA in some situations.
Thanks,
- Tsuyoshi
On Fri, Apr 18, 2014 at 2:07 AM, MrAsanjar . afsan...@gmail.com wrote:
Hi all,
How stable is Hadoop 2.4.0? What are the known issues? Has anyone done extensive
testing on it?
Thanks in advance.
Hadoop 2.4.0 doesn't have any known blocker issues now. I think it's a stable
release even if it's not in the stable download list. The only issue I met is
that you should upgrade Hive to 0.12.0 after upgrading to 2.4.0, for API
compatibility.
On Fri, Apr 18, 2014 at 1:07 AM, MrAsanjar . afsan