Hadoop can run on a cluster of heterogeneous hardware. Currently,
Hadoop clusters really only run well on Linux, although you can run a
Hadoop client on non-Linux machines.
You will need a specific configuration for each of the machines
in your cluster, based on its hardware profile.
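One common way to do that per-machine tuning (a sketch, assuming a 0.18/0.20-era Hadoop where per-node settings live in conf/hadoop-site.xml; the slot counts and the /tmp path are illustrative) is to set the task-slot properties on each node to match its hardware:

```shell
# Hypothetical per-node override: a beefier 8-core box gets more task slots
# than the older 2-core machines. Property names are from the 0.18-0.20 era.
mkdir -p /tmp/hadoop-conf-demo
cat > /tmp/hadoop-conf-demo/hadoop-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>8</value>  <!-- 8-core node; a 2-core box might use 2 -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>
</configuration>
EOF
grep -c '<property>' /tmp/hadoop-conf-demo/hadoop-site.xml
```

Each node reads its own copy of this file, so the overrides only affect the machine they are deployed to.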
Hello everybody,
How can we handle different applications with
different requirements being run on the same Hadoop cluster? What are the
various approaches to solving such a problem? If possible, please mention some
of those ideas.
Does such an implementation exist?
Thanks
Can you tell me a few of the challenges in configuring a heterogeneous cluster, or
pass on a link where I could get some information regarding the
challenges of running Hadoop on heterogeneous hardware?
One more thing: how about running different applications on the same
Hadoop cluster, and what challenges does that bring?
Does that mean Hadoop is not scalable with respect to heterogeneous environments? And one
more question: can we run different applications on the same Hadoop cluster?
Thanks.
Regards,
Ashish
On Thu, Jun 18, 2009 at 8:30 PM, jason hadoop wrote:
> Hadoop has always been reasonably agnostic wrt hardware and heterogeneity.
Hey Snehal (removing the core-dev list; please only post to one at a
time),
The access time should be fine, but it depends on what you define as
an acceptable access time. If it is not acceptable, I'd suggest
putting it behind a web cache like Squid. The best way to find out is
to try it and measure.
-
From: Mafish Liu [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, September 16, 2008 7:37 PM
To: core-user@hadoop.apache.org
Subject: Re: Need help in hdfs configuration fully distributed way in Mac OSX...
Hi, souravm:
I don't know exactly what's wrong with your configuration from your
description.
> ...masters file (in hadoop/conf) in the datanode? I've currently specified the
> <server ip>. I'm thinking there might be a problem, as in the log file of the
> data node I can see the message '2008-09-16 14:38:51,501 INFO
> org.apache.hadoop.ipc.RPC: Server at /192.168.1.102:9000 not available
> yet, Z...'
Any help ?
Regards,
Sourav
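Given that log message, one quick sanity check (a sketch: host and port are taken from the log line above, and bash's /dev/tcp feature plus the coreutils `timeout` command are assumed available) is to confirm the NameNode's RPC port is reachable from the datanode at all before digging into the config files:

```shell
# Probe the NameNode RPC endpoint from the datanode. If this reports
# "down", the problem is connectivity or the namenode process itself,
# not the HDFS configuration on this machine.
host=192.168.1.102; port=9000
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  status=up
else
  status=down   # connection refused or timed out
fi
echo "NameNode port $port on $host is $status"
```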
From: Samuel Guo [EMAIL PROTECTED]
Sent: Tuesday, September 16, 2008 5:49 AM
To: core-user@hadoop.apache.org
Subject: Re: Need help in hdfs configuration fully distributed way in Mac OSX...
Check the namenode's log on machine1 to see if your namenode started
successfully :)
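Concretely, that check might look like the following (a sketch: the real log usually lives under $HADOOP_HOME/logs with a name like hadoop-<user>-namenode-<host>.log, which varies by install; a sample log line is written here so the grep commands are runnable):

```shell
# Stand-in for the real namenode log so the pattern checks can run anywhere.
log=/tmp/namenode-sample.log
echo 'STARTUP_MSG: Starting NameNode' > "$log"

# A healthy start shows a STARTUP_MSG banner and no FATAL lines.
startup=$(grep -c 'STARTUP_MSG' "$log")
fatals=$(grep -c 'FATAL' "$log" || true)
echo "startup lines: $startup, fatal lines: $fatals"
```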
On Tue, Sep 16, 2008 at 2:04 PM, souravm <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I'm facing a problem in configuring hdfs in a fully distributed way in Mac
> OSX.
>
> Here is the topology -
>
> 1. The namenode i
From: Mafish Liu <[EMAIL PROTECTED]>
To: core-user@hadoop.apache.org
Sent: Mon Sep 15 23:26:10 2008
Subject: Re: Need help in hdfs configuration fully distributed way in Mac OSX...
Hi:
You need to configure your nodes to ensure that node 1 can connect to node
2 without password.
On Tue, Sep 16, 2008 at 2:04 PM, souravm <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I'm facing a problem in configuring hdfs in a fully distributed way in Mac
> OSX.
>
> Here is the topology -
>
> 1
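The passwordless-SSH setup Mafish describes is typically done by generating a passphrase-less keypair on node 1 and appending its public key to node 2's ~/.ssh/authorized_keys. A minimal sketch (the key path is a demo location and "node2" is a placeholder hostname; the actual copy step is commented out because it needs a live second node):

```shell
# Generate a passphrase-less RSA keypair (demo path, not ~/.ssh).
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
ssh-keygen -t rsa -N '' -f /tmp/demo_id_rsa -q

# On a real cluster, push the public key to the other node, e.g.:
#   ssh-copy-id -i /tmp/demo_id_rsa.pub user@node2
# or:
#   cat /tmp/demo_id_rsa.pub | ssh node2 'cat >> ~/.ssh/authorized_keys'
ls /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
```

After that, `ssh node2` from node 1 should log in without prompting, which is what Hadoop's start scripts need.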
I tried this. Frankly, the hardest part was getting Java set up on that
machine. GIJ got in the way of -everything-, causing me much frustration and
furious anger. Even if you install Sun Java, it's possible that the
symbolic links don't all point to Sun Java, but rather to GIJ. I'm not sure if this
is still the case.
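One way to diagnose and fix that symlink problem on Debian-style systems (a sketch: the `update-alternatives` commands are Debian/Ubuntu-specific and the Sun JDK path shown is an assumption about your install location):

```shell
# Where does `java` actually resolve to? If the result ends in .../gij,
# the links still point at GNU's interpreter rather than the Sun JDK.
resolved=$(readlink -f "$(command -v java)" 2>/dev/null || echo none)
echo "java resolves to: $resolved"

# Debian/Ubuntu fix (run manually; the JDK path is a guess):
#   sudo update-alternatives --config java
#   sudo update-alternatives --set java /usr/lib/jvm/java-6-sun/jre/bin/java
```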
hemal patel wrote:
Hello,
Can you help me solve this problem?
When I am trying to run this program, it gives me an error like this:
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
08/05/12 17:32:59 INFO mapred.FileInputFormat: Total input paths to process
: 12
java.io.IOException ...
Hi,
It seems it didn't accept [conf] as a directory inside [input]. Please make
sure there are no subdirectories, or write a script to handle them.
On 10/04/2008, krishna prasanna <[EMAIL PROTECTED]> wrote:
>
> Hi, I started using Hadoop very recently and I am stuck with the basic example
> when I am trying to run it.
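The fix suggested above, keeping only plain files in the input directory, can be sketched like this (the /tmp/grep-input layout is a stand-in for a real [input] directory containing a [conf] subdirectory):

```shell
# Recreate the problem layout: an input dir with a nested subdirectory,
# which old Hadoop versions reject with an IOException.
rm -rf /tmp/grep-input
mkdir -p /tmp/grep-input/conf
touch /tmp/grep-input/a.txt /tmp/grep-input/conf/nested.xml

# Move nested files up to the top level, then drop the empty subdirectory.
find /tmp/grep-input -mindepth 2 -type f -exec mv {} /tmp/grep-input/ \;
find /tmp/grep-input -mindepth 1 -type d -exec rmdir {} +
ls /tmp/grep-input
```

With the input directory flattened, the grep example from the thread should run:
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'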