"Becoming a part of the cluster" implies that you're running the daemons on
the node. You need the Hadoop JARs on the client machine so that you can use
FileSystem.open(), etc. And the conf/hadoop-site.xml file should indicate
the NameNode's address in its fs.default.name parameter -- that's how the
client knows which HDFS server to connect to.
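
For example, a client program along these lines should work (an untested
sketch -- substitute your NameNode's host/port and a real path; normally you
wouldn't set fs.default.name in code at all, since it's picked up from
conf/hadoop-site.xml on the classpath):

  import java.io.BufferedReader;
  import java.io.InputStreamReader;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HdfsClientExample {
    public static void main(String[] args) throws Exception {
      // Normally fs.default.name comes from conf/hadoop-site.xml on the
      // classpath; it is set here only to make the example self-contained.
      Configuration conf = new Configuration();
      conf.set("fs.default.name", "hdfs://namenode.example.com:9000");

      // Get the FileSystem for that URI (i.e., connect to the NameNode)
      // and open a file stored in HDFS.
      FileSystem fs = FileSystem.get(conf);
      FSDataInputStream in = fs.open(new Path("/user/deepya/sample.txt"));
      BufferedReader reader = new BufferedReader(new InputStreamReader(in));
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
      reader.close();
      fs.close();
    }
  }

Compile and run it with the Hadoop JARs (and the conf directory) on the
client's classpath, and it will read the file from HDFS without that machine
ever joining the cluster.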

But that machine will not "be part of the cluster" -- i.e., store HDFS blocks
or run MapReduce tasks -- unless you explicitly start the datanode and
tasktracker services on it (or do so implicitly by adding the machine to the
slaves file and running bin/start-all.sh, etc.).

- Aaron

On Mon, Mar 30, 2009 at 5:01 AM, Brian Bockelman <[email protected]> wrote:

>
> On Mar 30, 2009, at 3:59 AM, W wrote:
>
>> I already tried mountable HDFS, both the WebDAV and the FUSE approach;
>> it seems neither of them is production ready ...
>>
>>
> Depends on what you define to be "production ready"; for a business serving
> HDFS to external customers directly, no.  But then again, it's questionable
> whether Hadoop itself is "production ready" at that level.  For internal
> batch/data processing, the uptime requirements are far less.
>
> We move 10's of TB a day through FUSE (yesterday looks like it was a 50TB
> day), and I have seen it move up to 300TB in 12 hours.
>
> Brian
>
>
>  CMIIW
>>
>> Best Regards,
>> Wildan
>>
>> ---
>> OpenThink Labs
>> www.tobethink.com
>>
>> Aligning IT and Education
>>
>> 021-99325243
>> Y! : hawking_123
>> LinkedIn : http://www.linkedin.com/in/wildanmaulana
>>
>>
>>
>> On Sun, Mar 29, 2009 at 2:52 PM, Sagar Naik <[email protected]> wrote:
>>
>>> Yes, you can.
>>> Java client:
>>> Copy the conf dir (same as the one on the namenode/datanode), and the
>>> Hadoop JARs should be in the classpath of the client.
>>> Non-Java client:
>>> http://wiki.apache.org/hadoop/MountableHDFS
>>>
>>>
>>>
>>> -Sagar
>>>
>>> deepya wrote:
>>>
>>>>
>>>> Hi,
>>>> I am SreeDeepya, doing an MTech at IIIT. I am working on a project named
>>>> "cost-effective and scalable storage server". I configured a small Hadoop
>>>> cluster with only two nodes, one namenode and one datanode. I am new to
>>>> Hadoop, and I have a small doubt.
>>>>
>>>> Can a system not in the Hadoop cluster access the namenode or the
>>>> datanode? If yes, then can you please tell me the necessary
>>>> configuration that has to be done?
>>>>
>>>> Thanks in advance.
>>>>
>>>> SreeDeepya
>>>>
>>>>
>>>
>
