Hmmm....this thread is very interesting - I didn't know most of the stuff 
mentioned here.
   
  Ted, when you say "copy in the distro", do you need to include the 
configuration files from the running grid?  You don't need to actually start 
HDFS on this node, do you?
   
  If I'm following this approach correctly, I would want to have an "xfer 
server" whose job it is to essentially run dfs -copyFromLocal on all 
inbound-to-HDFS data. Once I'm certain that my data has copied correctly, I can 
delete the local files on the xfer server.
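   
  Something like this is what I have in mind (paths and filenames made up, 
of course, and the -ls is only a weak sanity check before deleting):
   
    # on the xfer server, from the copied hadoop distro directory
    bin/hadoop dfs -copyFromLocal /data/incoming/file.dat /ingest/file.dat
   
    # confirm the file landed in HDFS before removing the local copy
    bin/hadoop dfs -ls /ingest/file.dat
    rm /data/incoming/file.dat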
   
  This is great news, as my current system wastes a lot of time copying data 
from data acquisition servers to the master node. If I can copy to HDFS 
directly from my acquisition servers then I am a happy guy....
   
  Thanks,
  C G
  

Ted Dunning <[EMAIL PROTECTED]> wrote:
  

Just copy the hadoop distro directory to the other machine and use whatever
command you were using before.

A program that uses hadoop just has to have access to all of the nodes
across the net. It doesn't assume anything else.
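
The only piece of the copied distro that really matters for this is
conf/hadoop-site.xml pointing at the grid's namenode (the hostname and port
below are placeholders; use whatever your running grid is configured with):

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://master-node:9000</value>
      </property>
    </configuration>

No daemons need to run on the machine doing the copy; the client just talks
to the namenode and datanodes over the network.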




On 12/20/07 2:35 PM, "Jeff Eastman" wrote:

> .... Can you give me a pointer on how to accomplish this (upload from other
> machine)? 


