Use the Thrift HDFS API.
On Tue, 6 Sep 2011 19:28:07 +0200, "Ralf Heyde" <[email protected]> wrote:
> Yeah, it works. ...
> I just copied the core-site.xml and hdfs-site.xml ... This setup does not
> work.
> After copying the whole Hadoop installation folder from the master node ...
> it works.
>
> Thanks.
>
> -----Original Message-----
> From: Uma Maheswara Rao G 72686 [mailto:[email protected]]
> Sent: Montag, 5. September 2011 17:04
> To: [email protected]
> Subject: Re: Is it possible to access the HDFS via Java OUTSIDE the
> Cluster?
>
> Hi,
>
> It is very much possible. In fact, that is the main use case for Hadoop :-)
>
> You need to put the hadoop-hdfs*.jar and hadoop-common*.jar files on the
> classpath of the machine from which you want to run the client program.
>
> On the client node, use the sample code below:
>
> Configuration conf = new Configuration(); // set any required
>                                           // configuration here
> FileSystem fs = new DistributedFileSystem();
> fs.initialize(new URI(<Name_Node_URL>), conf);
>
> fs.copyToLocalFile(srcPath, destPath);
> fs.copyFromLocalFile(srcPath, destPath);
> ... etc.
> There are many APIs exposed in the FileSystem class, so you can make use
> of them.
>
>
> Regards,
> Uma
>
>
> ----- Original Message -----
> From: Ralf Heyde <[email protected]>
> Date: Monday, September 5, 2011 7:59 pm
> Subject: Is it possible to access the HDFS via Java OUTSIDE the Cluster?
> To: [email protected]
>
>> Hello,
>>
>> I have found an HDFSClient which shows me how to access my HDFS
>> from inside the cluster (i.e. running on a node).
>>
>> My idea is that different processes may write 64M chunks to HDFS from
>> external sources/clients.
>>
>> Is that possible?
>>
>> How can that be done? Does anybody have some example code?
>>
>> Thanks,
>>
>> Ralf
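To tie the thread together, here is a minimal, self-contained sketch of the approach Uma describes: a client outside the cluster that connects to the NameNode and copies files in and out. The address "hdfs://namenode-host:8020" and the paths are placeholders, not values from this thread; substitute the fs.default.name value from your cluster's core-site.xml. It assumes the Hadoop client jars (hadoop-common, hadoop-hdfs) are on the classpath.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExternalHdfsClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Point the client at the cluster's NameNode. "namenode-host:8020"
        // is a placeholder; use the fs.default.name value from the
        // cluster's core-site.xml.
        conf.set("fs.default.name", "hdfs://namenode-host:8020");

        // FileSystem.get() resolves the hdfs:// scheme to
        // DistributedFileSystem, so there is no need to instantiate
        // and initialize it by hand.
        FileSystem fs = FileSystem.get(
                new URI("hdfs://namenode-host:8020"), conf);

        // Upload a local file into HDFS, then fetch it back.
        fs.copyFromLocalFile(new Path("/tmp/local.txt"),
                             new Path("/data/remote.txt"));
        fs.copyToLocalFile(new Path("/data/remote.txt"),
                           new Path("/tmp/copy.txt"));

        fs.close();
    }
}
```

As Ralf found, setting the NameNode address alone may not be enough: the client generally needs the cluster's core-site.xml and hdfs-site.xml (or equivalent conf.set(...) calls) so that its configuration matches the cluster's, and it must be able to reach the DataNodes directly, since HDFS streams block data from them rather than through the NameNode.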
