Hi Harsh,

I've been tasked with creating a simple portal to obtain information from the 
NameNode/DataNodes of a Hadoop installation. 

I know something like this might be available from Cloudera and MapR, but the 
tech leads want to roll their own simple implementation to reduce dependencies 
when deploying. 

We'll be fixed to a single Hadoop version to maintain consistency, so API 
changes shouldn't be a problem. 

I'd appreciate it if you could confirm whether this is the "correct" path to go about this :)
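
For concreteness, here's roughly what I had in mind: a minimal sketch against 
DistributedFileSystem (the URI, port, and "hadoop" user are placeholders for 
our setup, and it assumes getDataNodeStats() is available in the Hadoop 
version we pin to):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class ClusterStatusReport {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder URI and user for our cluster
            FileSystem fs = FileSystem.get(
                    URI.create("hdfs://namenode-host:8020"), conf, "hadoop");

            if (fs instanceof DistributedFileSystem) {
                DistributedFileSystem dfs = (DistributedFileSystem) fs;
                // One DatanodeInfo per datanode, as reported by the namenode
                for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                    System.out.printf("%s: capacity=%d dfsUsed=%d remaining=%d%n",
                            dn.getHostName(), dn.getCapacity(),
                            dn.getDfsUsed(), dn.getRemaining());
                }
            }
        }
    }

If the plain FileSystem interface exposes enough for us (e.g. 
FileSystem#getStatus() in newer versions for the aggregate numbers), that 
would sidestep the DFS cast entirely, per your earlier point.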


Thanks



On Feb 13, 2012, at 17:38, Harsh J <ha...@cloudera.com> wrote:

> If you want to use the many methods of DFS, that is how you may check
> for it and use it. However, the use of DistributedFileSystem directly
> is discouraged as well (it's not an interface, and is not meant to be
> used directly by users, at least).
> 
> What are you looking to do with it exactly Michael, and which method
> inside DFS particularly interests you that FS itself does not provide?
> 
> If you can tell me your reasons, I can be a better judge, but in any
> case it could turn out to be a problem to maintain, as the framework
> gives you no guarantee of it not breaking/remaining stable in the
> future (but you are mostly okay if you are sticking to one version).
> 
> On Mon, Feb 13, 2012 at 2:59 PM, Michael Lok <fula...@gmail.com> wrote:
>> Hi folks,
>> 
>> I'm using the FileSystem class to connect to an HDFS installation.  I'm also
>> checking whether the instance is a DistributedFileSystem.
>> 
>>> FileSystem fs = FileSystem.get(uri, conf, "hadoop");
>>> 
>>> DistributedFileSystem dfs = null;
>>> 
>>> if (fs instanceof DistributedFileSystem) {
>>>     dfs = (DistributedFileSystem) fs;
>>>     ...
>>> }
>> 
>> 
>> I was wondering if there's a way to obtain an instance of the NameNode class
>> via the DistributedFileSystem, or do I need to use ClientProtocol (which
>> doesn't seem likely, as its InterfaceAudience is set to Private)?
>> 
>> 
>> Thanks!
> 
> 
> 
> -- 
> Harsh J
> Customer Ops. Engineer
> Cloudera | http://tiny.cloudera.com/about
