From your example, your hbase scripts seem to be at the bash shell level (as opposed to the JRuby hbase shell level). If that's the case, is there a reason you can't just use the hadoop script (which I believe calls FsShell) in your scripts?
Ex: $HADOOP_HOME/bin/hadoop fs -ls hdfs://127.0.0.1/hbase/.logs

Jon.

On Thu, Mar 21, 2013 at 6:20 AM, rajeshbabu chintaguntla <[email protected]> wrote:

> Hi Dev,
>
> I just want to know your opinion about having file system query support from the hbase script.
> Presently we can use the org.apache.hadoop.fs.FsShell tool from the hbase script to get basic information, such as the list of files or disk usage, from the underlying file system.
> But the user needs to pass full path details like hdfs://namenodehost/parent/child or file:///rootdir/path to the command. For this the user needs to know whether hbase is using the local file system or hdfs, and the host details, in order to form the full path (they need to check the configurations).
>
> Ex: $HBASE_HOME/bin/hbase org.apache.hadoop.fs.FsShell -ls hdfs://127.0.0.1/hbase/.logs
>
> I want to simplify this by having one more command, "fscli", in the hbase script, just like the zkcli command, so that the user can query the file system by passing commands with simple paths as arguments.
>
> Ex: $HBASE_HOME/bin/hbase fscli -ls /hbase/.logs
>
> The new fscli command will also use the FsShell tool, but we will parse the hbase.rootdir property value to get the FS details and append the simple path to it.
>
> If everyone is ok with it I will open a jira and contribute it.
>
> Thanks and Regards,
> Rajeshbabu

--
// Jonathan Hsieh (shay)
// Software Engineer, Cloudera
// [email protected]
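For reference, a minimal sketch of what the proposed fscli entry point might look like, assuming the approach described above (read hbase.rootdir, qualify bare paths against it, delegate to FsShell). The class name FsCli and the path-qualification logic are illustrative assumptions, not existing HBase code:

    // Hypothetical entry point for "hbase fscli"; not part of HBase today.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FsShell;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.util.ToolRunner;

    public class FsCli {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Read hbase.rootdir so the user does not have to know the FS scheme or host.
        Path root = new Path(conf.get(HConstants.HBASE_DIR));

        // Qualify bare paths (e.g. /hbase/.logs) against the root dir's filesystem;
        // pass everything else (flags, already-qualified URIs) through unchanged.
        String[] qualified = new String[args.length];
        for (int i = 0; i < args.length; i++) {
          if (args[i].startsWith("/")) {
            qualified[i] = new Path(root.toUri().getScheme(),
                root.toUri().getAuthority(), args[i]).toString();
          } else {
            qualified[i] = args[i];
          }
        }

        // Delegate to the stock FsShell with the rewritten arguments.
        System.exit(ToolRunner.run(conf, new FsShell(conf), qualified));
      }
    }

With something like this wired into the hbase script, `$HBASE_HOME/bin/hbase fscli -ls /hbase/.logs` would expand to the fully qualified `hdfs://.../hbase/.logs` (or `file:///...`) form before FsShell runs.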
