Hi,
You can just write a class extending FileSystem. But I fail to
understand your question about starting HDFS. If you implement your own
filesystem (using JackRabbit), then you will not need HDFS, right?
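But if you do want to run HDFS underneath, configuring a single-node
instance is mostly a matter of pointing fs.default.name at a local
namenode. Just a sketch (the file is conf/hadoop-site.xml in current
releases; key names and file locations have changed between versions,
and the localhost:9000 address is only an example):

  <configuration>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
  </configuration>

Then format the namenode with "bin/hadoop namenode -format" and start
the daemons with "bin/start-dfs.sh".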
In case you haven't seen it, there is a work-in-progress patch for a
WebDAV interface for Hadoop: https://issues.apache.org/jira/browse/HADOOP-496.
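Once the daemons are up, a small client program is an easy way to check
that the filesystem answers. A minimal smoke test along these lines
(the class name, the namenode address, and the /tmp path are just
assumptions for illustration):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HdfsSmokeTest {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // Assumption: a namenode is already running on localhost:9000.
          conf.set("fs.default.name", "hdfs://localhost:9000");
          FileSystem fs = FileSystem.get(conf);

          // Write a small file and read it back to verify HDFS responds.
          Path path = new Path("/tmp/smoke-test.txt");
          FSDataOutputStream out = fs.create(path);
          out.writeUTF("hello hdfs");
          out.close();

          FSDataInputStream in = fs.open(path);
          System.out.println(in.readUTF());
          in.close();

          fs.close();
      }
  }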
Eugeny N Dzhurinsky wrote:
Hello!
We would like to use HDFS for our software, which will later be extended
to use a cluster. For now we would just like to create an implementation
of the file system interface for JackRabbit.
We found out how to implement this using the HDFS part of Hadoop;
however, it is still not clear to us how we should configure the HDFS module.
We would like to start with a single host where the application will be
deployed, and create a cluster / add hosts to it when needed.
The question is: where should we start? =) We have an implementation of
the file system interface which uses HDFS-related classes, but how do we
configure HDFS itself and start it, to see whether the system works?
Thank you in advance!