On Tue, Feb 10, 2009 at 2:44 PM, imadhusudhanan
<[email protected]> wrote:
> I use the Apache Hadoop project as a DFS. Has anyone dealt with a similar
> JR to DFS conversion? Please explain.

Still, what do you mean by DFS? Distributed File System? How do you
"use" Apache Hadoop in your client applications, and which interface do
you use: direct filesystem access, WebDAV, the Hadoop API, etc.?

Jackrabbit mainly provides the JCR API as its interface, but it also
provides a stable, filesystem-like WebDAV mapping (exposing only
nt:file/nt:folder nodes in the repository) that can be mounted as a
file system. The backend part of Jackrabbit (persistence managers, the
data store) is optimized for performance and pure JCR usage; it is an
integral part of Jackrabbit's internal architecture. If you want to
connect existing data sources via JCR, the Jackrabbit SPI interface is
intended to make the development of such connectors/adapters simpler.
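To illustrate what that nt:file/nt:folder mapping looks like from code,
here is a minimal sketch of storing a file through the JCR API. The
embedded TransientRepository and the admin/admin credentials are just
assumptions for a quick local test; the node type names (nt:folder,
nt:file, nt:resource) and the jcr:content structure are the standard
JCR file mapping:

    import java.io.ByteArrayInputStream;
    import java.util.Calendar;
    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;
    import org.apache.jackrabbit.core.TransientRepository;

    public class JcrFileExample {
        public static void main(String[] args) throws Exception {
            // TransientRepository starts an embedded Jackrabbit instance
            // on first login and shuts it down after the last logout.
            Repository repository = new TransientRepository();
            Session session = repository.login(
                    new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                Node root = session.getRootNode();
                // nt:folder / nt:file / jcr:content (nt:resource) is the
                // standard JCR mapping for folders and files; this is the
                // same structure the WebDAV layer exposes.
                Node folder = root.addNode("docs", "nt:folder");
                Node file = folder.addNode("hello.txt", "nt:file");
                Node content = file.addNode("jcr:content", "nt:resource");
                content.setProperty("jcr:mimeType", "text/plain");
                content.setProperty("jcr:lastModified", Calendar.getInstance());
                content.setProperty("jcr:data",
                        new ByteArrayInputStream("Hello, JCR!".getBytes("UTF-8")));
                session.save();
            } finally {
                session.logout();
            }
        }
    }

Mounting the WebDAV view would then show the same /docs/hello.txt as an
ordinary file, without the client needing any JCR knowledge.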

Regards,
Alex

-- 
Alexander Klimetschek
[email protected]
