Hello to all.
I'm interested in using Hadoop as a large distributed filesystem. Right
now I'm not very interested in the Map/Reduce features. I've been
trying to find examples of what would be needed to:
1) Set up Hadoop running as a filesystem on one or more machines. Not
as the OS filesystem for the machine, just as a service running on it.
I can set it up on one machine and then add more from there once I
grasp the configuration a little better.
2) An easy example of writing a string to a file on Hadoop and then
reading it back. I've looked through the API and the examples but can't
seem to find a place to start; all the examples I've found are
Map/Reduce-oriented. A rough sketch of what I think this might look
like is below.
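
In case it helps show what I'm after, here is my best guess at item 2
using the FileSystem API. This is an untested sketch on my part: the
fs.default.name property, the hdfs://localhost:9000 URI, and the
/tmp/hello.txt path are all assumptions for a single-node setup, not
something I've confirmed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsStringExample {
    public static void main(String[] args) throws Exception {
        // Normally this picks up the cluster settings from the config
        // files on the classpath; the URI here is just my guess for a
        // single-node setup.
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000");

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/hello.txt");

        // Write a string to a file in HDFS (overwrite if it exists).
        FSDataOutputStream out = fs.create(file, true);
        out.writeUTF("Hello, HDFS!");
        out.close();

        // Read the string back.
        FSDataInputStream in = fs.open(file);
        String contents = in.readUTF();
        in.close();

        System.out.println(contents);
        fs.close();
    }
}

Is this roughly the right way to go about it, or is there a better
starting point for plain filesystem use?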
Any help would be greatly appreciated.
Thanks in advance,
-Cesar