This is where my limited knowledge of Accumulo/Hadoop shows. After running
the MapReduce examples and seeing how they define the files and
directories, my assumption was that the Hadoop FileSystem controlled what
to do with the files. Thinking about it now, I did notice that MapReduce
created a user area where it was placing files. So, using the Hadoop FS
API, how would I define a working area that Accumulo proper would know
about?
This is how I call the importDirectory
getAdmin().getConnector().tableOperations()
    .importDirectory(name(), "bulk/entities/load",
        "bulk/entities_fails/failures", false);
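For what it's worth, here is a sketch of how I imagined setting up those
directories with the Hadoop FileSystem API before the import. This is just
my guess at the setup, not anything I've confirmed Accumulo mandates; the
class name and paths are my own.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BulkDirSetup {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from core-site.xml on the classpath;
        // relative paths then resolve under the user's HDFS home
        // directory (/user/<name>), which may be the "user area"
        // MapReduce was writing into.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path load = new Path("bulk/entities/load");
        Path fail = new Path("bulk/entities_fails/failures");

        // Directory holding the RFiles to bulk-import.
        fs.mkdirs(load);

        // importDirectory expects the failure directory to exist and
        // be empty, so clear out any leftovers from a previous run.
        if (fs.exists(fail)) {
            fs.delete(fail, true); // true = recursive
        }
        fs.mkdirs(fail);

        // Print the fully qualified path to see where these actually land.
        System.out.println("load dir: " + fs.makeQualified(load));
    }
}
```

My open question is whether passing those same relative paths to
importDirectory resolves them the same way, or whether I need to hand it
fully qualified hdfs:// URIs.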
Thanks
Paul
--
View this message in context:
http://apache-accumulo.1065345.n5.nabble.com/bulk-ingest-without-mapred-tp8904p8909.html
Sent from the Users mailing list archive at Nabble.com.