Hi all,
I'm still new to Hadoop and trying to understand a couple of things.
I set up a lab with 4 machines: 2 namenodes and 2 datanodes.
One namenode is active and the other is in standby.
First question:
I set it up to work with Solr, but in its configuration I can specify only
one Hadoop endpoint, which becomes a single point of failure.
(from my /opt/solr/bin/solr.in.sh)
...
SOLR_OPTS="$SOLR_OPTS -Dsolr.directoryFactory=HdfsDirectoryFactory"
SOLR_OPTS="$SOLR_OPTS -Dsolr.lock.type=hdfs"
SOLR_OPTS="$SOLR_OPTS -Dsolr.hdfs.home=hdfs://had4:8020/solr"
...
Is there an official way to solve this problem, or would it be better to put
the namenodes behind a reverse proxy?
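For example, would pointing Solr at the logical HDFS nameservice instead of a
single host be the right approach? I'm only guessing here; the nameservice
name "mycluster" and the config directory below are placeholders, not my real
values:

SOLR_OPTS="$SOLR_OPTS -Dsolr.hdfs.home=hdfs://mycluster/solr"
SOLR_OPTS="$SOLR_OPTS -Dsolr.hdfs.confdir=/etc/hadoop/conf"

The idea being that Solr would pick up the HA settings from hdfs-site.xml in
solr.hdfs.confdir and let the HDFS client handle failover between the two
namenodes.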
Second question:
since Solr cannot use Hadoop's MapReduce functionality, what would be the
exact advantage of using HDFS instead of a regular file system?
Thanks in advance
Roberto