[ https://issues.apache.org/jira/browse/YARN-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16822076#comment-16822076 ]
Billie Rinaldi commented on YARN-9254:
--------------------------------------
+1 for patch 5. I tested and verified that Solr was using the specified HDFS
directory. I also killed the container to check whether a new container would be
able to access the stored data. I had to manually remove the write lock at
$SOLR_DATA_DIR/index/write.lock before a new container could load the HDFS
data. Once I did that, the new container came up with the same running
applications and the application template that I had registered in the first
instance. So the HDFS support is working, and I think we should open another
ticket to support SolrCloud mode, which would help address this locking issue.
Thanks for the patch, [~eyang]!
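
For reference, a minimal sketch of the manual recovery step described above, assuming the Solr index lives in HDFS and that deleting the leftover Lucene write.lock is safe once the old container is gone. The class name, argument handling, and example path are hypothetical and not part of the patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not part of YARN-9254: deletes the stale Lucene
// write.lock that a killed container leaves behind so a replacement
// container can open the HDFS-backed index.
public final class StaleSolrLockCleaner {
  public static void main(String[] args) throws Exception {
    // args[0] is the Solr data directory, e.g. the value of $SOLR_DATA_DIR
    // such as hdfs://nn:8020/solr/appcatalog/data (example path only).
    Path lock = new Path(args[0], "index/write.lock");
    FileSystem fs = lock.getFileSystem(new Configuration());
    if (fs.exists(lock)) {
      fs.delete(lock, false);
      System.out.println("Removed stale lock " + lock);
    }
  }
}
{code}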
> Externalize Solr data storage
> -----------------------------
>
> Key: YARN-9254
> URL: https://issues.apache.org/jira/browse/YARN-9254
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Eric Yang
> Assignee: Eric Yang
> Priority: Major
> Attachments: YARN-9254.001.patch, YARN-9254.002.patch,
> YARN-9254.003.patch, YARN-9254.004.patch, YARN-9254.005.patch
>
>
> The application catalog contains embedded Solr. By default, Solr data is stored
> in the temp space of the docker container. For users who want to persist Solr
> data on HDFS, it would be nice to have a way to pass the solr.hdfs.home setting
> to embedded Solr to externalize Solr data storage. This also implies passing
> Kerberos credential settings to the Solr JVM in order to access secure HDFS.
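
A rough sketch of the kind of settings the description asks for, assuming the catalog can pass JVM system properties to embedded Solr. solr.hdfs.home comes from the issue itself; the remaining property names follow the stock Solr-on-HDFS and Kerberos options, and the helper class is hypothetical, not the actual patch:

{code:java}
// Hypothetical sketch, not the YARN-9254 patch: wires embedded Solr to
// HDFS storage (and secure HDFS) purely through system properties.
public final class SolrHdfsSettings {
  private SolrHdfsSettings() {
  }

  /**
   * Externalize Solr data storage to HDFS. keytab and principal may be
   * null on an insecure cluster.
   */
  public static void apply(String hdfsHome, String keytab, String principal) {
    System.setProperty("solr.directoryFactory", "HdfsDirectoryFactory");
    System.setProperty("solr.lock.type", "hdfs");
    System.setProperty("solr.hdfs.home", hdfsHome); // e.g. hdfs://nn:8020/solr
    if (keytab != null && principal != null) {
      // Kerberos credentials so the Solr JVM can reach secure HDFS.
      System.setProperty("solr.hdfs.security.kerberos.enabled", "true");
      System.setProperty("solr.hdfs.security.kerberos.keytabfile", keytab);
      System.setProperty("solr.hdfs.security.kerberos.principal", principal);
    }
  }
}
{code}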
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]