[
https://issues.apache.org/jira/browse/LUCENE-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578658#comment-14578658
]
Robert Muir commented on LUCENE-6536:
-------------------------------------
Questions:
* What will be done to deal with the bugginess of this thing? I see many
user reports of index corruption issues. By committing it, we take
responsibility for this and it becomes "our problem". I don't want to see the
code committed to Lucene for this reason alone.
* What will be done about the performance? I am not really sure the entire
technique is viable.
Personally, I think if someone wants to do this, a better integration point is
to make it a Java 7 filesystem provider. That is really how such a filesystem
should work anyway.
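For context, the suggestion refers to the java.nio.file service-provider interface introduced in Java 7: once a FileSystemProvider for HDFS exists, any code written against Path (including a Directory implementation layered on it) works without HDFS-specific plumbing. A minimal sketch of that pattern, using the JDK's built-in zip provider as a stand-in for a hypothetical HDFS provider (the class name and file contents here are illustrative only):

```java
import java.net.URI;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class ProviderDemo {
    // Write and read a file through a pluggable FileSystemProvider.
    static String roundTrip() throws Exception {
        // The JDK ships a zip provider; a hypothetical HDFS provider would
        // plug in the same way, registered under a scheme such as "hdfs".
        Path zip = Files.createTempFile("index", ".zip");
        Files.delete(zip); // let the zip provider create it via create=true
        try (FileSystem fs = FileSystems.newFileSystem(
                URI.create("jar:" + zip.toUri()), Map.of("create", "true"))) {
            // Code written against java.nio.file.Path is provider-agnostic:
            // the same calls would hit HDFS if an "hdfs" provider were used.
            Path segment = fs.getPath("/segments_1");
            Files.writeString(segment, "segment data");
            return Files.readString(segment);
        } finally {
            Files.deleteIfExists(zip);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

The point of the pattern is that the storage backend becomes a deployment detail selected by URI scheme, rather than a dedicated Directory subclass maintained inside Lucene.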
> Migrate HDFSDirectory from solr to lucene-hadoop
> ------------------------------------------------
>
> Key: LUCENE-6536
> URL: https://issues.apache.org/jira/browse/LUCENE-6536
> Project: Lucene - Core
> Issue Type: Improvement
> Reporter: Greg Bowyer
> Labels: hadoop, hdfs, lucene, solr
> Attachments: LUCENE-6536.patch
>
>
> I am currently working on a search engine that is throughput-oriented and
> works entirely in Apache Spark.
> As part of this, I need a directory implementation that can operate on HDFS
> directly. This got me thinking: could I take the one that was worked on so
> hard for the Solr Hadoop integration?
> As such I migrated the HDFS and blockcache directories out to a lucene-hadoop
> module.
> Having done this work, I am not sure it is actually a good change. It
> feels a bit messy, and I don't like how the Metrics class gets extended and
> abused.
> Thoughts, anyone?
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]