[ https://issues.apache.org/jira/browse/HDFS-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17096852#comment-17096852 ]

Wei-Chiu Chuang commented on HDFS-15315:
----------------------------------------

Thanks

At Cloudera we did not test EC with Solr, so this is possible. It's not 
explicitly written in Cloudera's user doc, though:

[https://docs.cloudera.com/runtime/7.1.0/scaling-namespaces/topics/hdfs-ec-overview.html]
{quote}EC supports the following data processing engines:
* Hive
* MapReduce
* Spark
{quote}
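
A quick way to confirm whether a given directory actually carries an EC policy is {{DistributedFileSystem#getErasureCodingPolicy()}}. Below is a minimal sketch, not from the ticket: the {{/solr}} path stands in for wherever the Solr index lives, and it assumes {{fs.defaultFS}} points at the cluster.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class CheckEcPolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "/solr" is a hypothetical path standing in for the Solr index directory.
    Path dir = new Path("/solr");
    try (DistributedFileSystem dfs =
        (DistributedFileSystem) dir.getFileSystem(conf)) {
      // Returns null when the path carries no EC policy (plain replication).
      ErasureCodingPolicy policy = dfs.getErasureCodingPolicy(dir);
      System.out.println(policy == null
          ? dir + " is replicated (no EC policy)"
          : dir + " uses EC policy " + policy.getName());
    }
  }
}
{code}

(The shell equivalent is {{hdfs ec -getPolicy -path /solr}}.)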

 

> IOException on close() when using Erasure Coding
> ------------------------------------------------
>
>                 Key: HDFS-15315
>                 URL: https://issues.apache.org/jira/browse/HDFS-15315
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.1.1
>         Environment: XOR-2-1-1024k policy on hadoop 3.1.1 with 3 datanodes
>            Reporter: Anshuman Singh
>            Priority: Major
>
> When using Erasure Coding policy on a directory, the replication factor is 
> set to 1. Solr fails to index documents with the error _java.io.IOException: 
> Unable to close file because the last block does not have enough number of 
> replicas._ It works fine without EC (with replication factor 3). It seems 
> to be identical to this issue: https://issues.apache.org/jira/browse/HDFS-11486
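
For context, the failing sequence reduces to a plain create/write/close against an EC directory; close() is where the client asks the NameNode to complete the last block, and that is where the quoted IOException surfaces. A minimal sketch, assuming a hypothetical {{/ec}} directory that already has XOR-2-1-1024k set (e.g. via {{hdfs ec -setPolicy -path /ec -policy XOR-2-1-1024k}}); the class and file names are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EcCloseRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      // "/ec" is a hypothetical directory assumed to carry XOR-2-1-1024k;
      // files created under it are written as striped (EC) blocks.
      Path file = new Path("/ec/repro.bin");
      try (FSDataOutputStream out = fs.create(file)) {
        out.write(new byte[1024 * 1024]);
      } // close() completes the last block group; with XOR-2-1 on exactly
        // three DataNodes, an unavailable node can make it throw the
        // "Unable to close file ..." IOException quoted above.
    }
  }
}
{code}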


