I'd go with SolrJ personally. For a terabyte of data that (I'm inferring)
is PDF files and the like (aka "semi-structured documents"), you'll
need Tika to parse out the data you need to index. Doing
that by posting the files or via DIH puts all the parsing on the Solr
servers, which will work, but not optimally.

Here's something to get you started:

https://lucidworks.com/blog/indexing-with-solrj/
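
Since you say you only need the underscore-separated pieces of the filename
indexed (no file content), you don't even need Tika for that part; a small
SolrJ program that walks the file system, splits each name, and sends batches
of documents will do. Here's a rough sketch against SolrJ 5.x; the Solr URL,
collection name, directory path, and field names are placeholders you'd adapt
to your own schema:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class FilenameIndexer {
  public static void main(String[] args) throws Exception {
    // Placeholder URL/collection -- point this at your own Solr.
    SolrClient client = new HttpSolrClient("http://localhost:8983/solr/files");

    List<SolrInputDocument> batch = new ArrayList<>();
    File dir = new File("/data/pdfs"); // placeholder root; walk subdirs as needed
    for (File f : dir.listFiles()) {
      // ARIA_SSN10_0007_LOCATION_0000129.pdf -> [ARIA, SSN10, 0007, LOCATION, 0000129]
      String[] parts = f.getName().replaceFirst("\\.pdf$", "").split("_");

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", f.getName());       // field names here are made up;
      doc.addField("source_s", parts[0]);    // use whatever your schema defines
      doc.addField("ssn_s", parts[1]);
      doc.addField("seq_s", parts[2]);
      doc.addField("location_s", parts[3]);
      doc.addField("docnum_s", parts[4]);
      batch.add(doc);

      // Send docs in batches of 1,000 rather than one at a time.
      if (batch.size() >= 1000) {
        client.add(batch);
        batch.clear();
      }
    }
    if (!batch.isEmpty()) {
      client.add(batch);
    }
    client.commit();
    client.close();
  }
}

If you later decide you do want the text of the PDFs, this is also where you'd
call Tika on the client side and add the extracted body as another field,
keeping that work off the Solr servers.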

Best,
Erick

On Mon, Aug 3, 2015 at 1:56 PM, Mugeesh Husain <muge...@gmail.com> wrote:
> Hi Alexandre,
> I have 40 million files stored on a file system,
> with filenames like ARIA_SSN10_0007_LOCATION_0000129.pdf
> 1.) I have to split out the underscore-separated values from each filename, and those
> values have to be indexed into Solr.
> 2.) I do not need the file contents (text) to be indexed.
>
> You told me "the answer is Yes", but I didn't understand in what way you meant Yes.
>
> Thanks
>
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Can-Apache-Solr-Handle-TeraByte-Large-Data-tp3656484p4220527.html
> Sent from the Solr - User mailing list archive at Nabble.com.
