Dear Solr users/developers,
Hi,
I have tried to implement the Page and Post relation in single Solr Schema.
In my use case each page has multiple posts. Page and Post fields are as
follows:
Post:{post_content, owner_page_id, document_type}
Page:{page_id, document_type}
Suppose I want to query
tly
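One way a page/post relation like the one above is commonly queried in a single index is with Solr's join query parser. This is a sketch, not something stated in the (truncated) message; the field names come from the schema above, while the search term and the "content" of the filter are placeholders.

```python
# Sketch: build a Solr query that returns Page documents whose Post
# documents match a term, via the {!join} query parser in a single index.
from urllib.parse import urlencode

params = {
    "q": "document_type:page",
    # Join from Post.owner_page_id to Page.page_id (field names from the email).
    "fq": "{!join from=owner_page_id to=page_id}post_content:solr",
    "wt": "json",
}
query_string = urlencode(params)
print(query_string)
```

The resulting string would be appended to the collection's /select endpoint.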
> how to reconstruct the original stream, which very possibly would take
> up as much space as if you'd just stored the values anyway. _And_ it
> would burden everyone else who didn't want to do this with a bloated
> index.
>
> Best,
> Erick
>
> On Sun, May 8, 2016 at
Dear all,
Hi,
I was wondering, is it possible to re-index Solr 6.0 data in the case of
stored=false? I am using Solr as a secondary datastore, and for the sake of
space efficiency all the fields (except id) are set to stored=false.
Currently, due to some changes in the application business, Solr
Dear Solr Users/Developers,
Hi,
I was wondering what the correct query syntax is for searching for a sequence
of terms with a blank character in the middle of the sequence. Suppose I am
looking for the query syntax using the fq parameter. For example, suppose I
want to search for all documents having "hello
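The message is truncated, but two common ways to match a term sequence containing a space in an fq parameter are quoting the phrase or escaping the space with a backslash. This is a sketch; the field name "content" and the phrase are placeholders.

```python
# Sketch: two ways to express a phrase with an embedded space in an fq clause.
phrase = "hello world"

quoted_fq = f'content:"{phrase}"'                     # phrase query
escaped_fq = "content:" + phrase.replace(" ", "\\ ")  # escaped whitespace

print(quoted_fq)   # content:"hello world"
print(escaped_fq)  # content:hello\ world
```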
Dear all,
Hi,
I am wondering, is there any way to introduce and add a function for the
facet gap parameter? I already know there is some Date Math that can be used
(such as DAY, MONTH, etc.). I want to add some functions and try to use
them as the gap in a facet range; is that possible?
Sincerely,
Ali.
: stat_date,
start: 146027158386,
end: 1460271583864,
gap: 1
}}
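The fragment above appears to be a JSON Facet API range facet. As a sketch (the facet name is a placeholder, the values are copied from the message as printed), the request body would look like this:

```python
# Sketch: the range facet from the message above, built as a JSON Facet
# API request body.
import json

facet_request = {
    "facet": {
        "date_ranges": {          # facet name is a placeholder
            "type": "range",
            "field": "stat_date",
            "start": 146027158386,
            "end": 1460271583864,
            "gap": 1,
        }
    }
}
body = json.dumps(facet_request)
print(body)
```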
Sincerely,
On Sun, Apr 10, 2016 at 4:56 PM, Yonik Seeley <ysee...@gmail.com> wrote:
> On Sun, Apr 10, 2016 at 3:47 AM, Ali Nazemian <alinazem...@gmail.com>
> wrote:
> > Dear all Solr users/developers
Dear all Solr users/developers,
Hi,
I am going to use a Solr JSON facet range on a date field which is stored as
long millis. Unfortunately I get a Java heap space exception no matter how
much memory is assigned to the Solr Java heap! I already tested that with 2g
heap space for a Solr core with 50k documents!!
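A plausible explanation (not confirmed by the truncated thread) is the bucket count rather than the 50k documents: with a gap of 1 millisecond over an epoch-millis range, the number of range buckets is (end - start) / gap. Using the values exactly as they appear in the message:

```python
# Sketch: count the range-facet buckets implied by a gap of 1 over the
# epoch-millis range quoted in the message above.
start = 146027158386
end = 1460271583864
gap = 1

buckets = (end - start) // gap
print(buckets)  # over a trillion buckets
```

A gap measured in days or months of milliseconds would keep the bucket count tractable.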
g bulk size?
> How many indexing threads?
>
> Thanks,
> Emir
>
>
> On 11.12.2015 10:06, Ali Nazemian wrote:
>
>> I really appreciate if somebody can help me to solve this problem.
>> Regards.
>>
>> On Tue, Dec 8, 2015 at 9:22 PM, Ali Nazemian <alinazem...@
I really appreciate if somebody can help me to solve this problem.
Regards.
On Tue, Dec 8, 2015 at 9:22 PM, Ali Nazemian <alinazem...@gmail.com> wrote:
> I did that already. The situation was worse. The autocommit part makes
> Solr unavailable.
> On Dec 8, 2015 7:13 PM, &
on
the analyzing part I think it would be acceptable).
- The ConcurrentUpdateSolrClient is used in all the indexing/updating cases.
Regards.
On Tue, Dec 8, 2015 at 6:36 PM, Ali Nazemian <alinazem...@gmail.com> wrote:
> Dear Emir,
> Hi,
> There are some cases that I have soft commit in my appli
--
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
>
> On 08.12.2015 08:16, Ali Nazemian wrote:
>
>> Hi,
>> It has been a while since I have had a problem with Solr 5.2.1 and I could not
>
ocked.
>
> Thanks,
> Emir
>
> On 08.12.2015 16:19, Ali Nazemian wrote:
>
>> The indexing load is as follows:
>> - Around 1000 documents every 5 mins.
>> - The indexing speed is slow because of the complicated analyzer which is
>> applied to each document. It takes aroun
Hi,
It has been a while since I have had a problem with Solr 5.2.1 and I could
not fix it yet. The only thing that is clear to me is that when I send a bulk
update to Solr, the commit thread gets blocked! Here is the thread dump output:
"qtp595445781-8207" prio=10 tid=0x7f0bf68f5800 nid=0x5785 waiting
Dear Midas,
Hi,
AFAIK, Solr currently uses memory-mapped files backed by virtual memory for
its index. Therefore using 36GB out of 48GB of RAM for the Java heap is not
recommended. As a rule of thumb, do not allocate more than 25% of your total
memory to the Solr JVM in usual situations.
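Applying that rule of thumb to the numbers in the message (a sketch of the arithmetic, not advice beyond what the email states):

```python
# Sketch: the 25% rule of thumb applied to 48 GB of total RAM; the rest
# is left to the OS page cache for the memory-mapped index.
total_ram_gb = 48
recommended_max_heap_gb = total_ram_gb * 0.25
print(recommended_max_heap_gb)  # far below the 36 GB mentioned above
```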
About your main question, setting
ime.
>
> Thanks
> Susheel
>
> On Sun, Oct 11, 2015 at 10:01 AM, Ali Nazemian <alinazem...@gmail.com>
> wrote:
>
> > Dear Susheel,
> > Hi,
> >
> > I did check the jira issue that you mentioned but it seems its target is
> > Solr 6! Am I correct?
On Mon, Oct 12, 2015 at 12:29 PM, Ali Nazemian <alinazem...@gmail.com>
wrote:
> Thank you very much.
>
> Sincerely yours.
>
> On Mon, Oct 12, 2015 at 6:15 AM, Susheel Kumar <susheel2...@gmail.com>
> wrote:
>
>> Yes, Ali. These are targeted for Solr 6 but you
ne question, how can it tell which core a
> > SolrDocument or IndexableField is from? Seems we'd have to add an
> > attribute for that.
> >
> > The other possibly simpler thing to do is execute the join at index time
> > with an update processor.
> >
> > Ryan
I was wondering how I can meet this query requirement in Solr 5.2.1:
I have two different Solr cores, referred to as "core1" and "core2". core1 has
some fields such as field1, field2 and field3, and core2 has some other
fields such as field1, field4 and field5. I am looking for a Solr query which
can
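The requirement is truncated, but Solr's join query parser can join across two cores on the same node via fromIndex. This is a sketch, not the thread's answer; the core and field names are taken from the message, and the matching criterion (field4:foo) is a placeholder.

```python
# Sketch: a cross-core join query string for core1, restricted to documents
# whose field1 also appears in core2 documents matching field4:foo.
from urllib.parse import urlencode

params = {
    "q": "{!join fromIndex=core2 from=field1 to=field1}field4:foo",
}
qs = urlencode(params)
print(qs)
```

Note that this returns fields from the searched core only; it does not merge fields from both cores into one result document.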
Dear Mikhail,
Hi,
I want to enrich the result.
Regards
On Oct 6, 2015 7:07 PM, "Mikhail Khludnev" <mkhlud...@griddynamics.com>
wrote:
> Hello,
>
> Why do you need sibling core fields? do you facet? or just want to enrich
> result page with them?
>
> On Tue, Oc
t; <mkhlud...@griddynamics.com>
wrote:
> thus, something like [child]
>
> https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents
> can be developed.
>
> On Tue, Oct 6, 2015 at 6:45 PM, Ali Nazemian <alinazem...@gmail.com>
> wrote:
>
Hi,
I am going to implement a SearchComponent for Solr to return a document's main
keywords using the MoreLikeThis interesting terms. The main part of the
implemented component, which uses mlt.retrieveInterestingTerms by Lucene
docID, does not work for all of the documents. I mean for some of the
Dear Yonik,
Hi,
Thanks a lot for your response.
Best regards.
On Tue, Jul 21, 2015 at 5:42 PM, Yonik Seeley ysee...@gmail.com wrote:
On Tue, Jul 21, 2015 at 3:09 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Erick,
I found another thing, I did check the number of unique terms
Dears,
Hi,
I know that there are lots of tips about how to make Solr indexing
faster. Probably some of the most important ones on the client side are
batch indexing and multi-threaded indexing. There
are other important factors on the server side which I don't want to
20, 2015 at 9:32 PM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Toke and Davidphilip,
Hi,
The fieldtype text_fa has some custom language-specific normalizer and
charfilter; here is the schema.xml value related to this field:
fieldType name="text_fa" class="solr.TextField"
.
On Tue, Jul 21, 2015 at 10:00 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Erick,
Actually, faceting on this field is not a user-facing feature. I did
that to test the customized normalizer and charfilter
which I used. Therefore it is just used for the purpose of testing
Dears,
Hi,
I have a collection of 1.6m documents in Solr 5.2.1. When I facet on
the content field, this error appears after around 30s of trying to
return the results:
null:org.apache.solr.common.SolrException: Exception during facet.field: content
at
.
On Mon, Jul 20, 2015 at 8:07 PM, Toke Eskildsen t...@statsbiblioteket.dk
wrote:
Ali Nazemian alinazem...@gmail.com wrote:
I have a collection of 1.6m documents in Solr 5.2.1.
[...]
Caused by: java.lang.IllegalStateException: Too many values for
UnInvertedField faceting on field content
_sure_?
Best,
Erick
On Mon, Jul 20, 2015 at 9:32 PM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Toke and Davidphilip,
Hi,
The fieldtype text_fa has some custom language-specific normalizer and
charfilter; here is the schema.xml value related to this field:
fieldType name
Dear Lucene/Solr developers,
Hi,
I decided to develop a plugin for Solr in order to extract the main keywords
from an article. Since Lucene has already done the hard work of calculating
tf-idf scores, I decided to use that for the sake of better performance. I
know that UpdateRequestProcessor is the best
erickerick...@gmail.com
wrote:
I'm not sure what you're asking for, give us an example input/output pair?
Best,
Erick
On Tue, Jun 9, 2015 at 8:47 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear all,
Hi,
I was wondering is there any function query for converting date format in
Solr
formatted Date.
It would be simple to use:
fl=id,persian_date:dateFormat(/mm/dd,gregorian_Date)
The date format in the input is just an example.
Cheers
2015-06-10 7:24 GMT+01:00 Ali Nazemian alinazem...@gmail.com:
Dear Erick,
Hi,
Actually I want
Dear all,
Hi,
I was wondering, is there any function query for converting date formats in
Solr? If not, how can I implement such a function query myself?
--
A.Nazemian
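As far as I know, Solr has no built-in dateFormat() function query, so the conversion is usually done either client-side or in a custom ValueSourceParser. A sketch of the client-side equivalent, assuming the field holds Solr's canonical ISO-8601 date representation:

```python
# Sketch: reformat a Solr canonical date string on the client side.
from datetime import datetime

def format_solr_date(solr_date: str, fmt: str) -> str:
    dt = datetime.strptime(solr_date, "%Y-%m-%dT%H:%M:%SZ")
    return dt.strftime(fmt)

print(format_solr_date("2015-06-10T07:24:00Z", "%Y/%m/%d"))  # 2015/06/10
```

A true Persian-calendar conversion would additionally need a calendar library; the format string here is only a placeholder.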
:16 +0430
: From: Ali Nazemian alinazem...@gmail.com
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org solr-user@lucene.apache.org
: Subject: Lucene updateDocument does not affect index until restarting
solr
:
: Dear all,
: Hi,
: As a part of my code I have to update
, but the actual searching is
done via the main query with the q parameter.
-- Jack Krupansky
On Tue, Apr 14, 2015 at 4:17 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dears,
Hi,
I have strange problem with Solr 4.10.x. My problem is when I do
searching
on solr Zero date which is 0002
Dears,
Hi,
I have a strange problem with Solr 4.10.x. My problem is that when I search
on the Solr zero date, which is 0002-11-30T00:00:00Z, and more than one filter
is applied, the results become invalid. For example, consider this
scenario:
When I search for a document with
Dear all,
Hi,
As a part of my code I have to update a Lucene document. For this purpose I
used the writer.updateDocument() method. My problem is that the update does
not affect the index until Solr is restarted. Would you please tell me what
part of my code is wrong? Or what should I add in order to apply
I implemented some small code for the purpose of extracting some keywords out
of the Lucene index. I implemented it using a search component. My problem is
that when I update the Lucene IndexWriter, the Solr index on
top of it is not affected. As you can see, I did the commit part.
to closing the IndexWriter.
On Tue, Apr 7, 2015 at 6:13 PM, Ali Nazemian alinazem...@gmail.com wrote:
Dear Upayavira,
Hi,
It is just the part of my code that caused the problem. I know a
SearchComponent is not for changing the index, but for the purpose of
extracting document keywords I
is not intended for
updating the index, so it really doesn’t surprise me that you aren’t
seeing updates.
I’d suggest you describe the problem you are trying to solve before
proposing solutions.
Upayavira
On Tue, Apr 7, 2015, at 01:32 PM, Ali Nazemian wrote:
I implement a small code for the purpose
Dear all,
Hi,
I am looking for a way to filter a Lucene index with multiple conditions.
For this purpose I checked two different methods of filtering search; neither
of them worked for me:
Using BooleanQuery:
BooleanQuery query = new BooleanQuery();
String lower = "*";
String upper = "*";
for
Dear All,
Hi,
I wrote a custom UpdateRequestProcessorFactory for the purpose of extracting
interesting terms at index time and putting them in a new field. Since I use
MLT interesting terms for this purpose, I have to check whether the added
document already exists in the index or not. If it was indexed before
on previously indexed but not post-processed
documents.
Regards,
Alex.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/
On 23 March 2015 at 15:07, Ali Nazemian alinazem...@gmail.com wrote:
Dear All,
Hi,
I wrote a customize
Hi,
I was wondering, is it possible to restrict the tfq() function query to a
specific subset of the collection? Suppose I want to count all occurrences of
the term "test" in documents with fq=category:2; how can I handle such a
query with the tfq() function query? It seems applying fq=category:2 in a
select query
Dear all,
Hi,
I was wondering, is there any performance comparison available for different
Solr queries?
I mean, what is the cost of different Solr queries from the memory and CPU
points of view? I am looking for a report that could help me when I have
different alternatives for sending a single
Dear all,
Hi,
I was wondering how I can extract the k highest-ranked tf-idf terms for
each document at query time. If such information is not available in Solr
by default, how can I implement it? (Any example or similar scenario would
be appreciated.)
Best regards.
--
A.Nazemian
Hi,
I am looking for best practices on More Like This parameters. I would really
appreciate it if somebody could tell me the best values for these
parameters in an MLT query, or at least the proper methodology for finding
the best value for each of these parameters:
mlt.mintf
mlt.mindf
mlt.maxqt
Thank
:01 PM, Ali Nazemian wrote:
Dear Markus,
Hi,
Thank you very much for your response. I did check the reason why it is
not
recommended to filter by score in search query. But I think it is
reasonable to filter by score in case of finding similar documents. I
know
in both of them
On Tue, Feb 3, 2015, at 08:01 PM, Ali Nazemian wrote:
Dear Markus,
Hi,
Thank you very much for your response. I did check the reason why it is
not
recommended to filter by score in search query. But I think it is
reasonable to filter by score in case of finding similar documents. I
Dear Markus,
Would you please explain more about the maxqt parameter and the methodology
of choosing the best number of terms for this value?
Best regards.
On Wed, Feb 4, 2015 at 2:46 PM, Markus Jelsma markus.jel...@openindex.io
wrote:
Well, maxqt is easy, it is just the number of terms that compose
Hi,
I was wondering how I can limit the results of a MoreLikeThis query by the
score value instead of filtering them by document count.
Thank you very much.
--
A.Nazemian
Dear Markus,
Hi,
Thank you very much for your response. I did check the reason why it is not
recommended to filter by score in a search query. But I think it is
reasonable to filter by score in the case of finding similar documents. I know
in both of them (simple search query and mlt query) vsm of
/TFIDFSimilarity.html
Koji
--
http://soleami.com/blog/comparing-document-classification-functions-of-
lucene-and-mahout.html
On 2015/02/03 5:39, Ali Nazemian wrote:
Dear Erik,
Thank you for your response. Would you please tell me why this score could
be higher than 1? While cosine
is computed, use Solr’s debug=true mode to see the explain details in the
response.
Erik
On Feb 2, 2015, at 10:49 AM, Ali Nazemian alinazem...@gmail.com wrote:
Hi,
I was wondering what is the range of score is brought by more like this
query in Solr? I know that the Lucene
Hi,
I was wondering, what is the range of the score returned by a More Like This
query in Solr? I know that Lucene uses cosine similarity in the vector
space model for calculating similarity between two documents. I also know
that cosine similarity is between -1 and 1, but the fact that I don't
, Jan 12, 2015 at 1:01 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Jack,
Thank you very much.
Yeah, I was thinking of a function query for sorting, but I have two
problems in this case: 1) a function query does the processing at query time,
which I don't want; 2) I also want to have
very much.
On Tue, Jan 13, 2015 at 4:21 PM, Jack Krupansky jack.krupan...@gmail.com
wrote:
A function query or an update processor to create a separate field are
still your best options.
-- Jack Krupansky
On Tue, Jan 13, 2015 at 4:18 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Markus
Hi everybody,
I am going to add some analysis to Solr at index time. Here is what I
am considering:
Suppose I have two different fields in my Solr schema, field "a" and field
"b". I am going to use the created inverted index in a way that some terms
are considered as important ones and
://lucene.apache.org/core/4_10_3/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
And to use your custom similarity class in Solr:
https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements#OtherSchemaElements-Similarity
-- Jack Krupansky
On Sun, Jan 11, 2015 at 9:04 AM, Ali
UpdateRequestProcessors? They have access to the full
document when it is sent and can do whatever they want with it.
Regards,
Alex.
Sign up for my Solr resources newsletter at http://www.solr-start.com/
On 11 January 2015 at 10:55, Ali Nazemian alinazem...@gmail.com wrote:
Dear Jack
of arbitrary terms,
using the tf, mul, and add functions.
See:
https://cwiki.apache.org/confluence/display/solr/Function+Queries
-- Jack Krupansky
On Sun, Jan 11, 2015 at 10:55 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Jack,
Hi,
I think you misunderstood my need. I don't want
Hi,
I was wondering, what are the hardware requirements for indexing 500 million
documents in Solr? Suppose the maximum number of concurrent users at peak
time would be 20.
Thank you very much.
--
A.Nazemian
the SignatureUpdateProcessorFactory for your usecase,
just don't configure the signatureField to be the same as your uniqueKey
field.
configure some other field name (i.e. signature) instead.
: Date: Tue, 14 Oct 2014 12:08:26 +0330
: From: Ali Nazemian alinazem...@gmail.com
: Reply-To: solr-user@lucene.apache.org
on the business level?
Regards,
Alex
On 22/10/2014 7:27 am, Ali Nazemian alinazem...@gmail.com wrote:
The problem is that when I partially update some fields of a document, the
signature becomes useless! Even if the updated fields are not included in
the signatureField!
Regards.
On Wed
... And then, from your Eclipse import existing java project, and
select
the directory where you placed lucene-solr-trunk
On Sun, Oct 12, 2014 at 7:09 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Hi,
I am going to import solr source code to eclipse for some development
purpose
Dear all,
Hi,
I was wondering how I can mark some documents as duplicates (just marking
for future usage, not deleting) based on a hash combination of some
fields. Suppose I have 2 fields named url and title; I want to create a
hash based on url+title and send it to another field named signature. If I
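The marking idea above can be sketched client-side: compute a hash over url + title (mirroring what SignatureUpdateProcessorFactory does server-side) and store it in a separate signature field so duplicates can be found without deleting anything. The URLs and titles here are placeholders.

```python
# Sketch: an MD5 signature over url + title, stored in a "signature" field.
import hashlib

def signature(url: str, title: str) -> str:
    return hashlib.md5((url + title).encode("utf-8")).hexdigest()

a = signature("http://example.com/news/1", "Some headline")
b = signature("http://example.com/news/1", "Some headline")
print(a == b)  # identical url+title pairs collide on purpose
```

Faceting or grouping on the signature field then surfaces the duplicate groups.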
Hi,
I am going to import the Solr source code into Eclipse for some development
purposes. Unfortunately, every tutorial that I found for this purpose is
outdated and did not work. So would you please give me some hints about how
I can import the Solr source code into Eclipse?
Thank you very much.
--
A.Nazemian
Dear all,
Hi,
I am going to do a partial update on a field that does not have any value.
Suppose I have a document with document id (unique key) '12345' and a field
read_flag which was not indexed in the first place, so the read_flag field
for this document has no value. After I did a partial update to
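For reference, a partial ("atomic") update like the one described can be sketched in Solr's JSON update format: only the id and the fields wrapped in modifiers are sent. The field names follow the message above; whether the update succeeds on a previously valueless field is exactly what the (truncated) thread is asking.

```python
# Sketch: an atomic update document; "set" replaces (or creates) the value.
import json

update = [{
    "id": "12345",
    "read_flag": {"set": True},
}]
payload = json.dumps(update)
print(payload)
```

Note that atomic updates require the document's other fields to be stored (or have docValues), since Solr re-indexes the whole document internally.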
://www.linkedin.com/groups?gid=6713853
On 6 October 2014 03:40, Ali Nazemian alinazem...@gmail.com wrote:
Dear all,
Hi,
I am going to do partial update on a field that has not any value.
Suppose
I have a document with document id (unique key) '12345' and field
read_flag which does not index
: https://www.linkedin.com/groups?gid=6713853
On 6 October 2014 11:23, Ali Nazemian alinazem...@gmail.com wrote:
Dear Alex,
Hi,
LOL, yeah I am sure. You can test it yourself. I did that on the default
schema
too. The results are the same!
Regards.
On Mon, Oct 6, 2014 at 4:20 PM, Alexandre
--
http://soleami.com/blog/comparing-document-classification-functions-of-
lucene-and-mahout.html
(2014/09/29 4:25), Ali Nazemian wrote:
Dear all,
Hi,
I was wondering how I can implement Solr boosting of words from a specific
list of important words. I mean I want to have a list of important
.
On Tue, Sep 30, 2014 at 7:07 PM, Ali Nazemian alinazem...@gmail.com wrote:
Dear Koji,
Hi,
Thank you very much.
Do you know any example code for UpdateRequestProcessor? Anything would be
appreciated.
Best regards.
On Tue, Sep 30, 2014 at 3:41 AM, Koji Sekiguchi k...@r.email.ne.jp
wrote:
Hi
Dear all,
Hi,
Right now I am facing a strange problem related to the SolrJ client:
When I use only incremental partial updates, they work fine. When I only add
child documents, that works perfectly and
the child documents are added successfully. But when I have both of
I also checked both the Solr log and the Solr console. There is no error
there; it seems that everything is fine! But actually there is no child
document after executing the process.
On Mon, Sep 29, 2014 at 1:47 PM, Ali Nazemian alinazem...@gmail.com wrote:
Dear all,
Hi,
Right now I face
Dear all,
Hi,
I was wondering how I can implement Solr boosting of words from a specific
list of important words. I mean I want to have a list of important words and
tell Solr to score documents based on the weighted sum of these words. For
example, let the word "school" have a weight of 2 and the word "president" have
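One way the weighted-sum idea above is often approximated (a sketch, not the thread's answer) is a boost query the client builds: each important word contributes its weight via a boosted clause. The weight 2 for "school" comes from the message; the weight 3 for "president" and the field name "content" are made up since the message is truncated.

```python
# Sketch: turn a word -> weight map into a boosted boolean query string.
weights = {"school": 2, "president": 3}

bq = " ".join(f"content:{word}^{w}" for word, w in weights.items())
print(bq)  # content:school^2 content:president^3
```

The string could be passed as a bq parameter with the dismax/edismax parsers so matching documents get the extra weighted score.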
Dear all,
Hi,
I was wondering how I can use SolrJ for sending nested documents to Solr.
Unfortunately I did not find any tutorial for this purpose. I would really
appreciate it if you could guide me through that. Thank you very much.
Best regards.
--
A.Nazemian
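The JSON shape that SolrJ's addChildDocument() produces can also be sent directly to the update handler; child documents go under the reserved _childDocuments_ key. This is a sketch with placeholder field values, reusing the page/post field names from earlier in the archive.

```python
# Sketch: a parent document with nested children in Solr's JSON update format.
import json

parent = {
    "id": "page-1",
    "document_type": "page",
    "_childDocuments_": [
        {"id": "post-1", "document_type": "post", "post_content": "first post"},
        {"id": "post-2", "document_type": "post", "post_content": "second post"},
    ],
}
payload = json.dumps([parent])
print(payload)
```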
data world where we're
talking petabyte scale,
having HDFS as the underpinning opens up possibilities for working on data
that were
difficult/impossible with Solr previously.
Best,
Erick
On Tue, Aug 5, 2014 at 9:37 PM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Erick,
I
, Ali Nazemian alinazem...@gmail.com
wrote:
Thank you very much. But why should we go for Solr distributed with
Hadoop?
There is already SolrCloud, which is quite applicable in the case of a big
index. Is there any advantage to sending indexes over MapReduce that
SolrCloud cannot provide
Dear all,
Hi,
I was wondering how I can manage to index comments in Solr. Suppose I am
going to index a web page that has news content and some comments that
are posted by people at the end of the page. How can I index these
comments in Solr, considering the fact that I am going to do some
these
comments? What is the document granularity?
Best regards.
On Wed, Aug 6, 2014 at 1:29 PM, Gora Mohanty g...@mimirtech.com wrote:
On 6 August 2014 14:13, Ali Nazemian alinazem...@gmail.com wrote:
Dear all,
Hi,
I was wondering how can I mange to index comments in solr? suppose I am
/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
On Wed, Aug 6, 2014 at 11:18 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Gora,
I think you misunderstood my problem. Actually, I used Nutch for crawling
websites, and my problem is on the indexing side
Dear all,
Hi,
I changed Solr 4.9 to write its index and data on HDFS. Now I am going to
connect to that data from outside of Solr to change some of the
values. Could somebody please tell me how that is possible? Suppose I am
using HBase over HDFS to do these changes.
Best regards.
--
Actually, I am going to do some analysis on the Solr data using MapReduce.
For this purpose it might be necessary to change some parts of the data or
add new fields from outside Solr.
On Tue, Aug 5, 2014 at 5:51 PM, Shawn Heisey s...@elyograg.org wrote:
On 8/5/2014 7:04 AM, Ali Nazemian wrote:
I
luck! I'm 99.99% certain that's going to cause
you endless grief.
Best,
Erick
On Tue, Aug 5, 2014 at 9:55 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Actually I am going to do some analysis on the solr data using map
reduce.
For this purpose it might be needed to change some part
purposes
such as analysis. So why would we go for HDFS in the case of analysis if we
want to use SolrJ for this purpose? What is the point?
Regards.
On Wed, Aug 6, 2014 at 8:59 AM, Ali Nazemian alinazem...@gmail.com wrote:
Dear Erick,
Hi,
Thank you for you reply. Yeah I am aware that SolrJ is my last
and not mere ETL,
which people do all the time with batch scripts, Java extraction and
ingestion connectors, and cron jobs.
Give it a shot and let us know how it works out.
-- Jack Krupansky
-Original Message- From: Ali Nazemian
Sent: Sunday, July 27, 2014 1:20 AM
To: solr-user
have to define
something (probably with an Accumulo iterator) to import into Solr when
inserting new data.
Regards.
On Fri, Jul 25, 2014 at 12:59 PM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Jack,
Actually, I am going to do a benefit-cost analysis of in-house development
versus going for Sqrrl
that integrated Lucene support of
Sqrrl Enterprise?
-- Jack Krupansky
-Original Message- From: Ali Nazemian
Sent: Thursday, July 24, 2014 3:07 PM
To: solr-user@lucene.apache.org
Subject: Re: integrating Accumulo with solr
Dear Jack,
Thank you. I am aware of datastax but I am looking
want? Is there a reason you're thinking
of using both databases in particular?
On Wed, Jul 23, 2014 at 5:17 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear All,
Hi,
I was wondering is there anybody out there that tried to integrate Solr
with Accumulo? I was thinking about using
know if you have any other questions.
Joe
On Thu, Jul 24, 2014 at 4:07 AM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Joe,
Hi,
I am going to store the crawled web pages in Accumulo as the main storage
part of my project, and I need to feed this data to Solr for indexing and
user
direction. And
it has Hadoop and Spark integration as well.
See:
http://www.datastax.com/what-we-offer/products-services/
datastax-enterprise
-- Jack Krupansky
-Original Message- From: Ali Nazemian
Sent: Thursday, July 24, 2014 10:30 AM
To: solr-user@lucene.apache.org
Subject
Dear All,
Hi,
I was wondering, is there anybody out there who has tried to integrate Solr
with Accumulo? I was thinking about using Accumulo on top of HDFS and using
Solr to index the data inside Accumulo. Do you have any idea how I can do
such an integration?
Best regards.
--
A.Nazemian
...@snapdeal.com wrote:
:
: Please look at https://wiki.apache.org/solr/Atomic_Updates
:
: This does what you want just update relevant fields.
:
: Thanks,
: Himanshu
:
:
: On Tue, Jul 8, 2014 at 1:09 PM, Ali Nazemian alinazem...@gmail.com
: wrote:
:
: Dears,
: Hi
Dears,
Hi,
According to my requirements I need to change the default behavior of Solr
of overwriting the whole document on unique-key duplication. I am going to
change it so that only part of the document (some fields) is overwritten and
the other parts of the document (other fields) remain unchanged. First of all I
On Tue, Jul 8, 2014 at 1:09 PM, Ali Nazemian alinazem...@gmail.com
wrote:
Dears,
Hi,
According to my requirement I need to change the default behavior of Solr
for overwriting the whole document on unique-key duplication. I am going
to
change that the overwrite just part of document (some
and add your preserve-field functionality.
Could even be a nice contribution.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr
proficiency
On Tue, Jul 1, 2014 at 6:50 PM, Ali Nazemian alinazem...@gmail.com
://www.solr-start.com/ - Accelerating your Solr
proficiency
On Mon, Jul 7, 2014 at 2:08 PM, Ali Nazemian alinazem...@gmail.com
wrote:
Dears,
Is there any other way that I can do that?
I mean, if you look at my main problem again you will find that I have
two types of fields in my
On Mon, Jul 7, 2014 at 4:32 PM, Ali Nazemian alinazem...@gmail.com
wrote:
Updating documents will add some extra time to the indexing process. (I send
the documents via Apache Nutch.) I prefer to keep indexing as fast as
possible.
On Mon, Jul 7, 2014 at 12:05 PM, Alexandre Rafalovitch
arafa
website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr
proficiency
On Mon, Jul 7, 2014 at 4:48 PM, Ali Nazemian alinazem...@gmail.com
wrote:
Dear Alexande,
What if I use ExternalFileField for the fields that I don't want to be
changed
I think this will not improve indexing performance, but it could be a
solution for using HDFS HA with a replication factor. I am not
sure about that, though.
On Mon, Jul 7, 2014 at 12:53 PM, search engn dev sachinyadav0...@gmail.com
wrote:
Currently i am exploring hadoop with solr,
Any suggestion would be appreciated.
Regards.
On Mon, Jun 30, 2014 at 2:49 PM, Ali Nazemian alinazem...@gmail.com wrote:
Hi,
I used Solr 4.8 for indexing the web pages that come from Nutch. I know
that Solr's deduplication operation works on the uniqueKey field, so I set
that to the URL field